Child Sexual Abuse Material Policy and Animal Welfare Policy

We have a zero-tolerance policy towards any material that sexualizes, sexually exploits, or endangers children on our platform. If we find or are made aware of it, we will report it.

DaToRo Media is deeply committed to fighting the spread of child sexual abuse material (CSAM). This includes visual media, text, and illustrated or computer-generated images. We view it as our responsibility to ensure our platform is not used for sharing or consuming CSAM, and to deter users from searching for it.

Any content featuring or depicting a child (real, fictional, or animated) or promoting child sexual exploitation is strictly forbidden on our platform and is a severe violation of our Terms and Conditions. Written content (including, but not limited to, comments, content titles, content descriptions, messages, usernames, or profile descriptions) that promotes, references, or alludes to the sexual exploitation or abuse of a child is also strictly prohibited.

For the purposes of this policy, a child is any person under eighteen (18) years of age. We report all cases of apparent CSAM to the National Center for Missing and Exploited Children (NCMEC), a nonprofit organization which operates a centralized clearinghouse for reporting incidents of online sexual exploitation of children. NCMEC makes reports available to appropriate law enforcement agencies globally.

Additionally, at DaToRo Media, we endorse and stand behind the objectives of the Voluntary Principles to Counter Online Child Sexual Exploitation and Abuse, a collaborative initiative launched by the Five Country Ministerial* (5 Eyes) and backed by industry-leading tech companies to combat online child sexual exploitation and abuse. While some bad actors seek to exploit advances in technology and the digital world, we believe that robust, efficient, and flexible policies, together with participation in and support for global cross-sector collaboration, can effectively eradicate the spread of online abuse.

If you encounter child sexual abuse material on DaToRo Media’s websites (see our Terms and Conditions), please report it to us at support@datoromedia.com.

Anyone can report potential violations of this policy.

For more information on how to report content, see the section titled “How Can You Help Us”.

All complaints and reports to DaToRo Media are kept confidential and are reviewed by human moderators who work swiftly to handle the content appropriately. If you believe a child is in imminent danger, please also alert your local law enforcement authorities immediately.  

* The Five Country Ministerial is made up of the Homeland Security, Public Safety and Immigration Ministers of Australia, Canada, New Zealand, the United Kingdom, and the United States, who gather annually to collaborate on meeting common security challenges.

  If you encounter child sexual abuse material online, please report it to us and alert your local law enforcement authorities immediately.


Guideline

DO NOT post material (whether visual, audio or written content) that*:

  • Features, involves, or depicts a child.

  • Sexualizes a child. This includes content that features, involves, or depicts a child (including any illustrated, computer-generated, or other forms of realistic depictions of a human child) engaged in sexually explicit conduct or engaged in sexually suggestive acts.

* This is an indicative list and is not exhaustive. For a more detailed description, please review our Terms and Conditions, under the section entitled “Prohibited Uses”. DaToRo Media reserves the right at all times to determine whether content is appropriate and in compliance with our Terms and Conditions, and may, without prior notice and in its sole discretion, remove content at any time.


Enforcement

We have strict policies, operational mechanisms, and technologies in place to tackle and take swift action against CSAM. We also cooperate with law enforcement investigations and promptly respond to valid legal requests to assist in combating the dissemination of CSAM on our platform.

Our team of human moderators works around the clock to review all uploaded content and prevent any content that may violate our CSAM or other policies from appearing on our platform. Additionally, when we are alerted to an actual or potential instance of CSAM appearing on the platform, we remove and investigate the content and report any material identified as CSAM. As part of our ongoing efforts, we regularly audit our websites to update and expand our list of banned search words, titles, and tags, to ensure our community remains safe, inclusive, diverse, and free from abusive and illegal content.

In conjunction with our team of human moderators and regular audits of our platform, we also rely on innovative industry-standard technical tools to assist in identifying, reporting, and removing CSAM and other types of illegal content from our platform. We use automated detection technologies as added layers of protection to keep CSAM off our platform.

These technologies include:

  • YouTube’s CSAI Match, a tool that assists in identifying known child sexual abuse videos

  • Microsoft’s PhotoDNA, a tool that aids in detecting and removing known images of child sexual abuse

  • Google's Content Safety API, a cutting-edge artificial intelligence (AI) tool that scores and prioritizes content based on the likelihood that it contains illegal imagery, to assist reviewers in detecting unknown CSAM

  • Safer, Thorn's comprehensive CSAM detection tool, used to keep platforms free of abusive material

  • Instant Image Identifier, a tool from the Centre for Expertise on Online Child Sexual Abuse (EOKM), commissioned by the European Commission, that detects known child abuse imagery using a triple-verified database

  • MediaWise® service from Vobile®, a state-of-the-art fingerprinting software and database that scans all new user uploads to help prevent previously identified offending content from being re-uploaded

  • Safeguard, our proprietary image fingerprinting and recognition technology, designed to combat both child sexual abuse imagery and the distribution of non-consensual intimate images, and to help prevent the re-uploading of that content to our platform

We also utilize age estimation capabilities to analyze content uploaded to our platform, using a combination of internal proprietary software and the Microsoft Azure Face API, in an effort to strengthen the varying methods we use to prevent the upload and publication of potential or actual CSAM.
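
For illustration only, the sketch below shows, in simplified form, how layered checks of this kind typically fit together: a fingerprint match against previously identified material, followed by an age-estimation signal used to prioritize human review. It is a minimal, hypothetical example; the function names, the known_bad_fingerprints set, and the age_estimator callable are illustrative assumptions, and it does not reflect the actual implementation of any of the vendor tools named above (which rely on perceptual fingerprinting rather than the exact hash used here).

```python
# Illustrative sketch only: a simplified, hypothetical version of the layered
# screening described above. Production systems rely on the vendor tools named
# in this policy and on perceptual fingerprints that survive re-encoding; the
# exact SHA-256 hash used here is only a stand-in.

import hashlib
from dataclasses import dataclass
from typing import Callable, Set


@dataclass
class ScreeningResult:
    blocked: bool              # upload rejected outright
    needs_human_review: bool   # flagged for priority review by moderators
    reason: str


def fingerprint(data: bytes) -> str:
    """Stand-in for a perceptual image/video fingerprint."""
    return hashlib.sha256(data).hexdigest()


def screen_upload(
    data: bytes,
    known_bad_fingerprints: Set[str],
    age_estimator: Callable[[bytes], float],
    age_threshold: float = 25.0,
) -> ScreeningResult:
    # Layer 1: match against fingerprints of previously identified material.
    if fingerprint(data) in known_bad_fingerprints:
        return ScreeningResult(blocked=True, needs_human_review=False,
                               reason="matched known fingerprint")

    # Layer 2: age estimation as a signal for prioritizing human review.
    if age_estimator(data) < age_threshold:
        return ScreeningResult(blocked=False, needs_human_review=True,
                               reason="low estimated age")

    # Everything else still goes through routine human moderation.
    return ScreeningResult(blocked=False, needs_human_review=True,
                           reason="routine moderation queue")
```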

Together, these tools play a fundamental role in our shared fight against the dissemination of CSAM on our platform, as well as our mission to assist in collective industry efforts to eradicate the horrendous global crime that is online child sexual exploitation and abuse.


How Can You Help Us

If you believe you have come across CSAM, or any other content that otherwise violates our Terms and Conditions, we strongly encourage you to alert us immediately.

If you are the victim or have first-hand knowledge that content violates our CSAM policy, please report the content to us. Please include all relevant URLs for the content in question, and we will address your request confidentially and remove the content expeditiously.

Anyone can report violations of this policy, whether they have an account on our platform or not.


Consequences for Violating this Policy

We have a zero-tolerance policy towards any content that involves a child or constitutes child sexual abuse material. All child sexual abuse material that we identify or are made aware of results in the immediate removal of the content in question and the banning of its uploader. We report all cases of apparent CSAM to the National Center for Missing and Exploited Children.


Animal Welfare Policy

DaToRo Media is a content-hosting and sharing platform for consenting adult use and entertainment only. At DaToRo Media, we denounce the involvement of animals, in any way, in any abusive and/or sexual context.

That is why all forms of animal* abuse and cruelty, whether physical or psychological, are prohibited. More specifically, we do not allow any visual or written content involving animal abuse, including “crushing.” We also prohibit content that involves or advocates for the sexual exploitation of animals, including depicting an animal as the object of sexual interest, and/or contact with an animal for a sexual purpose, including “bestiality.”

* For the purposes of the present guideline, an 'animal' is any actual non-human animal, or any true-to-life or realistic representation or depiction (including animated) of a non-human animal.


Guidelines 

The following guidelines apply to audio, written, and visual content, whether actual, simulated, or animated.

Do NOT post any content (including animated) that features, depicts, advocates for, or promotes:

  • Any sexual act or contact in sexual context, between a human and an animal, whether visual, verbal, or written.

  • The placement of animals by humans within cruel or unusual confinement, regardless of whether humans appear in the content.

  • The depiction or appearance of emotional distress caused to an animal by a human.

  • Forcing animals into an act (violent or sexual) with a human or other animal.

  • Sexual arousal by, from, or with animals, whether visual, verbal, or written. 

  • Sexual activity while in proximity of an animal where the animal appears to be the primary focus and/or the object of arousal for the individual(s).

  • Subjecting animals to any type of bodily harm.

  • Animal and insect crushing.

Enforcement

We are steadfast in our commitment to protecting the safety of our users. As a user-generated content hosting platform, DaToRo Media relies on technology, our team of human moderators, and our wider community of users to help identify violative content.

  24/7 Human Moderation Team

Our team of moderators and support staff works 24 hours a day, 7 days a week to review all uploaded content for violations of our Terms and Conditions, address user concerns, and remove any content that we identify, or are made aware of, and deem to be in violation of our policies and/or Terms and Conditions.

  Community Oversight and Detection

If you come across content, users, or comments that you think may violate this policy or otherwise be harmful to the community, you may report that content to us for review.

  Automated Tools

Content that is identified and removed for violating our Animal Welfare policy may be digitally fingerprinted using both the MediaWise® service from Vobile® and Safeguard, our own proprietary digital fingerprinting software. Digitally fingerprinting unauthorized content serves as an added layer of protection against further distribution of the content on our platform.
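
As a rough illustration of the fingerprint-and-block flow described above, the following is a minimal, hypothetical sketch assuming a simple in-memory set of fingerprints; the actual MediaWise® and Safeguard systems are proprietary and use perceptual fingerprints that survive re-encoding, unlike the exact hash shown here.

```python
# Minimal, hypothetical sketch of the fingerprint-and-block flow described
# above; the real MediaWise® and Safeguard systems are proprietary and use
# perceptual fingerprints rather than the exact hash shown here.

import hashlib

# Hypothetical in-memory store of fingerprints of previously removed content.
removed_content_fingerprints = set()


def register_removed_content(data: bytes) -> None:
    """Record a fingerprint when content is removed for violating this policy."""
    removed_content_fingerprints.add(hashlib.sha256(data).hexdigest())


def is_blocked_reupload(data: bytes) -> bool:
    """Check whether a new upload matches previously removed content."""
    return hashlib.sha256(data).hexdigest() in removed_content_fingerprints
```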


Consequences for Violating this Policy  

In all cases where we identify or are made aware of content that may violate our Terms and Conditions and Community Guidelines, our dedicated team works swiftly to review and remove any violative content.

Depending on the findings, our team may also:

  • Utilize sophisticated industry-standard tools to fingerprint the content in question in order to block potential future uploads of the same content to our sites; AND

  • Suspend or permanently terminate the associated user’s account, where appropriate.

If you believe you have come across content that may otherwise violate our Terms and Conditions, please reach out to us at support@datoromedia.com.