As prominent social media platforms from Facebook to Twitter to YouTube wrestle with how to stop jihadists, neo-Nazis and other extremists from disseminating dangerous content online without infringing on users' constitutional rights, a lesser-known but steadily growing platform has just rolled out a new system that it believes strikes that ever-challenging balance.

“What we are seeing is that the mainstream networks are not being consistent in their enforcement. The moderation teams make tons of mistakes and the policies are not in line with the First Amendment,” Bill Ottman, CEO and founder of the Connecticut-headquartered Minds – which has grown to around 2 million users since its 2015 launch – told Fox News. “What we have is an advanced system with the users themselves in the process. People want to feel involved.”

While Minds does have an automated flagging system for suspect content – a bot called Steward – Ottman stressed the need for human filters so that A.I. and algorithms “can’t get out of control.” But the greatest need on social media platforms, he underscored, is giving users the ability to appeal any violations leveled against them, whether flagged by other users or by automated systems.


Thus, last month the platform rolled out a “Jury System.”

“Every time a post is moderated – such as being marked as Not Safe for Work (NSFW) or spam – the user will be notified of the specific action and provided with the ability to appeal and give additional context as to why the decision should be changed,” Ottman explained. “The appeal will then be randomly sent to 12 unique active users on the site who are not subscribed to the reported channel. These users will be given the choice to participate, pass or opt out of all future jury events. If a juror passes or opts out, another user will be notified until 12 have joined the jury and voted on the appeal.”
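Mechanically, the appeal flow Ottman describes amounts to a sampling loop over eligible users. The sketch below is a hypothetical illustration of that loop, not Minds' actual code; the `User` fields and the `ask_user` stub are assumptions for the sake of a runnable example.

```python
import random
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: int
    is_active: bool = True
    subscriptions: set = field(default_factory=set)  # channel IDs the user follows
    opted_out: bool = False  # has opted out of all future jury events

def ask_user(user):
    # Stand-in for the real notification/response flow; in practice
    # this would wait for the user to answer a jury invitation.
    return random.choice(["join", "pass", "opt_out"])

def select_jury(users, reported_channel, jury_size=12):
    """Randomly poll eligible users until a full jury accepts.

    Per the rules described above: jurors must be active users who are
    not subscribed to the reported channel, and each candidate may
    participate, pass, or opt out of all future jury events.
    """
    candidates = [
        u for u in users
        if u.is_active and not u.opted_out
        and reported_channel not in u.subscriptions
    ]
    random.shuffle(candidates)
    jury = []
    for user in candidates:
        choice = ask_user(user)
        if choice == "opt_out":
            user.opted_out = True
        elif choice == "join":
            jury.append(user)
            if len(jury) == jury_size:
                return jury
    return None  # too few willing jurors; the appeal would wait for more
```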

Minds – which has in the past attracted some controversy in the tech world for being too lax in its handling of perilous postings – also operates a “strike offense” system, which can likewise be appealed. The first two strikes are warnings for those who violate codes of conduct; on the third, the offending channel is either marked as NSFW or banned from the site.
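That escalation logic can be summarized in a few lines. The following is a hypothetical sketch, not the platform's implementation; the `violation_type` field and the NSFW-versus-ban rule are assumptions noted in the comments.

```python
def apply_strike(account):
    """Escalate per the three-strike policy described above.

    Strikes one and two are warnings; on the third, the channel is
    either marked NSFW or banned. Every outcome remains appealable
    through the jury system.
    """
    account["strikes"] += 1
    if account["strikes"] <= 2:
        return "warning"
    # Assumption: NSFW-type violations demote rather than ban.
    if account["violation_type"] == "nsfw":
        return "marked_nsfw"
    return "banned"
```

Under these assumptions, `apply_strike({"strikes": 2, "violation_type": "spam"})` returns `"banned"`, since the third strike on a non-NSFW violation removes the account.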


Those who disseminate spam with malicious content, or who engage in illegal acts such as terrorism, fraud, inciting violence, extortion or revenge porn, face an immediate ban.

In the vast majority of cases, Ottman underscored, the initial rulings are upheld, but the ultimate goal is to set a foundation for “digital democracy.”

Moreover, Minds bills itself as a community-owned and -run alternative pulpit that shields its users from algorithmic manipulation, demonetization, data collection and surveillance. It also serves posts chronologically, rather than through a feed opaquely curated for the user.
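For contrast with curated feeds, a purely chronological feed is straightforward to express. A minimal sketch, with the post field name assumed:

```python
def chronological_feed(posts):
    # Newest first: no ranking model, no engagement weighting,
    # no personalization -- just the timestamp.
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)
```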


In keeping with that quest for transparency, all of the platform’s moderation actions are open source, and its algorithms and code are freely available for anyone to inspect. Most other mainstream platforms, Ottman said, operate under closed code to maintain a degree of control over the market and the community.

When it comes to the bigger picture, Jason Glassberg, co-founder of Casaba Security and a former cybersecurity expert at Ernst & Young and Lehman Brothers, told Fox News that weighing the right to free speech against the very real dangers of extremist viewpoints is indeed a complex problem, and one that will take the larger social media companies a long time to solve.

“We’ve entered a new digital age where anyone has the ability to publish or broadcast their ideas to hundreds, thousands or millions of people instantly with very little oversight or hindrance,” he said. “However, while many people today are touting tougher regulation as the answer, what they fail to grasp is that it is not the platforms that are the problem – it is the technology.”

Glassberg noted that if the federal government does “crack down” on Facebook and penalize Twitter and YouTube, it may temporarily tamp down some of the vitriol, but it won’t end it.

“Other sites will pop up and the hate will go back online. It’s a game of whack-a-mole. For every site you try to regulate, ban or shut down, a dozen new ones are waiting to take its place,” he added. “Social media networks are experimenting with AI, but this technology is still in the early stages of development.”


From Ottman’s vantage point, this is precisely why more human interaction needs to be part of social media’s moderation future.

“It’s going to take many years for A.I. to detect the language in certain contexts; as it stands now, people get banned for merely mentioning a subject, not endorsing or spreading it,” he added. “A single word can mean many different things. We still need to be focused on the First Amendment.”