It’s me again, back to drop some food for thought. I’ve been wondering how an online platform could tackle some of the issues we’ve seen in today’s media landscape. Check out this video and watch till the end:
Pretty interesting, right? Let’s give props to this guy for his patience; he’s got loads of it! What really caught my eye is the comment about how social media debates work. He’s onto something—often, it turns into a shouting match where the loudest, most obnoxious voices drown everyone out. Meanwhile, decent folks who actually know what they’re talking about usually don’t want to waste their time arguing with trolls.
So, can a platform like Olas help fix this? Some people are already flagging bad behavior in the comments, which shows how we can hold people accountable. This could lay the groundwork for a crowd-sourced reputation system, rating users based on their contributions to the conversation—something social media seriously lacks. Attempts like Twitter’s ratings before Musk didn’t pan out well, and it’s clear we need better solutions.
What do you think about a blockchain-based rating system where users are ranked based on engagement quality? It seems doable since blockchain can create strong, unique digital identities. But here’s the catch: trolls could band together to drag down ratings, like we see on platforms such as Booking or JustEat. Could we solve this by asking users to stake some value for their ratings? I’m not sure about the best approach, so I’d love to hear your thoughts! Thanks a bunch!
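To make the staking idea concrete, here’s a minimal in-memory sketch (plain Python, no blockchain; all names and numbers are invented for illustration) of stake-weighted ratings. The point is that a pile of zero- or low-stake troll votes moves a score far less than a few well-staked ones:

```python
from dataclasses import dataclass, field

@dataclass
class Rating:
    rater: str
    score: int      # +1 or -1
    stake: float    # value the rater locks behind this rating

@dataclass
class UserProfile:
    ratings: list[Rating] = field(default_factory=list)

    def reputation(self) -> float:
        """Stake-weighted average: low-stake brigading barely moves the needle."""
        total_stake = sum(r.stake for r in self.ratings)
        if total_stake == 0:
            return 0.0
        return sum(r.score * r.stake for r in self.ratings) / total_stake

profile = UserProfile()
profile.ratings.append(Rating("alice", +1, stake=10.0))   # one well-staked upvote
profile.ratings.append(Rating("troll1", -1, stake=0.5))   # coordinated low-stake
profile.ratings.append(Rating("troll2", -1, stake=0.5))   # downvotes
print(round(profile.reputation(), 3))  # 0.818 — still strongly positive
```

A real system would presumably also need some way to penalize stake behind ratings later judged malicious; otherwise the stake is just a vanity number.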
Creating a blockchain-based reputation system could definitely be an interesting solution to improve the quality of discussions.
The idea of staking value for ratings could help incentivize good behavior and discourage trolling. However, the challenge, as you mentioned, would be preventing manipulation from groups of bad actors.
Maybe combining a staking mechanism, to ensure that active participants have “skin in the game”, with community moderation would be a potential solution.
Are there any proven mechanisms that could balance accountability while maintaining fairness?
It’s not only social media debates. This is common practice in parliaments and political shows around the world, where the same rules reign. It happens all the time in my country.
This made me think. Okay, there will be mechanisms in place that will make sure the conversation is civilized and always on point. The reputation system based on blockchain is definitely promising for improving the quality of public debate.
But how will it gauge and manage humor? Humor has always been a powerful tool of expression. Satire has been used since ancient times to critique society and get the message across, alongside irony and sarcasm.
Humor is also nuanced and easily misunderstood, especially in text, where tone and body language are missing. So how will Olas handle this? I assume it won’t be an entirely humorless platform, but jokes might be misinterpreted or flagged by algorithms.
The balance lies in allowing humor—satire, irony, and sarcasm—to coexist with productive conversations without it being flagged as inappropriate. Context and intent will play crucial roles, and the system will need to gauge these nuances effectively.
I believe the right balance between algorithms, community moderation, and guidelines could be a potential solution.
Yes Olas will absolutely help here. At the base layer at least, it’s not a social media protocol (however I expect social media protocols to be built on top for journalists and readers alike) but the protocol is entirely designed to reward high value information and punish low value or inaccurate information. So someone being obnoxious wouldn’t do well!
Yeah, I think this is an interesting point. However, I don’t think staking or “skin in the game” is, or should be, the solution to most existing web2 problems. I think a system similar to Reddit’s upvoting and downvoting could work well. The only difference would be that users would be able to ‘attest’ to a comment using something like the Ethereum Attestation Service (EAS). I think this would work well for several reasons:
First off, attestations create natural accountability without requiring users to lock up funds. When your actions are permanently tied to your on-chain identity, people tend to think twice before posting or attesting to low-quality content. It’s similar to how GitHub shows your contribution history or how academic citations build reputation over time - your track record becomes your credibility. The interesting thing about EAS is that it makes all of this cryptographically verifiable. Every attestation adds to your on-chain history, creating something like PGP’s web of trust but way more user-friendly. Gitcoin Passport is already showing how well this can work for identity verification.
Here’s how I envision this working in practice: When you upvote or downvote content, you’re actually creating a permanent attestation. Other users can see who’s vouching for what content, and you could even add context like “This was particularly informative because…” Think of it like Stack Overflow’s reputation system, but portable across different platforms and verifiable on-chain.
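As a rough illustration of the attest-to-vote idea, here’s a toy in-memory model (this is not the actual EAS SDK; the addresses, content IDs, and `uid` scheme are made up for the sketch). Each vote is a permanent record tied to an identity, with optional context, and anyone can query who vouched for what:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass(frozen=True)
class Attestation:
    attester: str   # stand-in for an on-chain address
    target: str     # content ID being voted on
    vote: int       # +1 upvote, -1 downvote
    note: str = ""  # optional context, e.g. "particularly informative because..."

    def uid(self) -> str:
        """Deterministic ID, loosely mimicking how an attestation gets a unique UID."""
        payload = json.dumps([self.attester, self.target, self.vote, self.note])
        return hashlib.sha256(payload.encode()).hexdigest()[:16]

ledger: list[Attestation] = []
ledger.append(Attestation("0xAlice", "comment-42", +1, "cites primary sources"))
ledger.append(Attestation("0xBob", "comment-42", -1))

# Anyone can audit who vouched for what — the accountability described above.
supporters = [a.attester for a in ledger if a.target == "comment-42" and a.vote > 0]
print(supporters)  # ['0xAlice']
```

In the real thing, the ledger would of course live on-chain and the attester would be a verified identity, but the queryable, permanent nature of each vote is the key property.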
This approach has some clear advantages over traditional staking systems. It doesn’t lock out users who can’t afford to stake tokens. Instead, it rewards knowledge and quality contributions.
Of course, there are some challenges we’d need to address. Privacy is a big one - though we could easily handle this by letting users maintain their anonymity using ZK proofs to verify their humanness without actually revealing their identity. We’d also need to prevent gaming of the system, which we could do through network analysis to spot coordinated attestation patterns and by weighing attestations based on historical quality or something similar.
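A sketch of what that network analysis might look like, using hypothetical data: accounts whose attestation targets overlap almost completely are candidates for coordinated voting. Jaccard similarity over each account’s set of targets is one simple way to flag this:

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap of two sets: 1.0 means identical targets, 0.0 means disjoint."""
    return len(a & b) / len(a | b)

# Content IDs each account has attested to (invented data)
votes = {
    "0xAlice": {"c1", "c7", "c9"},
    "0xBotA":  {"c2", "c3", "c4", "c5"},
    "0xBotB":  {"c2", "c3", "c4", "c5"},
}

# Flag pairs whose attestation targets overlap suspiciously
suspicious = [
    (u, v) for (u, v) in combinations(sorted(votes), 2)
    if jaccard(votes[u], votes[v]) > 0.9
]
print(suspicious)  # [('0xBotA', '0xBotB')]
```

This is only a starting heuristic; a production system would likely combine it with timing correlation and the historical-quality weighting mentioned above.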
The end goal is to promote valuable content and commentary through on-chain accountability rather than financial stakes. It’s more like how reputation works in academic or professional settings - your history of citations matters a lot.
What do you think about this approach? I’m particularly interested in potential edge cases or ways this system could be gamed in real world scenarios.
I missed this post at the time. It’s a good question!
I guess this is the risk we all run any time we tell a joke right? The person delivering has to be careful it lands right. I don’t see why it’d be any different on Olas. The writer would also be able to defend themselves in front of any panel as well and explain it’s humour.
This is maybe a noob question because I’m not a dev, but I’m wondering whether the down- and up-votes are going to be visible to everyone on Olas, or whether they’ll be “hidden” in the algorithm?
Reddit actually comes to mind first when thinking about that. Essentially, we are prone to judge a comment as negative if we immediately see a −56 score, and we may spend less time critically assessing it. Maybe the opinion is not untrue, just unwanted at that point in time.
We have a very different system from Reddit in that all the votes/bets are hidden until the market is settled, and then each article is given an overall score. That’ll be visible for sure.
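For what it’s worth, one standard way to keep votes hidden until settlement is a commit-reveal scheme. The sketch below is just an illustration of that general pattern (names and salts invented), not a claim about how Olas actually implements it:

```python
import hashlib

def commit(voter: str, vote: int, salt: str) -> str:
    """Publish only the hash while the market is open — the vote stays hidden."""
    return hashlib.sha256(f"{voter}|{vote}|{salt}".encode()).hexdigest()

# While the market is open, only commitments are visible
commitments = {
    "alice": commit("alice", +1, "s1"),
    "bob":   commit("bob", -1, "s2"),
    "carol": commit("carol", +1, "s3"),
}

# At settlement, voters reveal (vote, salt); reveals that don't match are discarded
reveals = {"alice": (+1, "s1"), "bob": (-1, "s2"), "carol": (+1, "s3")}
valid = [v for name, (v, salt) in reveals.items()
         if commit(name, v, salt) == commitments[name]]
overall_score = sum(valid)
print(overall_score)  # 1
```

The nice property is exactly what the post describes: no one can be anchored by a running score, because nothing is readable until everything is revealed and the overall score is computed.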