• AA5B@lemmy.world
    3 days ago

    If I need 10 downvotes to make you disappear then I only need 10 Smurf accounts.

    At the same time, 10 might be a large portion of some communities while miniscule in others.

    I suppose you could limit votes to those in the specific community, but then you’d have to track their activity to see if they’re real or just griefing, and track activity in relation to others to see if they’re independent or all grief together. And moderators would need tools not only to discover griefing but to manage it and to configure sensitivity.

    • GreenKnight23@lemmy.world
      2 days ago

      you’re right. the threshold is entirely dependent on the size of the community. it would probably be derived from some fraction of community subscribers and user interactions for the week/month.

      should a comment be overwhelmingly positive, that would offset the threshold further.
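      to make the idea concrete, here’s a minimal sketch of how a community-derived threshold could work. the function name and all constants are assumptions for illustration, not anything in Lemmy today:

```python
def downvote_threshold(subscribers: int, weekly_interactions: int, upvotes: int = 0) -> int:
    """Derive a removal threshold from community size and activity.

    Hypothetical formula: the base scales with subscribers and weekly
    interactions (never below a floor of 10), and an overwhelmingly
    positive comment offsets the threshold further, as described above.
    """
    base = max(10, subscribers // 100 + weekly_interactions // 50)
    offset = upvotes // 2  # positive reception raises the bar for removal
    return base + offset

# a small community needs the same floor as a tiny one:
#   downvote_threshold(0, 0)            -> 10
# a mid-size community scales up:
#   downvote_threshold(1000, 500)       -> 20
# upvotes push the threshold higher still:
#   downvote_threshold(100, 50, upvotes=40) -> 30
```

      the divisors here are arbitrary knobs; the point is only that the threshold is a function of community scale plus the comment’s own reception, so 10 smurf accounts can’t silence anyone in an active community.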

      in regards to griefing: if a comment or post is overwhelmingly upvoted and still hits the downvote threshold, that’s when mods step in to investigate and make a decision. if it’s found not to break rules, or to be beneficial to the community, all downvoters are issued a demerit. after so many demerits, those users are silenced in the community and go through typical “cool down” processes, or are permanently silenced for continued abuse.
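      the demerit escalation could be sketched like this. the class, limits, and status strings are all made-up placeholders; real values would be per-community mod settings:

```python
from collections import defaultdict

COOLDOWN_LIMIT = 3   # demerits before a temporary "cool down" silence (assumed)
PERMANENT_LIMIT = 6  # demerits before a permanent silence (assumed)

class DemeritTracker:
    """Track demerits issued to voters whose votes mods ruled abusive."""

    def __init__(self):
        self.demerits = defaultdict(int)

    def issue(self, user: str) -> str:
        """Issue one demerit and return the user's resulting status."""
        self.demerits[user] += 1
        if self.demerits[user] >= PERMANENT_LIMIT:
            return "permanently silenced"
        if self.demerits[user] >= COOLDOWN_LIMIT:
            return "cooling down"
        return "active"
```

      the same tracker would work for the flip-side (upvote-skewed brigading) since a demerit is just “your vote was ruled abusive”, regardless of direction.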

      the same could be done for the flip-side where comments are upvote skewed.

      in this way, the community content is curated by the community and nurtured by the mods.

      appeals could be implemented for users who have been silenced and fell through the cracks, and admins could take further action against mods that routinely abuse or game the system.

      I think it would also be beneficial to remove the concept of usernames from content. they would still exist for administrative purposes and to identify problem users, but I think communities would benefit from the “double blind” test. there’s been plenty of times I have been downvoted just because of a previous interaction. and likewise, I have upvoted because of a well known user or a previous interaction with that user.
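      the “double blind” display could be as simple as stripping the author from what voters see while keeping an opaque reference for admins. a rough sketch, with all names assumed:

```python
import hashlib

def anonymize(post: dict) -> dict:
    """Return a display view of a post with the username removed.

    An opaque, stable reference derived from the author name is kept
    so admins can still identify problem users, but ordinary voters
    never see who wrote the content.
    """
    ref = hashlib.sha256(post["author"].encode()).hexdigest()[:12]
    return {"body": post["body"], "admin_ref": ref}
```

      since the reference is stable per author, admins can correlate a problem user’s posts without the community ever seeing the name.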

      it’s important to note this would change the psychological point of upvotes and downvotes. currently they’re used in more of an “I agree with this” or “I cannot accept this” way. under the rules I’ve brought up, users would need to understand they have just as much to risk for upvoting or downvoting content. so when a user casts their vote, they truly believe it’s in the interests of the community at large and they want that kind of content within the community. to downvote means they think the content doesn’t meet the criteria for the community. should users continue to arbitrarily upvote or downvote based on their personal preferences instead of community-based objectivity, they might find themselves silenced from the community.

      it’s based on the principles of “what is good for society is good for me” and silences anyone in the community that doesn’t meet the standards of that community.

      for example, a community that is strictly for women wouldn’t need to block men. as soon as a man self-identified or shared ideas that don’t resonate with the community he would be silenced pretty quickly. some women might even be silenced, but they would undoubtedly have shared ideas that were rejected by the community at large. this mimics the self-regulation that society has used for thousands of years IMO.

      I think we need to stop looking at social networks as platforms for individuals and look at them as platforms for the community as a whole. that’s really the only way we can block toxicity and misinformation from our communities. undoubtedly it will create echo chambers.