The Fediverse is a great system for preventing bad actors from disrupting “real” human-to-human conversations, because the mods, developers, and admins are all working out of a desire to connect people (as opposed to “trust and safety” teams more concerned with user retention).
Right now it seems that the Fediverse’s main protection is that it just isn’t a juicy enough target for wide-scale spam and bad-faith agenda pushers.
But assuming the Fediverse does grow to a significant scale, what (current or future) mechanisms are/could be in place to fend off a flood of AI slop that is hard to distinguish from human? Even the most committed instance admins can only do so much.
For example, I have a feeling all “good” instances will eventually have to turn on registration applications and only federate with other instances that do the same. But it’s not crazy to imagine that GPT could soon outmaneuver most registration questions, which means applications will only slow the growth of the problem, not manage it long-term.
Any thoughts on this topic?
I don’t think there is any way to have a genuine “open forum” amongst complete strangers. There have always been human troll farms pushing narratives using sock puppet accounts; AI is just enabling them to reach new scales.
I’m actually for echo chambers when it comes to social media, but ones in which you only follow people you know or trust and ignore complete strangers, and where you make sure to get news and critical information from OUTSIDE social media, again from institutions you trust.
Yes, strong moderation by members of the community is sufficient to recognize and remove bad (human) actors. The question is one of volume: overwhelming those human mods. GPT can create hundreds of bad-faith accounts.
Hi there! Admin of Tucson.social here.
I think the only way the fediverse can honestly handle this is through local/regional nodes, not interest-based global nodes.
Ideally this would manifest as some sort of non-profit entity that would work with municipalities to create community owned spaces that have paid moderation.
So then comes the problem of folks not agreeing with a local node’s moderation staff - but that’s also WHY it should be local. It’s much easier to petition and organize against someone who exists in your town than some guy across the globe who happens to own a large fediverse node.
This model just doesn’t work (IMO) if nodes can’t be accountable to a local community. If you don’t like how Mastodon, or lemmy.world are moderated you have zero recourse. For Tucson.social - citizens of Tucson can appeal to me directly, and because they are my fellow citizens I take them FAR more seriously.
Only then will people be trusting enough to allow for the key element in protecting against AI slop: human indemnification systems. Right now, if you asked the community of lemmy.world to provide proof they are human, you’d wind up with an exodus. There’s just no trust for something like that, and it would be hard to acquire enough.
With a local node, that conversation is still difficult, but we can do things that just don’t scale with global nodes. Things like validating a person by meeting them to mark them as “indemnified” on a platform, or utilizing local political parties to validate if a given person is “real” or not using voter rolls.
This is a bit rambly, but I’ll conclude that this is a problem at the intersection of trust and scale, and I believe local nodes are the only real solution that can handle both.
lemmy.world are moderated you have zero recourse
“Power tripping mods” definitionally cannot exist on the fediverse, where anyone can create an instance or community. Even on Reddit, 99% of the time someone said a mod was “power tripping,” it was just a right-winger upset that the mod removed their disruptive nonsense.
The purpose of communities like the one you linked to is to shame mods into employing a passive, generic bare-minimum style of moderation, when we should be encouraging the opposite if we want diversity in the fediverse.
Here are three examples from that community where other people can discuss the moderation and see whether it’s power tripping or not.
right winger upset
Right-wingers aren’t that numerous on Lemmy, but when this happens it gets quickly debunked by the people commenting.
anyone can create an instance or community
Enjoy your empty community that nobody cares about, because people post in the one where most of the people already are, the one where the power-tripping mod is operating.
Mods and admins on the Fediverse are not democratically elected, they have complete control. Accusing one of “power tripping”, in their own community, on the instance they presumably pay for, is not a rational accusation, since they definitionally cannot exist in a state of less power. What that community is trying to do is use the threat of public shaming to influence behavior. It’s how you get weak moderation and generic communities where bad actors can thrive. A community dedicated to “Stopping bad mods” sounds good on the surface, but it’s an argument made in bad faith.
The first sentence you wrote is either misleading or incorrect, and I think it’s important to reexamine. Each administrator has control over the instance they run, but they don’t have control over the Fediverse itself, and because it’s so easy for people to move to other instances, they have little control over other users.
What’s the incentive to operate an LLM on the fediverse that is truly helpful and not just trying to secretly sell something/push an agenda?
Well, I’m not saying the scenario is a perfect match, just that it reminded me of it :-).
Though to answer your question, if Reddit were all AI slop whereas we were not, then they would be foolish to not exploit (for moar profitz) the source of legitimately true info that could be useful to answer people’s questions, e.g. on topics such as whether and how to use Arch Linux btw. :-P
Maybe it was silently assumed, but nobody so far has mentioned the endless stream of scrapers that go through my probably juicy but private instance. I’m banning a new bot every week, and by now they have switched to distributed requests. I see over 400 requests per hour from a handful of IPs for the same content, with rotating user agents; I only know this because I wrote automated detection mechanisms. I might just make my instance login-only.
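For what it’s worth, the rotating-user-agent pattern described above can be caught by keying on the requested path rather than the client. Here is a minimal sketch, assuming a hypothetical pre-parsed access log; the tuple format, paths, and thresholds are all made up for illustration, not taken from any real instance:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical parsed log entries: (timestamp, ip, user_agent, path).
LOG = [
    (datetime(2024, 1, 1, 12, 0, 0), "203.0.113.5", "UA-1", "/api/v3/post/list"),
    (datetime(2024, 1, 1, 12, 0, 30), "203.0.113.9", "UA-2", "/api/v3/post/list"),
    (datetime(2024, 1, 1, 12, 1, 0), "203.0.113.5", "UA-3", "/api/v3/post/list"),
    (datetime(2024, 1, 1, 12, 5, 0), "198.51.100.7", "UA-4", "/u/alice"),
]

def flag_scraper_paths(log, window=timedelta(hours=1), min_hits=3, min_agents=3):
    """Flag paths that are hit many times within `window` under several
    distinct user agents -- the rotating-UA pattern, regardless of IP."""
    by_path = defaultdict(list)
    for ts, ip, ua, path in log:
        by_path[path].append((ts, ip, ua))
    flagged = []
    for path, hits in by_path.items():
        hits.sort()
        # Only look at hits inside the window ending at the latest hit.
        recent = [h for h in hits if h[0] >= hits[-1][0] - window]
        agents = {ua for _, _, ua in recent}
        if len(recent) >= min_hits and len(agents) >= min_agents:
            flagged.append(path)
    return flagged

print(flag_scraper_paths(LOG))  # only the repeatedly scraped API path
```

A distributed scraper can of course rotate paths too, but in practice the content it wants (post listings, user profiles) is a small, fixed set of endpoints, which is what makes this angle workable.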
Instead of trying to detect and block it, just disincentivize it.
Most AI spam on social media tries to exploit various systems intended to predict “good” content on the basis of a user’s past activity, by tracking reputation/karma/etc. Bots build up karma by posting a massive amount of innocuous (but usually insipid) content, then leverage that karma to increase the visibility of malicious content. Both halves of this process result in worse content than if the karma system didn’t exist in the first place.
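As a toy illustration of why both halves of that process degrade content, consider a hypothetical karma-weighted visibility score; the formula and all the numbers are invented purely for the example, not how any real platform ranks posts:

```python
def visibility(post_score, author_karma, karma_weight=0.5):
    """Toy ranking: a post's visibility is its own score plus a bonus
    from the author's accumulated karma (assumed formula)."""
    return post_score + karma_weight * author_karma

# A bot farms karma with 200 insipid posts at +1 each...
bot_karma = 200 * 1

# ...then posts malicious content that scores poorly on its own merits,
spam = visibility(post_score=-5, author_karma=bot_karma)

# while a new human account posts something genuinely good with no karma.
human = visibility(post_score=20, author_karma=0)

assert spam > human  # the karma bonus promotes the worse content
```

Without the karma term, the malicious post sinks and the 200 filler posts were never worth making in the first place, which is the disincentive argument above.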
I’ve had similar thoughts. I think the answer ultimately lies in active mods who really get to know a community and its users, and who can identify when users are pushing a narrative even if they can’t confirm whether those users are bots.
Also, as @dessalines@lemmy.ml pointed out, user registrations. On startrek.website we have a question that is easy for a Star Trek fan to answer but not easy for a bot (although, getting back to your concern, ChatGPT would probably have no problem).
There are two groups here, bots, and bad actors. We’ve found that these measures have mostly stopped them both.
Bots
- Registration applications. It’s been extremely easy to differentiate bots from real people by asking a series of simple questions and only letting the real people in.
- Reports: so that mods / admins can see them quickly.
- Blocking open-signup servers that don’t require applications, which usually serve as launch points for spam attacks against the whole fediverse.
Some bots still get through occasionally, but not many compared to before. And some servers have more “lax” application questions, so they let more through.
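On the open-signup point: most fediverse software advertises whether signups are open via the standard NodeInfo discovery document, so an admin could in principle build a list of blocklist candidates from it. A rough sketch, assuming the target instances expose NodeInfo 2.x with its `openRegistrations` field; the heuristic of equating open signups with spam risk is the judgment call here, not part of the protocol:

```python
import json
import urllib.request

def fetch_nodeinfo(host, timeout=10):
    """Follow the well-known NodeInfo discovery document to the actual
    NodeInfo payload (schema 2.0/2.1 both carry openRegistrations)."""
    url = f"https://{host}/.well-known/nodeinfo"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        links = json.load(resp)["links"]
    with urllib.request.urlopen(links[0]["href"], timeout=timeout) as resp:
        return json.load(resp)

def should_block(nodeinfo):
    """Flag instances that allow open signups, i.e. no application or
    approval gate in front of new accounts. Missing field => don't flag."""
    return bool(nodeinfo.get("openRegistrations", False))

# Usage (network access required):
#   for host in candidate_hosts:
#       if should_block(fetch_nodeinfo(host)):
#           print(f"consider blocking {host}")
```

This only produces candidates for review; the actual block would still go through the instance’s normal admin settings.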
Bad actors
- Registration applications. Most trolls are of a temperament where they refuse to do the work of answering questions earnestly. They can’t help but give obviously trolling answers, if they even bother to answer at all.
- Reports: same as above.
- Ban + remove. Mods and admins can ban a person and remove all their content at the click of a button. So even if a troll does the work of getting past the front door, all their work is nullified by an action that takes less than five seconds. They waste far more of their own time than the admins’, and accomplish nothing lasting.
Great response, thank you. My concern is focused more on future measures: what happens if/when registration applications are answerable by a bot? It’s not hard to imagine. What happens when a GPT-powered bot leaves totally “normal,” unique comments 90% of the time, but occasionally recommends a product or pushes a political agenda?
All I can say is that in practice, bots can’t answer most simple questions in a believable way, especially questions that require actual personal opinions, or that require any context outside of what they were asked.
The most we’ve seen is that people created seemingly lemmy-specific signup bots, but they always answer questions in the same transparent way.
The blogspam bots that have gotten through (not for many months now here on lemmy.ml) are all transparent, because they all post links to the same domain. All it takes is one report, and we can remove their entire history.
That last one had better require at least two or three people to sign off on it. One shitty mod could easily become a bigger problem than a troll with that kind of power in hand.
Why are you putting up with a “shitty” mod? Are you trying to force your speech in a community who has asked you not to?
This is the kind of response I’d expect from a shitty mod :)
Blocked
deleted by creator
I fully agree. What worries me is if bad actors create bots that are able to overwhelm the human moderators.
I think that being human scale is largely the appeal of the Fediverse. Each instance isn’t meant to grow to the size of a centralized platform, but to be a relatively small community of people with some shared interests. I look at it similarly to the way IRC channels worked back in the day. You tend to have a group of people whom you interact with frequently and that’s how you know they’re human. If some bot enters the community then it becomes obvious very quickly.
As you said, a platform with 44k monthly active users is probably not worth the time investment from spammers and agenda pushers.
Whether we’ll ever make it to that point, we’ll see. It seems we’re still quite far off.
You say that, but they’re already here. I see completely automated commercial spam posts every few days. And we all know there’s already political agenda-pushers. Hell, Lemmy was created by some.