
From @aral : ar.al/2022/11/09/is-the-fedive on fairly scary scaling problems with Mastodon. Protocol design people, please read & consider. I have some ideas too, but later.

@aral @timbray there's almost certainly a protocol-level solution but absent that, doing fanout outside of ruby threads probably would go a long way.

@timbray @aral I mean I was doing fanout for a rails app in a node worker 10 years ago. We have the technology.
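
Whatever the worker runtime, the shape being suggested is roughly "enqueue one small delivery job per destination and let a pool of workers drain the queue off the request path." A hedged Ruby/Sidekiq-flavoured sketch of that shape (class name, payload, and delivery details are invented for illustration, not taken from Mastodon's codebase):

```ruby
require "sidekiq"
require "net/http"
require "uri"

# Hypothetical job: deliver one serialized activity to one remote inbox,
# outside the web request cycle.
class DeliverActivityJob
  include Sidekiq::Job

  def perform(inbox_url, activity_json)
    Net::HTTP.post(
      URI(inbox_url),
      activity_json,
      "Content-Type" => "application/activity+json"
    )
  end
end

# Fan out by enqueueing one lightweight job per destination inbox;
# inbox_urls and activity_json would be supplied by the posting code.
inbox_urls.each { |inbox| DeliverActivityJob.perform_async(inbox, activity_json) }
```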

@zrail @aral
Feels like there's maybe an event-bus architecture lurking in here? You could build a Kafka fleet that wouldn't break a sweat handling these kinds of numbers.

@timbray @aral yeah something like that. Asking small admins to run Kafka might be a bit much but there definitely are generic event bus fanout type things.
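
For concreteness, a hedged sketch of the event-bus idea, assuming the ruby-kafka gem; the topic name, partition key, and broker address are all illustrative:

```ruby
require "kafka" # assumes the ruby-kafka gem is available

# Publish each outgoing activity once to a shared bus, keyed by destination
# domain; dedicated consumers elsewhere handle the actual HTTP delivery.
kafka    = Kafka.new(["localhost:9092"], client_id: "fanout-sketch")
producer = kafka.producer

outgoing_activities.each do |activity| # supplied by the caller; shape is illustrative
  producer.produce(
    activity.fetch(:json),
    topic: "outbound-activities",
    partition_key: activity.fetch(:destination_domain) # keeps per-domain ordering
  )
end

producer.deliver_messages
```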

@timbray @aral

I understand that there are a lot of people just back in the job market who have a lot of experience in this space. Might be a great time to establish a brains trust.

@timbray @aral this challenge was always going to manifest. In the past I have donated to Wikipedia because I think they do good work, and I have an idea of what it costs to host things at scale. What is to stop the fediverse from being funded in the same way?

@timbray @aral I think it would be wise to contact the folks who designed ActivityPub for their thoughts on this.

@evan @wwahammy @aral
The *right* thing to do would be for me to quickly become an ActivityPub expert. In the meantime I think I smell a need for a bulk-exchange protocol. I mean, if S.Fry is on Instance A and Beyoncé is on Instance B, with millions of followers on each side, clearly there's a case for bulk interchange between A & B. Is this contemplated in the protocol currently?
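
To make the question concrete, a purely hypothetical sketch of what a batch hand-off between two large instances could look like; the /inbox/batch path, payload shape, and helper are invented for illustration, not part of ActivityPub as specified:

```ruby
require "net/http"
require "json"
require "uri"

# Hypothetical: one request carrying many activities from Instance A to
# Instance B, instead of one request per activity.
def deliver_batch(destination_domain, activities)
  Net::HTTP.post(
    URI("https://#{destination_domain}/inbox/batch"),
    JSON.generate("activities" => activities),
    "Content-Type" => "application/activity+json"
  )
end

# pending_activities_for is an assumed helper that collects everything queued
# for one destination since the last exchange.
deliver_batch("instance-b.example", pending_activities_for("instance-b.example"))
```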

@timbray @evan @wwahammy @aral Unfortunately, the problem with Kafka or any bulk exchange is that it just reinforces the incentive to run fewer, larger instances. If your instance has to post to 1000s or 10s of 1000s of other Kafka endpoints, you're not going to see any of the perf features Kafka boasts.

@timbray I know it's not quite what you're saying, but it reads a bit like poor performance due to thousands of poorly-batched RPCs. :D

@timbray @aral Hmm... this doesn't quite make sense to me, other than the bits about trying to fundamentally redesign the system. Aral is paying 50 euros a month and possibly more because of how many responses his birthday post got? A post that by definition happens once a year? Doesn't it kinda make more sense to have one big server, which can more easily absorb the load when one user's post blows up? One person getting a hundred replies should be nothing to a server with thousands of people posting and replying all day. The greater the number of users, the more consistent I'd expect the load to be and the more efficiently the server resources can be used.

Plus, according to this article, the number of jobs created is based on the number of servers your followers are distributed among, so more smaller servers means more Sidekiq jobs, which means more load on ALL of the servers, right?

(and I say this as someone on a $6/mo personal instance that is apparently gonna catch fire if I ever get too popular lol)
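
A quick worked example of the job-count point in the post above, assuming (as the article describes) that delivery jobs are created per destination server rather than per follower; the handles are made up:

```ruby
# Four followers spread across three servers means three delivery jobs;
# the same four followers on one big server would mean one.
followers = [
  "a@mastodon.social",
  "b@mastodon.social",
  "c@hachyderm.io",
  "d@small.example"
]

destination_domains = followers.map { |handle| handle.split("@").last }.uniq
puts destination_domains.size # => 3
```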

@admin @timbray @aral Economies of scale are a real thing. I think it's going to be fascinating watching how the economics play out against the federated ideal.

I still need to read up on the architecture, but I wonder if there's room for heavyweight fan-out servers that aren't where users live, but handle distribution?
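
A hedged sketch of what such a distribution-only server could look like: the home instance sends one copy, and the relay forwards it to every subscribed instance. All names and endpoints here are invented for illustration.

```ruby
require "net/http"
require "uri"

# Hypothetical relay: holds a list of subscriber inboxes and forwards each
# incoming activity to all of them, so the home instance sends only one copy.
class FanoutRelay
  def initialize(subscriber_inboxes)
    @subscriber_inboxes = subscriber_inboxes
  end

  def relay(activity_json)
    @subscriber_inboxes.each do |inbox|
      Net::HTTP.post(
        URI(inbox),
        activity_json,
        "Content-Type" => "application/activity+json"
      )
    end
  end
end

relay = FanoutRelay.new([
  "https://instance-b.example/inbox",
  "https://instance-c.example/inbox"
])
relay.relay(activity_json) # activity_json is supplied by the home instance
```

Existing fediverse relay projects work roughly along these lines, though the details differ.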

@timbray @aral
Amusingly, this came up when I was interviewing somewhere last year. They had a different algorithm for doing notifications on accounts with large numbers of followers for precisely this reason.
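
Without knowing that company's specifics, the general shape of such an approach is usually a threshold switch between push-style and pull-style delivery; a hedged sketch with made-up numbers and names:

```ruby
# Illustrative only: the threshold and strategy names are invented.
LARGE_ACCOUNT_THRESHOLD = 100_000

def fanout_strategy(follower_count)
  if follower_count > LARGE_ACCOUNT_THRESHOLD
    :pull_on_read   # followers' servers fetch posts when their users ask
  else
    :push_on_write  # deliver to every follower's server as soon as it's posted
  end
end

puts fanout_strategy(250_000) # => pull_on_read
puts fanout_strategy(300)     # => push_on_write
```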

@timbray @aral a real real quick perusal of the ActivityPub docs leads me to the “sharedInbox” concept that may hold a clue as to how to deal with this or related issues.
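
For reference, sharedInbox lets a sender collapse many per-follower deliveries to the same server into one; a rough sketch of the grouping, with illustrative follower records:

```ruby
# Each remote follower advertises a personal inbox and (optionally) their
# server's sharedInbox. Deliver once per distinct target, not once per follower.
followers = [
  { inbox: "https://a.example/users/1/inbox", shared_inbox: "https://a.example/inbox" },
  { inbox: "https://a.example/users/2/inbox", shared_inbox: "https://a.example/inbox" },
  { inbox: "https://b.example/users/9/inbox", shared_inbox: "https://b.example/inbox" }
]

delivery_targets = followers.map { |f| f[:shared_inbox] || f[:inbox] }.uniq
puts delivery_targets.size # => 2 deliveries instead of 3
```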

@timbray @aral One difference between large servers and small servers is that, from what I have heard, mastodon.social has paid moderators. This is something that smaller instances won't have. So you have part-time/hobbyist admins trying to police instances, and by having more servers you increase the possibility of drama between federating and non-federating servers.

@timbray @aral interesting read. For me, the draw to a particular instance is the mood of the "local" timeline; I wouldn't want to self-host because I want *neighbors*

Social pressure drives me to a locally central instance and at the same time keeps me from the big ones. The protocol only enters into this peripherally, in the sense that I can't have a local federation without a local instance.

@aral @timbray First, if the problem/cost of scaling high follower numbers rests with the one instance where the account is hosted, this seems a good thing.

There is most likely plenty of room to optimize both the implementation and the protocol. It is interesting, it is rewarding, I am not worried.

@aral @timbray

Consider me your canary in the coal mine…

«Chirp! Chirp! Chirp!»

This indicates that everything is fine.

This is my strongest reaction to the article.

fediverse meta 

So.. @aral you want everyone on the IndieWeb instead of in the fediverse? Perhaps you're right that there's a natural evolution from DataFarms to fediverse to IndieWeb, although some would argue the next stage of that evolution is to Gemini or ScuttleButt ;)

You're definitely right though that celebrities and others with massive follower counts ought to pay their share of the hosting bill, either by having a solo instance or being a major patron of the one they use.

@timbray

fediverse meta 

But could it be that the problem here is not ActivityPub as a protocol, but Mastodon's implementation? Maybe the folks from other fediverse projects like Pleroma and Friendica have been carping about Mastodon federation breaking things for good reason? Maybe the Mastodon crew are much better at UX than them, but much worse at protocol implementation?

