So after we’ve extended the virtual cloud server twice, we’re at the max for the current configuration. And with this crazy growth (almost 12k users!!), even now the server is steadily approaching capacity.
Therefore I decided to order a dedicated server, the same type as used for mastodon.world.
So the bad news… we will need some downtime. Hopefully not too much. I will prepare the new server, copy (rsync) everything over, stop Lemmy, do a last rsync, and change the DNS. If all goes well it should take maybe 10 minutes of downtime, 30 at most. (With mastodon.world it took 20 minutes, mainly because of a typo :-) )
For those who would like to donate to cover server costs, you can do so at our OpenCollective or Patreon.
Thanks!
Update: The server was migrated, with around 4 minutes of downtime. For those who asked: it now runs on a dedicated server with an AMD EPYC 7502P 32-core “Rome” CPU and 128GB RAM. Should be enough for now.
I will be tuning the database a bit, which may cause a few extra seconds of downtime, but just refresh and it’s back. After that I’ll investigate the cause of the slow posting further. Thanks @veroxii@lemmy.world for assisting with that.
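Database tuning on new hardware usually means adjusting `postgresql.conf` for the extra RAM and cores. A rough sketch for a 128GB / 32-core machine, using common PostgreSQL sizing rules of thumb; these values are illustrative assumptions, not the settings actually used on lemmy.world:

```ini
# postgresql.conf fragment -- illustrative sizing for 128GB RAM / 32 cores,
# NOT the actual lemmy.world configuration.
shared_buffers = 32GB             # ~25% of RAM is a common starting point
effective_cache_size = 96GB       # ~75% of RAM; a planner hint, allocates nothing
work_mem = 64MB                   # per sort/hash operation, so keep it modest
maintenance_work_mem = 2GB        # speeds up VACUUM and index builds
max_worker_processes = 32         # match the core count
max_parallel_workers = 32
max_parallel_workers_per_gather = 8
checkpoint_completion_target = 0.9
```

Most of these take effect on a reload, but `shared_buffers` and the worker-process settings require a restart, which would explain the "extra seconds of downtime" mentioned above.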
I really appreciate what you’re doing, but I’m worried about how this instance will continue scaling. What happens when it gets to 1 million users? 10 million? We can only scale vertically so far, and horizontal scaling seems to be limited to “just join a new instance 4head”, and that just… doesn’t make for a good experience.
This server can easily host 1M users.
Most of the stress on the server comes from all the signups and from newcomers posting a lot. After a while that tapers off. On Mastodon, in the first days of November I had over 100k active users. Now I have 165k accounts but only around 32k active.
And I’m sure the Lemmy devs will also improve the performance of the site. They never really had to; until a few days ago, the total number of Lemmy users across all instances was about 7k.
Ya what are the limitations with scaling horizontally? Scaling up is a stop gap.
Ruud, thank you for your investment here though.