Purpose of thread is to outline why the site is so fucking slow. Actually, it's to outline why I don't know, and to demonstrate that I am absolutely NOT SATISFIED with the state the forum is in and this is NOT due to a lack of effort. I have been fucking with this miserable pile of shit to keep it running for FUCKING YEARS.
The Kiwi Farms is mostly one server but also about 10 servers. Our topology has three layers:
1. Entry points. These are the IPs you connect to. They are simple TCP reverse proxies. I call them "zero-trust proxies". They know nothing. If they are compromised, even by the host, nothing happens.
2. KiwiFlare. This is a rental server in a datacenter I trust. Its only job is to run KiwiFlare and handle abuse from Tor and the Entry Points.
3. Kiwi Farms's general application server.
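The entry-point layer is simple enough to fit in a page of code. A minimal sketch of a "zero-trust" TCP proxy in Python (the backend address is a placeholder, not the real next hop; real deployments would add timeouts and connection limits):

```python
import asyncio

# Hypothetical backend address -- in the real topology this would be the
# next layer down (KiwiFlare), not a port on localhost.
BACKEND = ("127.0.0.1", 9471)

async def pump(reader, writer):
    # Shovel bytes one way until EOF. The proxy never parses or decrypts
    # anything, which is what makes it "zero-trust": a compromised entry
    # point sees only what a wiretap on the line would see.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_r, client_w):
    backend_r, backend_w = await asyncio.open_connection(*BACKEND)
    # Copy in both directions concurrently until both sides hang up.
    await asyncio.gather(pump(client_r, backend_w), pump(backend_r, client_w))

async def serve(port=9470):
    server = await asyncio.start_server(handle, "127.0.0.1", port)
    async with server:
        await server.serve_forever()
```

Because TLS terminates further down the chain, the host renting you the entry point learns source IPs and traffic volume and nothing else.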
The Kiwi Farms is fundamentally and historically cursed. Something like 2 years ago, during a routine RAM installation, our old server straight up fucking died. The motherboard shit itself. The installation was done by a professional.
I bought another server. I can't remember how much I spent but it's a lot. It has 256 cores, 512GB of RAM, an NVMe boot drive, and two flash arrays: Array A is a 21TB SSD ZFS pool named 'SNEED'; Array B is a 7TB NVMe RAID-6 named 'CHUCK'.
Kiwi Farms runs XenForo on a traditional LEMP stack: PHP 8.4-FPM handles requests from OpenResty (Nginx), with MariaDB as the database, on Debian Linux. We use S3 for storage. Right now we are using zero storage on the local drives because we are relying entirely on a remote S3 to do the disk work.
I've hosted the Kiwi Farms for 13 years and at this point I have completely exhausted my own personal understanding of computers, the understanding of everyone around me, and also AI. I do not know why the site is slow. This machine should not experience multi-second latency doing anything.
The full scope of the problem looks like this:
- Typing in SSH takes multiple seconds.
- Connecting via SSH takes 10+ seconds.
- Requests to the Kiwi Farms take multiple seconds even for rudimentary requests.
- Everything is FUCKING SLOW.
- This problem is INTERMITTENT. Sometimes the site loads instantly.
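The intermittency is the part worth squeezing: a one-off measurement will miss the stalls, so whatever gets probed has to be sampled repeatedly and judged by its tail, not its average. A minimal sketch, with an arbitrary probe target and sample count:

```python
import os
import time

def sample_latency(op, n=1000):
    """Run op() n times; return (median, p99, max) latency in milliseconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        op()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return samples[n // 2], samples[int(n * 0.99)], samples[-1]

def fsynced_write(path="/tmp/latency_probe"):
    # Example probe: a 4 KiB fsync'd write. Point `path` at whichever
    # filesystem (SNEED, CHUCK, the boot drive) is under suspicion.
    with open(path, "wb") as f:
        f.write(b"\0" * 4096)
        f.flush()
        os.fsync(f.fileno())
```

On a healthy NVMe array even the p99 of `sample_latency(fsynced_write)` should sit in the low milliseconds; multi-second outliers on an otherwise idle box would localize the stall to storage rather than the network path, and a clean result rules storage out.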
I was 100% convinced that SNEED was the problem, and that using a ZFS raid to host MariaDB would fix it. However, NOTHING is on SNEED besides the idle files from S3. Nothing at all. It is still slow.
I have changed every single configuration line in nginx, mysql, and php. I have reduced reservations, I have increased reservations. I have completely reconfigured everything. Nothing has helped. Sometimes a change does help and it doesn't make sense why.
- Total disk read is less than 300 MB/s. Total disk write is under 100 MB/s. Our disks are rated for gigabytes per second.
- Memory usage is about 50%, and a lot of that is just idle reservation for MySQL and PHP-FPM.
- Network bandwidth consumption almost never goes over 500Mbps, which is largely due to how fucking slow the site is. I imagine if the site ran as fast as possible we'd see multiple Gbps of traffic.
- CPUs are basically completely idle.
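Idle CPUs next to multi-second latency means the time is going to waiting, not computing, and Linux accounts for one flavor of that wait explicitly. A sketch that derives the iowait share from two snapshots of the first line of `/proc/stat` (field order per proc(5)):

```python
def cpu_fields(stat_line):
    # /proc/stat "cpu" line: user nice system idle iowait irq softirq steal ...
    return [int(x) for x in stat_line.split()[1:]]

def iowait_share(before, after):
    """Fraction of elapsed CPU time spent in iowait between two snapshots
    of the aggregate 'cpu' line from /proc/stat."""
    delta = [b - a for a, b in zip(cpu_fields(before), cpu_fields(after))]
    total = sum(delta)
    return delta[4] / total if total else 0.0  # index 4 is iowait
```

Feed it `open("/proc/stat").readline()` taken a second apart during a stall: a big share points at storage; a near-zero share with the same symptoms points somewhere else entirely (scheduler, NUMA, interrupts). Newer kernels expose the same idea more directly in `/proc/pressure/io`.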
It's made me schizophrenic. At some point in the last two weeks I started switching every single service I could think of from TCP/IP onto Unix sockets because I thought maybe it was TCP port usage. Before that, I had even moved every service onto its own loopback IP so that they weren't all sharing sockets on just 127.0.0.1.
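Whether the TCP-to-Unix-socket migration buys anything is measurable rather than a matter of faith. A self-contained sketch comparing mean round-trip time over loopback TCP against an AF_UNIX socket (the port and path here are throwaway placeholders, not the real service endpoints):

```python
import os
import socket
import tempfile
import threading
import time

def echo_server(sock):
    # Accept one connection and echo every byte back.
    conn, _ = sock.accept()
    with conn:
        while data := conn.recv(4096):
            conn.sendall(data)

def round_trips(family, addr, n=2000):
    """Mean microseconds per 1-byte request/response over the given socket family."""
    srv = socket.socket(family, socket.SOCK_STREAM)
    srv.bind(addr)
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    cli = socket.socket(family, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    t0 = time.perf_counter()
    for _ in range(n):
        cli.sendall(b"x")
        cli.recv(1)
    elapsed = time.perf_counter() - t0
    cli.close()
    srv.close()
    return elapsed / n * 1e6

tcp_us = round_trips(socket.AF_INET, ("127.0.0.1", 0))
unix_us = round_trips(socket.AF_UNIX, os.path.join(tempfile.mkdtemp(), "bench.sock"))
print(f"TCP loopback: {tcp_us:.1f} us/rt   Unix socket: {unix_us:.1f} us/rt")
```

Both numbers should come back in the tens of microseconds on a healthy machine; if either one intermittently spikes into milliseconds with nothing else running, the problem lives below the services being reconfigured.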
If the transfer to SeaweedFS doesn't fix this, I am going to be forced to assume the issue is motherboard, PCI bus, or CPU related and just start pulling shit out of it and putting it on a new mainboard. That would be thousands of dollars of waste.
I *need* this to work because of a serious, fundamental flaw in how XenForo works: XenForo's attachment system is FUCKING GARBAGE written by ANGLOID COCKSUCKERS. The core attachment system transfers files THROUGH PHP. It does not CACHE, it does not use X-Sendfile, it sends chunks of data THROUGH PHP-FPM WORKERS in a way that CANNOT be accelerated. It cannot be cached. It cannot be dealt with in any sane, reasonable way. It exists specifically to fucking spite me.
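For context, the usual escape hatch for "files through the app" is the internal-redirect pattern: the application does the permission check, then hands the actual byte-shoveling back to the web server via an X-Accel-Redirect header (nginx's equivalent of X-Sendfile). Whether XenForo's attachment code can be bent into this shape is exactly the open question above; this is just a sketch of the pattern itself, with hypothetical paths and a hypothetical permission check:

```python
def serve_attachment(request_user, attachment):
    """Authorize in the app, but let nginx stream the bytes.

    Returns (status, headers, body). Instead of reading the file and
    pushing it through a worker, we return an empty body plus an
    X-Accel-Redirect header naming an 'internal' nginx location that
    maps to the real storage path. The worker is freed immediately and
    nginx serves the file with sendfile().
    """
    if not attachment.visible_to(request_user):
        return 403, {}, b""
    headers = {
        # Requires nginx config like:
        #   location /protected/ { internal; alias /data/attachments/; }
        "X-Accel-Redirect": f"/protected/{attachment.stored_name}",
        "Content-Type": "application/octet-stream",
        "Content-Disposition": f'attachment; filename="{attachment.filename}"',
    }
    return 200, headers, b""
```

The whole complaint in the paragraph above is that XenForo's core attachment path does not do this, so every download pins a PHP-FPM worker for the duration of the transfer.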