Also East Coast here, and strangely that 17-second delay is right on the money. The loading indicator will spin on the tab for exactly 17 seconds and then the page loads all at once.
Sounds pretty OCD to me.
Nintendo Console Codes
Switch (JeffConser): SW-3353-5433-5137 Wii U: Skeldare - 3DS: 1848-1663-9345
PM Me if you add me!
This is the worst it's been since it started happening. Normally things have been slow for me for an hour, from 8 PM to 9 PM like clockwork (East Coast), but today it's been going on for a solid hour and a half.
Seemed to start a bit after 5 PM PST today, and it's still going as of this post. It's been this way every day around the same time for the past few days, noticeably worse today than yesterday, although that could just be me not paying as close attention yesterday.
I counted about 19 seconds each time, give or take a second or two.
NNID: delphinidaes Official PA Forums FFXIV:ARR Free Company <GHOST> gitl.enjin.com Join us on Sargatanas!
Edit: huh. It only lasted like 3 minutes today. I'm assuming it was still the forums since it happened right at 8, but maybe it was actually something on my end?
It seems to happen as I delve deeper into the forums. I can load forums.penny-arcade.com alright. Then G&T takes a long time to load. Then the Animal Crossing thread gives me Service Unavailable.
Yeah I fixed the issue earlier today. I feel kind of dumb about it, but also a bit like "wtf sphinx you crazy fucks". So a bit of a mix.
I'm really sorry for the poor performance we've been delivering as a result of this debacle. It's even more annoying that this happened because we tried extremely hard to make it seamless and downtime-free by deploying the infrastructure in segments. Anyways, most sincere of apologies.
Basically, when you perform a search, you're not really hitting the actual database. Once you have a certain amount of data in your DB, you no longer want to be searching it directly. So we do periodic indexes of the data, and searches hit that index instead, which lives on its own server. Once it returns some results, we join in the actual data from the database with a vastly more efficient query, much like rendering a single page of a discussion. The actual indexing operation got a lot more intense when we added Advanced Search though, so we had to make some changes to how we manage those indexes and how often we rebuild them. All of the search servers also had to be rebuilt to provide a newer version of the indexer and search service, because we were leveraging more cutting-edge features. This is where the troubles began.
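To make that concrete, here's a rough Python sketch of the two-step flow (not our actual code). Sphinx's searchd speaks the MySQL wire protocol, so a plain MySQL client can query it; all the host, index, table, and column names below are invented for illustration.

import pymysql

# Step 1: ask the search daemon, not the database. Sphinx's searchd
# speaks the MySQL protocol ("SphinxQL"), by default on port 9306,
# and ignores credentials.
sphinx = pymysql.connect(host="search-server", port=9306, user="")
with sphinx.cursor() as cur:
    cur.execute("SELECT id FROM posts_index WHERE MATCH(%s) LIMIT 25",
                ("animal crossing",))
    ids = [row[0] for row in cur.fetchall()]

# Step 2: join the matching IDs back against the real database with a
# cheap primary-key lookup, the same kind of query as rendering a
# single page of a discussion.
db = pymysql.connect(host="db-server", user="forum", password="secret",
                     database="forum")
with db.cursor() as cur:
    if ids:
        marks = ",".join(["%s"] * len(ids))
        cur.execute("SELECT id, discussion, body FROM posts "
                    "WHERE id IN (%s)" % marks, ids)
        rows = cur.fetchall()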
We pretty much never want to be rebuilding from scratch anymore (absolutely murders the DB), so we start with a full index, then do progressive partial "delta" indexes of anything that's happened since that full index was run. Much less hurtey on the DB. Problem is that this delta index starts to become gradually slower and more expensive itself as the time-since-full-index begins to lengthen. So we periodically "merge" the delta onto the end of the full index and reset the time-we-last-full-indexed to now. This was working awesome, and gets us the benefit of a full index, minus the pain of actually doing it.
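With Sphinx's stock indexer tool, that whole cycle boils down to two commands. A sketch (the main/delta index names are the conventional ones from the Sphinx docs, not necessarily what we actually call ours):

import subprocess

# Cheap and frequent: re-run only the delta index, which covers rows
# changed since the last full build.
subprocess.run(["indexer", "delta", "--rotate"], check=True)

# Occasional: fold the delta onto the end of the main index and reset
# the last-full-index watermark, no rebuild from scratch required.
subprocess.run(["indexer", "--merge", "main", "delta", "--rotate"],
               check=True)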
So I wrote a daemon that runs in the background on all our search servers and does these indexing jobs as and when they're needed. No need to stuff around with crontab each time we add a customer, it all just happens on its own.
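Stripped right down, that daemon is basically just a loop. A minimal sketch with invented intervals (the real one works out its schedule per customer rather than hardcoding anything):

import subprocess
import time

DELTA_EVERY = 5 * 60         # delta pass every few minutes (invented)
MERGE_AFTER = 24 * 60 * 60   # merge delta into main daily (invented)

def indexer(*args):
    subprocess.run(["indexer", *args, "--rotate"], check=True)

last_merge = time.time()
while True:
    indexer("delta")                         # cheap incremental pass
    if time.time() - last_merge >= MERGE_AFTER:
        indexer("--merge", "main", "delta")  # resets the delta window
        last_merge = time.time()
    time.sleep(DELTA_EVERY)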
Problem is, the indexing service itself apparently decided that every night at 8pm EST was an absolutely perfect time to quietly reindex everything. All indexes. Rebuild from scratch. Go. No mention of this automatic provision anywhere either. It was hidden behind a user on the machine with NO SHELL DEFINED and thus no LOGIN PERMISSION. So when I went trolling around to find out why this index job was happening mysteriously, I DID try to log into that user but was rebuffed, so I figured it was just a daemon user for the service itself, no need to screw around with it any further. WRONG.
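Tangent for anyone who wants to audit their own boxes: a no-shell user can still own cron jobs, and root can read them without ever logging in as that user. A hypothetical sketch:

import pwd
import subprocess

# Users whose shell is nologin/false can't log in interactively, but
# they can still own a crontab, which is exactly where a surprise
# "reindex everything at 8" job can hide. Run this as root.
for user in pwd.getpwall():
    if user.pw_shell in ("/usr/sbin/nologin", "/sbin/nologin", "/bin/false"):
        out = subprocess.run(["crontab", "-l", "-u", user.pw_name],
                             capture_output=True, text=True)
        if out.returncode == 0 and out.stdout.strip():
            print("---", user.pw_name, "---")
            print(out.stdout)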
So that's that. The indexing service decided that a good non-configurable install default would be: "seems like you should reindex everything every night at 8. yep."
Forum load times have gotten worse (for me at least) over the last couple of days. Yesterday I got the "over capacity" page a few times (probably 3-4 over the afternoon), and today it's just been taking a while for pages to load (I haven't seen the "over capacity" page today).
Wasn't paying enough attention to it yesterday to note the time that it happened, but today it's probably been going on for the last half hour or so.
edit: and it's kind of hit or miss... now the load times seem to be back to normal
I seem to be bouncing back and forth between near-instant load times when clicking the forums button from the main page and 20+ second loads. Occasionally an "over capacity" page, even at off-peak hours.
I've been getting slow loads and a lot of over capacity errors yesterday and today.
Also "The "PostController" object does not have a "xMonitoring" method.|PostController|xMonitoring|"
Apologies if this ends up multi-posting; the Post Reply button has been finicky during these periods of slowness.
3DS: 0963-0539-4405
(8, aka slow forum times :P )
Either way, seems good now!
3DS Friend Code: 3110-5393-4113
Steam profile
Blizzard: Pailryder#1101
GoG: https://www.gog.com/u/pailryder
Is the problem something you could explain? I'm curious :P
(I hate the fact that we don't really have a good DB admin on staff, but there's no reason for us to have one at our size.)
There are creams to help with that these days.
Is that like some reverse Icy Hot?