Tuesday, June 13, 2006
Ah, yes. Database upgrade. The dread phrase guaranteed to send a chill into the heart of even the most experienced user. And for FeedBlitz, D-day is Wednesday, when we expect to be unavailable for several hours as we go through this process.
So, what's going on? Why now? Why in the middle of the day?
Here's why. We have stretched the current system to its breaking point. Performance and reliability issues have been cropping up over the last ten days or so, and while they have manifested themselves in different ways, the core problem is the database FeedBlitz is using. The database now needs a great deal of care and feeding, and it limits our ability to keep up with our continued growth.
So we're changing it. Simple, really.
A little background. Despite its growth, FeedBlitz has been running on the very same single server and database software it started out on when it was first made public in August last year. That makes for high performance and low latency - no network communication issues, for example - but poor scalability. The current database - well suited to fast development, but not to high-volume, multi-client applications - neither readily nor efficiently supports multiple remote clients, and it is limited in size. It has other issues too. In short, it no longer gives us any room to maneuver. Time for it to go.
This upgrade is, in fact, two significant changes rolled into one: we're adding a dedicated high-performance database server, and we're upgrading the software to an industrial-strength solution. The two are linked; we need the new database software in order to move the database onto a separate machine. Once that's done, we can add more polling, application, and database servers at will, all (almost) seamlessly and without having to take the service down again. Runs will go faster as we deploy more hardware, and we'll be prepped to handle publishers with subscriber counts measured in the tens and hundreds of thousands.
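For the technically curious, here's a minimal sketch of what that change means in practice. It's purely illustrative: the language, the DbConfig names, and the connect() stand-in are assumptions for the example, not the real FeedBlitz code or stack.

```python
# Illustrative sketch only: the host names and the connect() stand-in below
# are assumptions, not FeedBlitz's real stack or API.
from dataclasses import dataclass

@dataclass
class DbConfig:
    host: str  # "localhost" today; the dedicated database server after the move
    name: str

def connect(cfg: DbConfig) -> None:
    """Stand-in for a real client/server database driver."""
    print(f"connected to '{cfg.name}' on {cfg.host}")

# Before: application, pollers, and database all share one machine, so
# everything implicitly talks to the local database engine.
LOCAL = DbConfig(host="localhost", name="feedblitz")

# After: the database runs on its own server, so any number of polling or
# application servers can point at the same host purely by configuration.
CENTRAL = DbConfig(host="db1.internal", name="feedblitz")

if __name__ == "__main__":
    # Adding capacity later is just a matter of starting more workers with
    # the same CENTRAL config -- no further downtime needed.
    for worker in ("poller-1", "poller-2", "app-1"):
        print(worker, end=": ")
        connect(CENTRAL)
```

The point of the sketch is the second half: once the database is a network service on its own host, adding capacity becomes a configuration exercise rather than another outage.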
So to the size of the maintenance window and its timing. FeedBlitz does most of its work overnight - precisely the time when most significant software upgrades would normally take place. Our "off hours" are therefore the middle of the working day, between nightly runs. That makes things a little more inconvenient, to be sure, but the upgrade must be secured and settled before the next nightly run. The maintenance window will be long enough to migrate the very latest data and test it before we take the new database live.
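For those wondering what actually happens inside such a window, here's a generic cutover sequence of the kind described above. Every step name is an assumption made for illustration; this is not the actual FeedBlitz runbook.

```python
# Generic migration-window sketch; the steps are assumptions for illustration,
# not a description of FeedBlitz's real procedure.

def pause_service() -> None:
    print("window opens: service paused")

def copy_latest_data() -> None:
    # Bulk data can be moved ahead of time; only the most recent changes
    # need to cross during the window itself.
    print("copying the very latest data to the new database server")

def verify_migration() -> bool:
    # e.g. compare record counts and spot-check recent subscriptions
    print("testing the migrated data")
    return True

def go_live_on_new_database() -> None:
    print("pointing the application at the new database server")

def resume_service() -> None:
    print("window closes: service restored")

if __name__ == "__main__":
    pause_service()
    copy_latest_data()
    if verify_migration():
        go_live_on_new_database()
    resume_service()
```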
Once this upgrade is complete, we will start to roll in additional polling resources to spread the work of the main nightly update (and so shorten it). Beyond that, any unplanned maintenance windows should last no more than a few minutes as the system shakes down. If you see any major issues after the service is restored, please let us know at the support address.
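As a rough illustration of how extra pollers shorten the nightly run, here's a minimal sketch that splits feeds across several polling servers by hashing their URLs. The partitioning scheme is an assumption for the example, not a description of the actual FeedBlitz scheduler.

```python
# Minimal sketch of spreading the nightly run across several pollers; the
# hash-based split is an assumed scheme, purely for illustration.
import hashlib

def poller_for(feed_url: str, poller_count: int) -> int:
    """Assign each feed to one of N pollers so the run is divided roughly evenly."""
    digest = hashlib.md5(feed_url.encode("utf-8")).hexdigest()
    return int(digest, 16) % poller_count

if __name__ == "__main__":
    feeds = [
        "http://example.com/rss.xml",
        "http://example.org/atom.xml",
        "http://example.net/feed.xml",
    ]
    pollers = 3  # more hardware => more pollers => a shorter nightly run
    for feed in feeds:
        print(f"{feed} -> poller {poller_for(feed, pollers)}")
```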