I appreciate the diligence in these comments... thanks to those contributing.
I am new to the serving world - I've only been hosting for about 1 year. I appreciate Intel's efforts to reduce physical server count, but I wanted to share my own concerns about such a reduction.
My hosting endeavors use only low-power legacy hardware; thanks to MS WinFLP, I continue to keep these boxes out of the landfill. The Pentium III is still a diligent CPU and meets the needs of my relatively few clients (<10,000).
Given the age of my hardware, I take comfort in having more, rather than fewer, units available - my operations are 24/7, and if a unit goes down for any reason, I have spares waiting to fill in.
Yes, Intel's philosophy that less is more is good to see, but with fewer units, each piece is more critical to the success of the whole. With more units, a single unit failure has less impact overall.
Additionally, I do believe that the weakest link in hosting/serving remains the OS/software deployed. It really doesn't matter if you can reduce unit count 10 to 1 if that 1 unit crashes more often due to OS/software failure. It is inevitable that crashes will be more frequent, as you are running 10X the processes on that single CPU/OS. In my field, uptime is GOLD and downtime is death. Having more independent units, each with a lower workload, is a much more secure picture in my mind - until we truly have a robust operating system - like DOS 5.0, lol!
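To put rough numbers on that intuition, here is a toy model (with made-up failure rates - not measured data) of how much harder it is to keep 80% of capacity online with one big box versus ten small ones:

```python
from math import ceil, comb

def p_outage(units: int, p_fail: float, needed_frac: float) -> float:
    """Probability that the surviving units cannot cover needed_frac of
    total capacity, assuming independent per-unit failure probability p_fail."""
    min_up = ceil(needed_frac * units)  # units required to stay above the target
    # Sum the binomial probabilities of having fewer than min_up units alive.
    return sum(comb(units, k) * (1 - p_fail) ** k * p_fail ** (units - k)
               for k in range(min_up))

# One big box: any failure takes 100% of capacity down.
one_big = p_outage(1, 0.02, 0.8)     # 0.02
# Ten small boxes: it takes 3+ simultaneous failures to drop below 80%.
ten_small = p_outage(10, 0.02, 0.8)  # ~0.0009
```

With these illustrative numbers, the consolidated box misses the 80% target 2% of the time and the ten-box pool well under 0.1% of the time - though, to be fair, a new consolidated box would also be individually more reliable than aging spares.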
I mean, OK, so you renew your servers every 3 years. So what do you do with the old ones? Where do they go, apart from eBay or the skip? Machines that are perfectly good and may last another 5 years, dumped? OK, so now I tell you that in future it's going to cost you to get rid of this kit! Britain is a small country and we are overrun with old electronic junk; we are running out of places to put it, so what do we do with it? It's called recycling, and the company that bought the kit (and the manufacturer) will have to pay for it!! (The EU loves these sorts of initiatives!)
So let's get innovative. Space is cheap and, as most people have said, they tend to over-spec when they buy servers!! Now we have a great technology called virtualization... see where I'm going... Why not virtualise your desktop solution and run it on the 3-year-old servers that you were going to chuck out? The lifetime of your servers will be doubled!! (Working as a pool with shared storage, so what if one breaks!) And next time you refresh your desktops, replace them with thin clients, which will last much longer than your normal desktops and will be cheaper to recycle!! Since most big desktop projects are going virtual anyway, this makes the most sense.
For AMD's point to be true, you have to already be running the latest-generation socket in order to be able to do the drop-in replacements. If you don't, then you have to replace the server anyway to make that even a remote possibility.
For Intel, though, that's true also because while you can virtualize systems and, to some extent, even servers, there's a big push to consolidate multiple systems into a single system.
Even Sun (with their UltraSPARC T-series) pushes that.
But there are "conditions" and environmental factors that need to be in place for both. And I think that that's also important and worthy of mention.
I work in a government environment. We keep our servers until they die; no upgrades or anything new. It can be frustrating to keep adding duct tape and baling wire, but the bosses say: if it works, why replace it?
Since our revenue is consistent and not profit related, it's hard to justify anything else. I just wish I had some newer and reliable hardware to work with, especially with advances in CPUs and virtualization.
I'm really surprised by some of these articles because they don't have anything to do with the actual business world.
Companies don't care about upgrading "old" hardware. They replace hardware as soon as it has depreciated and is financially off the books. Most companies do a 3-year cycle, some do 5. Some do it based off the original maintenance agreements. That's all that matters.
The cost of a maintenance re-up is generally more expensive than buying new hardware. Especially with big ticket items like storage arrays. EMC/NetApp/etc will give you millions in hardware for free these days as long as you buy the software licenses (which you can get for 70%+ off list and even cheaper with "vendor displacement").
I like the idea of enterprise articles at AnandTech, but they really need to be valid for the real-world enterprise to be worth anyone's time.
"Companies don't care about upgrading "old" hardware."
Well, I admit I provide a "too much techie" point of view. But I don't believe in CTOs who completely detach themselves from the tech side of things to "align" with the business goals. There are quite a lot of IT sites that write books about the CTO and the business, to the point where you ask yourself what the "T" stands for. Understand that the IT site of AnandTech will focus on the tech side of enterprise IT. I strongly believe we have too few good tech IT sites.
There must be balance between "using technology as best as you can" and trying to understand the business needs.
"EMC/NetApp/etc will give you millions in hardware for free these days as long as you buy the software licenses "
IMHO, that is just marketing. They give you hardware for free because they want a steady income and they want to bind you to their product line. At the end of the contract, you are probably paying much more in total (and for software features that you never used) than in the classic way of buying hardware. But I agree that it might make a lot of sense in some situations.
But it doesn't make comparing hardware and thinking about lowering the costs of hardware useless.
I've done server support for over 10 years now, so I feel I should chime in.
It _IS_ worth it to renew support/warranties on servers because the cost is usually pretty reasonable for the 4th and sometimes even the 5th year. After that, however, the cost skyrockets, because the manufacturers know that their own cost for replacing components is by then typically a LOT higher.
It is highly debatable whether renewing support/warranties is worth it for such things as storage arrays. Companies like EMC (whom I'm most experienced with) purposely include 3 years of support in the initial price because they don't want the customer to know how much that actually costs compared to the hardware and software itself. Then, after that 3 years is up, they can come to the customer and show them what it would cost to continue the coverage. Most of the time it is just as expensive as upgrading or buying a totally new storage array. This is exactly their intent all along. This drives sales, their commissions, makes the company appear to sell more products, and makes their bottom line look good. And they know that they'll have the exact same discussion in another 3 years.
The problem for companies like EMC that have operated like this for years is that competitors who sell MUCH cheaper products have improved their quality and performance enough that customers have trouble justifying the expensive EMC purchases versus HP, EqualLogic, etc. In fact, I would daresay that the biggest thing driving EMC's business nowadays is the fact that they're already in a LOT of companies, and larger organizations like to have preferred vendors selected. They often have established industry heavyweights like EMC already chosen from years back, and so alternatives just aren't up for discussion.
IT matters. Maybe some companies outsource IT completely and neglect it or throw a bit of money at the problem... but web businesses should invest themselves and integrate everything if they can afford it.
I think most web companies like mine appreciate how technology is an integral part of their business and are ready to inject millions if it helps productivity by a mere 10%.
Power savings are a new concept to me; although I read about it often on AnandTech, I never felt it until now. Intel says "upgrade now and recoup your investment in 8 months"? Well, that gives me something to think about - but we'll probably upgrade whole servers, because that's the IT philosophy: upgrading the whole thing reduces the frequency of upgrades (versus smaller upgrades).
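Intel's claim is easy to sanity-check with a one-line payback calculation; the dollar figures below are made up for illustration, not Intel's:

```python
def payback_months(upgrade_cost: float, monthly_savings: float) -> float:
    """Months until cumulative monthly savings cover the upfront spend."""
    return upgrade_cost / monthly_savings

# Reading the "8 months" claim backwards: a $4,000 replacement server would
# need $500/month in combined power, cooling and consolidation savings.
months = payback_months(4000, 500)  # -> 8.0
```

Whether your servers actually save that much per month is the part worth auditing before believing the slide deck.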
A computer after 3 years is kind of like a used car. Even if you replace the engine, you might have to replace the transmission.
I don't know about you, but I prefer the new. Those old transistors and copper/gold moldings break down with extreme heat and use. If I were to just replace the CPU, what happens in a year or so when the mobo goes? Now I have to find old mobos, only to find there aren't any that meet my specifications. So I have to buy new parts in addition to the sh1tty processor/memory I originally bought.
Servers are a fixed cost; keep them low and manageable. But if you wanna make a big change, control the variable costs. Beyond CPU power usage, people hardly ever mention the AC bill. Running lower-heat hardware reduces this as well - I'm not sure by how much - but there are many little factors like that which add up, such as more space in the room. Less heat may also mean more of it can dissipate into the floor/walls, a two-fold reduction in the AC bill.
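A rough sketch of that AC effect: every watt the server draws also has to be pumped out of the room, so total cost scales with an overhead factor (akin to PUE). The 1.8 overhead and $0.10/kWh below are illustrative guesses, not measured values:

```python
def yearly_energy_cost(server_watts: float, overhead: float = 1.8,
                       usd_per_kwh: float = 0.10) -> float:
    """Yearly electricity cost of a server including cooling/distribution
    overhead. 'overhead' plays the role of PUE; defaults are illustrative."""
    kwh_per_year = server_watts * overhead * 24 * 365 / 1000
    return kwh_per_year * usd_per_kwh

# A 100W reduction at the wall also cuts the AC load proportionally:
saving = yearly_energy_cost(400) - yearly_energy_cost(300)  # ~= $158/year
```

Per box that is indeed small change; it only becomes interesting multiplied across a rack, or when it lets you avoid adding cooling or power capacity.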
----
This being said, I think it's also safe to say that parts seem to last a lot longer than they used to. Most of the parts I deal with seem to last beyond 3x the life of the warranty with mild overclocking. While CPU utilization is a good argument, processing speed is also a good one. I'd like to see a study of server processing performance (not CPU utilization) to determine if there is any noticeable difference to the end user, and then do a SWOT analysis.
Good points, but in commercial IT shops there are a few other factors in play.
1. Depreciation. That allows us to write off the equipment. That typically occurs over a 3 year period in the US and is based on IRS rules. That's a hit to the "net" line on most financials (i.e., it's amortized over the equipment's life). CFO's generally prefer that.
2. When you're buying/leasing equipment virtually all companies will do it for at least 3 years, including the service contract (which if you're leasing is typically required anyway). All the costs, including maintenance, can then be amortized. CFO's like that.
3. If you junk or sell the equipment before it's depreciated, you typically take an immediate financial hit (unless you sold it for more than the remaining depreciation). CFO's don't like that.
Which means that for most companies (at least in the US), 3 years is pretty much the minimum refresh cycle, and vendors cater to that. After that, it depends...
4. After the equipment is depreciated, a service contract becomes a hit to the "expense" line on most financials, which means it's a hit to EBITDA. CFO's generally don't like that.
5. OTOH, if cash/cashflow is more of an issue than how good the financial statement looks that quarter, that same CFO will want to know why you want to spend more cash to replace something that's working, especially if they're looking at a datacenter with fixed costs and much longer amortization period. Unless you're a Really Big shop, those numbers are baked, and power savings are a drop in the bucket compared to the cost.
E.g., you can reduce the rack count by 50%? So what. We're in a 5-year contract for the space; we built out the datacenter and it's going to take 20 years to amortize. You can reduce power consumption by 50%? So what. That'll cost us more in monthly cashflow after the lease upgrades. Not to mention that we won't see the savings unless the equipment stays in service far beyond the point it can be depreciated. Etc., etc., etc. ...
6. That same CFO, and anyone responsible for a P&L, also values predictability. That's why most companies buy their systems with service contracts, may keep those systems in service longer than otherwise, and ultimately pay a higher price than if they replaced it with the latest-and-greatest--they're paying a premium for predictability.
In short, what makes sense is highly variable and depends on a lot of factors. It would do the IT profession well if more people invested time in learning to read a financial statement and understanding the business parameters, rather than simply focusing on speeds and feeds.
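A minimal sketch of the depreciation mechanics behind points 1-3, assuming simple straight-line depreciation over the common 3-year US cycle (actual IRS/MACRS schedules are more involved, and the dollar figures are made up):

```python
def book_value(cost: float, years_in_service: float, dep_years: float = 3) -> float:
    """Straight-line book value; the 3-year default mirrors the common US cycle."""
    return max(cost * (1 - years_in_service / dep_years), 0.0)

def early_disposal_hit(cost: float, years_in_service: float,
                       sale_price: float, dep_years: float = 3) -> float:
    """Immediate financial hit from disposing before full depreciation
    (point 3): the remaining book value, less whatever the sale recovers."""
    return max(book_value(cost, years_in_service, dep_years) - sale_price, 0.0)

# Junking a $9,000 server after 2 of its 3 depreciation years, sold for $1,000:
hit = early_disposal_hit(9000, 2, 1000)  # ~$2,000 straight off the net line
```

Which is exactly why the CFO shrugs at a mid-cycle upgrade pitch: the remaining book value of the old gear shows up immediately, while the promised savings dribble in monthly.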
I've given some counterpoints, but let me thank you for your excellent feedback! This is the kind of discussion that will enlighten the it.anandtech.com community.
Thanks! Glad to be able to constructively contribute to the discussion. To clarify some of the earlier points...
Most CFO's I've encountered have a pretty good understanding of technology, and will work very hard to try and make things work. However, realize that they have a pretty rigid set of very visible metrics they are judged by (much more rigid and visible than most people), and often end up stuck mediating between competing interests. Like most people, they're also trying to execute according to a plan. And while situations and plans change, knowing what you have to juggle with and how much can be juggled without throwing everything out of kilter is important.
That's one reason predictability is important--but that doesn't necessarily mean rigidity. Another reason is that predictability is a first-order indication of whether you know what you're doing and can execute to a plan--whether it's cost, schedule or defects. Where IT can help the CFO is to better understand how to juggle the expenses associated with various parts, and the tradeoffs. That starts with understanding how various parts of the IT budget fit into the financial equation, the tradeoffs, and ultimately how it all shows up in the financial statements.
Much of this depends on a company's financial priorities at any given point--and those priorities are likely to change over time. E.g., at one company EBITDA was the priority; after, net was the priority; after that cashflow was the priority. That was a company with a fairly heavy up-front capital infusion. At another company cashflow was the priority (few investors and not a lot of cash cushion). Those priorities are also typically related to where a company is in its lifecycle; specifically, the exit strategy. For companies looking at acquisition as the exit strategy, how the company is valued will likely make a big difference (revenue? gross profit? operating profit? net profit?).
Whether extending equipment life makes sense depends in part on those priorities. A CFO may be willing to take a hit to net if it improves cashflow. OTOH, a company with good cashflow may be looking to trade some of that for an improvement to gross or net. This is also where virtualization can make a big difference, as the options for extending equipment life are considerably greater. (Whether appropriate is another matter, and is dependent on the organization.) E.g., instead of a bunch of discrete systems (X server, Y server, Z server), you have a pool and can operate more like a utility. Some of the systems kept in service might not be the most efficient, but in many cases they are still cost-effective. (NB: Google's MO is an extreme example of this.)
Depreciation doesn't necessarily make extending the life of equipment unattractive per se. However, the rules tend to have an influence. E.g., maintenance contracts tend to get more expensive over time not simply due to equipment age, but because of decreasing demand that is arguably a result of those accounting rules; in many cases maintenance is unavailable or prohibitively expensive beyond 5 years. However, if the IRS decided tomorrow that the maximum depreciation rate for IT was 5 years, I'd bet you'd see maintenance available for most equipment for at least 7 years. That doesn't mean the equipment isn't useful after that time, but no company I know of has any hardware or software running in a critical capacity that isn't under maintenance - and when maintenance is no longer available, it gets dumped or sent off to be a lab rat.
That said, big organizations tend to have more options. They can do self-maintenance. They can negotiate maintenance or lease deals with a lot more options. Most SMB's don't have those options. You're in a 3-year lease for that equipment? Then while you may acquire new equipment, I guarantee it's not going to replace the existing equipment until the lease on the current equipment is up. Unless you're exceptionally strapped for power or space and paying exorbitant rates, the lease payments (and the net hit) of those systems now sitting unused in the closet will dwarf any savings. (Which is why Intel's ROI and 9-in-1 claims are ultimately hollow even if true, but that's another subject.)
In short, this isn't magic... basic calculations and numbers. However, understanding what those numbers mean to different people, and the priorities and tradeoffs--as in most problems--is the trick. But this is not fundamentally different than many problems engineers deal with every day.
First, I admit that I know very little about corporate financials. But I am learning the basics.
"Depreciation. That allows us to write off the equipment. All the costs, including maintenance, can then be amortized. CFO's like that. "
Agreed. But does writing off equipment make extending the life of equipment unattractive? AFAIK, writing off means you would like to lower the result of the company and pay less tax. But there are probably limits to its usefulness? (In most European countries this is the case IIRC; I don't know about the IRS.)
"power savings are a drop in the bucket compared to the cost."
Not if you need to install another airco unit or more power lines because you are hitting some limits :-).
"CFO values predictability"
OK. But it all sounds like a very static, rigid model. Just because it looks good in the accountants' books, is it really good for the company? Without generalizing: the CFO, just like the CTO, should be there to serve the business goals and not the other way around.
" more people invested time in learning to read a financial statement and understanding the business parameters, rather than simply focusing on speeds and feeds."
True. Some basic knowledge helps. But the same is true for the CFO :-)
Instead of replacing entire units, virtualization makes upgrading existing units more feasible and justifiable.
The configuration of our last cycle was chosen with an eye towards a mid-life CPU/memory upgrade, with rolling upgrades... move the workload off those servers, upgrade them, then put them back in the pool. That is much more difficult and time-consuming without virtualization. With virtualization, the lifespan of a unit can also be extended... OK, so it's too old and slow to run our OLTP system, but there may still be workloads we can run on it.
That said, what makes sense depends a lot on other factors, including space and how much other than the CPU/memory is part of the equation. E.g., many IT environments have configurations which look very similar to HPC environments: (1) boxes with little more than CPU, memory and network interfaces; (2) network boxes; (3) SAN boxes. In those environments, the difference between upgrading vs. replacing may be much smaller.
Consider that a Xeon E5504 at $227, with HT and Turbo Boost disabled and an IMC castrated to 800MHz, can have the same performance as the Opteron 2384 at $700. You do the math.
AMD foolishly thinks that a 50-dollar cut to selective "channel partners" will tip the balance toward Opteron upgrades. A flat price reduction only works at the low end, making the Opteron 2376 the only CPU worth buying ($175 - $50 = $125). At the high end - Opteron 2384/2387/2389 - it is hardly a 5-8% price reduction.
I don't know that a price reduction this small will prevent people from jumping to Nehalem-EP let alone the upgradable 32nm Westmere. There are several misconceptions AMD wants people to believe:
1. Nehalem-EP platform is more expensive.
I say BS. A 2S Nehalem-EP board can be found for as little as 250 dollars now, from respectable vendors like Asus and Tyan (Asus Z8NA-D6C, Tyan S7002). An AMD 2S Opteron board is above 300 at most vendors. At the motherboard level, it is about the same.
2. DDR3 ram is more expensive.
Only for 4GB DIMMs. Yes, DDR3 density hasn't caught up with DDR2 yet, but one of the design decisions Intel got right is supporting unregistered DDR3 ECC RAM, or UDIMMs. 2GB DDR3 UDIMMs are selling for 30 dollars, effectively at price parity with the 2GB DDR2 REG ECC RAM that the Opteron uses. A 2S Nehalem can support up to 24GB of UDIMMs for as low as $360 (30*12). If you need more RAM for a database, get Dunningtons, which will get you 128GB of RAM (4GB*32) cheap, or 256GB (8GB*32) if you can pay for it.
3. DDR3 uses more power.
BS. DDR3 running at 800MHz - the same speed as DDR2-800 - uses 15% less power. The extra power headroom allows DDR3 to scale to 1333MHz, something DDR2 can't do reliably.
The current pathetic 50-dollar price cut by AMD still doesn't address the fundamental problem: Intel's lowest-grade, cut-down Nehalem can be as fast as AMD's highest-end Opterons selling for 3 times the price. Even at the same performance, remember that the E5504 is an 80W TDP part while the Opteron 2384 is a 115W TDP part, so at equal performance the E5504 draws roughly 30% less power - a clear performance/watt advantage over the Opteron 2384, let alone performance/watt per dollar.
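Taking the commenter's premise of equal performance and the listed TDPs and prices at face value, the arithmetic works out as follows (whether you call it ~30% less power or ~44% more performance per watt depends on which side you divide by):

```python
def value_metrics(perf: float, tdp_w: float, price: float) -> dict:
    """Normalize a CPU to perf-per-watt and perf-per-dollar figures."""
    return {"perf_per_watt": perf / tdp_w, "perf_per_dollar": perf / price}

# Performance is normalized to 1.0 for both parts, per the equal-performance claim.
e5504   = value_metrics(perf=1.0, tdp_w=80,  price=227)
opt2384 = value_metrics(perf=1.0, tdp_w=115, price=700)

watt_advantage  = e5504["perf_per_watt"] / opt2384["perf_per_watt"]      # 115/80, ~1.44x
price_advantage = e5504["perf_per_dollar"] / opt2384["perf_per_dollar"]  # 700/227, ~3.08x
```

Of course the whole comparison hinges on the equal-performance premise, which depends heavily on workload.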
Depends on how often you're upgrading. Working in the public sector, we can't afford to upgrade often, so when the time comes, too much has changed. And often, cost/performance is way in replacement's favor. I'm currently replacing 3-year-old, fully RAM-populated, 2-core Socket 940 rackmounts running VMware with half the number of half-RAM-populated, 4-core Socket F blades. Still keeping my Socket 604s going with RAM upgrades, though.
You missed the third and most important reason: the warranty. Nothing is more important in a machine room than equipment being under easily manageable warranties. Maybe you can get your hardware vendor to replace bits of the machine and extend the warranty of the old parts, but most likely not. And if you do, you'll end up with machines covered by two or more warranties. That's a big mistake. Full replacements every X number of years keep machines under carefully and easily managed warranties.
It is a good point. Still, the impact might not be so high, depending on your situation. Most warranties are 3 years, so if you extend the life of your server with a CPU/memory upgrade, the warranty is over. However, it is a small risk, as decent manufacturers guarantee spare parts for a period of 5 years.
In that case, it is really only a dead motherboard that hurts: replacing the motherboard is quite a bit of work and might steer you towards a new server anyway. All other problems, like a dead disk or PSU, are easily and quickly fixed. So IMHO, it pays off to run for a few years without warranties (they probably won't cover "normal" wear anyway).
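A back-of-the-envelope expected-value sketch of that "run without warranty" bet; the failure rates and part costs below are illustrative guesses, not vendor or field-failure data:

```python
def self_insure_cost(parts) -> float:
    """Expected out-of-warranty repair cost per server per year, given
    (annual failure probability, replacement cost) pairs."""
    return sum(p * cost for p, cost in parts)

parts = [(0.03, 150),   # disk
         (0.02, 120),   # PSU
         (0.01, 900)]   # motherboard, incl. the labour that makes it painful
expected = self_insure_cost(parts)  # ~$16 per server per year
```

Even if the guesses above are off by a factor of a few, the expected repair cost tends to land well below typical per-year warranty-extension quotes - which is the whole argument, provided you can tolerate the occasional bad draw.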
For example, I've supported some small businesses where the IT Dept was competent and even built some servers from scratch. Warranties don't add up to much in those environments, except 4-hour replacement parts; those are pretty nice.
For a larger environment, I wouldn't be singing the same tune. I'd be using vendor supported everything. Ensuring responsibility for a crash falls directly to Dell or HP or [anyone but me].
Very true, we do things in lots of 300+. Nothing could make me replace/upgrade the memory/cpu in 300 servers. Hell, in one project we've got 3 boxes at each of 153 sites (some of which are not overly easy to get to) with a project life span of 8 years. For this sort of project we just buy 15% extra hardware and provide our own warranty.
It's very hard to make upgrades cost effective in that sort of environment. Not to mention the trouble you'll get in if something is down for longer than the predefined limit and you have to admit you cannot blame it on Dell.
But, I can see how a small business with a competent guy might get away with doing in house upgrades. I'd still be very nervous about that guy leaving. A truck number of 1 is bad.
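The "buy 15% extra and provide our own warranty" approach above can be sized with a simple binomial model; the 10% lifetime failure rate below is a guess for illustration, not a figure from the project:

```python
from math import comb

def p_spares_cover(fleet: int, spares: int, p_fail_life: float) -> float:
    """Probability the spares pool covers every lifetime failure, assuming
    independent per-unit failure probability over the whole project life."""
    return sum(comb(fleet, k) * p_fail_life ** k * (1 - p_fail_life) ** (fleet - k)
               for k in range(spares + 1))

# 459 boxes (3 x 153 sites), 15% spares, guessed 10% lifetime failure rate:
coverage = p_spares_cover(459, 69, 0.10)  # well above 0.999
```

With those assumptions, ~46 failures are expected over the project life, so 69 spares is a comfortable 3+ standard deviations of headroom - which is roughly what a 15% rule of thumb buys you.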
In my former role I was the (very hands-on) IT manager for a company with approx $30m turnover and around 90 employees (4 of which were in IT, including me), and 21 servers.
As our servers tended to be task-specific, we didn't generally upgrade them unless we had a need to. We took the view that over-specifying hardware was the way to go, so we didn't generally rebuild kit unless we were looking for something new. That said, we replaced a number of aging boxes during the 4 years I was there, and upgraded 6 of the machines due to performance issues with a VOIP phone system and a SQL Server DB. Those were simple single-to-dual CPU upgrades and RAM bumps in the first instance, and a simple RAM bump in the second.
Aside from maybe the AMD Opteron 2xxx/8xxx series as mentioned, the platforms themselves change too quickly, so you cannot get all the bang for your buck that would make upgrading CPUs worthwhile.
Note: This next section excludes Virtualization servers:
Part of this, I think, comes from the fact that servers often get over-spec'd to make sure there is headroom, and then a few years later are not yet so taxed that they even need upgrading. By the time you're ready to upgrade the CPU in a machine, you can no longer get the parts or, as already pointed out, you can only upgrade from a 2.5 to a 3.0.
I think the last 2 generations of Xeons were a pretty good example of that. Now, if Intel really wants to see people upgrade, they should continue releasing NEW CPUs for older platforms after new platforms have arrived.
For instance, now that the Xeon 55xx is out, we're barely going to see any further developments on the 54xx series. But if Intel put some of their new knowledge and design into that old platform, you could see either faster chips in the same thermal envelope or similarly spec'd chips with greatly reduced thermals.
Memory IS, in my opinion, the single most upgraded component in a server. Memory is dirt cheap right now for DDR2, and DDR3 isn't far behind. The caveat to that is bleeding-edge memory like 8GB FBDIMMs: an 8GB DDR2 FBDIMM in the HP server world costs 4x the price of a 4GB one rather than only 2x.
Disks can be upgraded in a server, but I find the ROI there is even worse than for CPU upgrades, unless you're making the jump to SSDs. Disk speeds increase at a snail's pace compared to other technologies in the server.
Disk Expansion is a completely different animal and happens quite frequently for File/DB servers.
jtleon - Friday, May 1, 2009 - link
I appreciate the diligence in these comments....thanx to those contributing.I am new to the serving world - only been hosting now for about 1 year. I appreciate Intel's efforts to reduce physical server count, but I wanted to share my own insecurities about such a reduction.
My hosting endeavors only use low power legacy hardware, thanks to MS WinFLP, I continue to keep these boxes out of the landfill. Pentium III is still a diligent CPU and meets the needs of my relatively few clients (<10000).
Given the age of my hardware, I take comfort in having more, rather than less units available - my operations are 24/7, and if a unit goes down for any reason, I have spares in waiting to fill in.
Yes Intel's philosophy that less is more - is good to see, but with less, each piece is more critical to the success of the whole. With more units, a single unit failure has less impact overall.
Additionally, I do believe that the weakest link in hosting/serving remains to be the OS/software deployed. It really doesn't matter if you can reduce unit count 10 to 1, if that 1 unit crashes more often due to OS/software failure. It is inevitable that the crash will be more frequent, as you are running 10X the processes on that single CPU/OS. In my field, uptime is GOLD, and downtime is death. Having more independent units with lower workload each is much more secure picture in my mind - until we truly have a robust operating system - like DOS 5.0, lol!
Thanx again to those contributing.
jtleon
BritishBulldog - Thursday, April 16, 2009 - link
I mean ok so you renew your servers every 3 years. So what do you do with the old ones? Where do they go apart from ebay or skip? Machines that are perfectly good and may last another 5 years dumped? Ok so now I tell you that in future its going to cost you to get rid of this kit! Britain is a small country and here we are over run with old electronic junk and we are running out of places to put it, so what do we do with it? It’s called recycling and the company that bought the kit (and the manufacturer) will have to pay to get it!! (EU loves these sorts of initiatives)!!So let’s get innovative. Space is cheap and as most people have said they tend to over spec when they buy servers!! Now we have a great technology called virtualization... see where I'm going... Why not virtualise your desktop solution and run it on the 3 year old servers that you were going to chuck out? The lifetime of your servers will be doubled!!(Working as a pool with shared storage,so what if one breaks!!) And next time you refresh your desktops, replace with thin clients..Which will last much longer than your normal desktops and will be cheaper to recycle!! Since most big desktop projects are going virtual anyway this makes the most sense.
alpha754293 - Sunday, April 12, 2009 - link
Both AMD and Intel have valid points.For AMD's point to be true, you have to already running the latest generation socket in order to be able to do the drop-in replacements. If you don't, then you have to replace the server anyways in order to make that even a remote possibility.
For Intel though, that's true also because while you can virtualize systems, and perhaps to even some extent, servers, there's a big push to consolidate multiple systems into a single system.
Even Sun (with their UltraSPARC T-series) pushes that.
But there are "conditions" and environmental factors that need to be in place for both. And I think that that's also important and worthy of mention.
ssampier - Sunday, April 12, 2009 - link
I work in a government environment. We keep our servers until they die; no upgrades or anything new. It can be frustrating to keep adding duct tape and bailing wire, but the bosses say, if it's works, why replace?Since our revenue is consistent and not profit related, it's hard to justify anything else. I just wish I had some newer and reliable hardware to work with, especially with advances in CPUs and virtualization.
mlambert - Saturday, April 11, 2009 - link
I'm really surprised by some of these articles because they don't have anything to do with the actual business world.Companies don't care about upgrading "old" hardware. They replace hardware as soon as it has depreciated and financially off the books. Most companies do a 3 year cycle, some do 5. Some do it based off the original maintenance agreements. Thats all that matters.
The cost of a maintenance re-up is generally more expensive than buying new hardware. Especially with big ticket items like storage arrays. EMC/NetApp/etc will give you millions in hardware for free these days as long as you buy the software licenses (which you can get for 70%+ off list and even cheaper with "vendor displacement").
I like the idea of enterprise articles at AnandTech but they really need to be valid for the real world enterprise to be worth anyones time.
JohanAnandtech - Saturday, April 11, 2009 - link
"Companies don't care about upgrading "old" hardware."Well, I admit I provide a "too much techie" point of view. But I don't believe in the CTO's that completely detach themselves from the tech side of things to "align" with the business goals. There are quite a lot of IT sites that write books about the CTO and the business, to a point where you ask yourself what the "T" stands for. Understand that the IT site of Anandtech will focus on the tech site of enterprise IT. I strongly believe we have too few good tech IT sites.
There must be balance between "using technology as best as you can" and trying to understand the business needs.
"EMC/NetApp/etc will give you millions in hardware for free these days as long as you buy the software licenses "
IMHO, that is just marketing. They give you hardware for free because they want a steady income and they want to bind you to their product line. At the end of the contract, you are probably paying much more in total (and for software features you never used) than with the classic way of buying hardware. But I agree that it might make a lot of sense in some situations.
But that doesn't make comparing hardware and thinking about lowering hardware costs useless.
Ninevah - Friday, May 22, 2009 - link
I've done server support for over 10 years now, so I feel I should chime in.
It _IS_ worth it to renew support/warranties on servers 'cuz the cost is usually pretty reasonable for the 4th and sometimes even 5th year. After that, however, the cost skyrockets because the manufacturers know that the cost to them of replacing components is typically a LOT higher.
It is highly debatable whether renewing support/warranties is worth it for such things as storage arrays. Companies like EMC (whom I'm most experienced with) purposely include 3 years of support in the initial price because they don't want the customer to know how much that actually costs compared to the hardware and software itself. Then, after that 3 years is up, they can come to the customer and show them what it would cost to continue the coverage. Most of the time it is just as expensive as upgrading or buying a totally new storage array. This is exactly their intent all along. This drives sales, their commissions, makes the company appear to sell more products, and makes their bottom line look good. And they know that they'll have the exact same discussion in another 3 years.
The problem for companies like EMC that have operated like this for years is that their competitors who sell MUCH cheaper products have been improving their quality and performance enough that customers have trouble justifying the expensive EMC purchases versus HP, Equallogic, etc. In fact, I would daresay that the biggest things driving EMC's business nowadays is the fact that they're already in a LOT of companies and larger organizations like to have preferred vendors selected. They often have established, industry heavyweights like EMC already chosen from years back and so alternatives just aren't up for discussion.
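To make the renewal-versus-replacement dynamic above concrete, here is a back-of-the-envelope sketch in Python. Every dollar figure is invented for illustration and is not from the comment; the point is only the shape of the comparison.

```python
# Hypothetical numbers: a year-4+ support quote for an aging array vs.
# a new array whose price bundles 3 years of support (the EMC pattern
# described above).
renewal_per_year = 55_000   # assumed post-warranty support quote, per year
new_array_price = 180_000   # assumed new array, 3 years of support included

renew_3yr = renewal_per_year * 3  # cost to keep the old array covered 3 more years
print(renew_3yr)                  # 165000 - nearly the price of replacing outright
```

Once renewing costs almost as much as replacing, the sales pitch writes itself, which is exactly the dynamic the comment describes.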
Loknar - Monday, April 27, 2009 - link
I agree, IT matters. Maybe some companies outsource IT completely and neglect it, or just throw a bit of money at the problem, but web businesses should invest themselves and integrate everything if they can afford it.
I think most web companies like mine appreciate how technology is an integral part of their business and are ready to inject millions if it helps productivity by a mere 10%.
Power savings are a new concept to me; although I read about it often on AnandTech, I never felt it until now. Intel says "upgrade now and recoup your investment in 8 months"? Well, that gives me something to think about - but we'll probably replace the whole servers, because that's the IT philosophy: replacing the whole thing reduces the frequency of upgrades (versus smaller, more frequent upgrades).
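Intel's "recoup in 8 months" pitch is just a payback-period calculation. A minimal sketch, with made-up numbers - the $20,000 cost and $2,500/month savings are assumptions for illustration, not Intel's figures:

```python
def payback_months(upgrade_cost, monthly_savings):
    """Months until cumulative savings cover the up-front cost."""
    return upgrade_cost / monthly_savings

# Hypothetical consolidation: $20,000 spent, $2,500/month saved on
# power, cooling, and licenses - an 8-month payback like Intel claims.
print(payback_months(20_000, 2_500))  # 8.0
```

The claim stands or falls on whether the monthly savings are real, which is the part worth auditing.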
vol7ron - Thursday, April 9, 2009 - link
A computer after 3 years is kind of like a used car: even if you replace the engine, you might have to replace the transmission.
I don't know about you, but I prefer the new. Those old transistors and copper/gold molding break down with extreme heat and use. If I were to just replace the CPU, what happens in a year or so when the mobo goes? Now I have to find old mobos, only to discover there aren't any that meet my specifications. So I have to buy new parts in addition to the sh1tty processor/memory I originally bought.
Servers are a fixed cost; keep them low and manageable. But if you wanna make a big change, control the variable costs. Beyond CPU power usage, people hardly ever mention the AC bill. Using lower-heat parts reduces this as well - I'm not sure by how much - but there are many little factors like that which add up, such as more space in the room. Less heat may also mean more of it can dissipate into the floor/walls, a two-fold reduction in the AC bill.
----
This being said, I think it's also safe to say that parts seem to last a lot longer than they used to. Most of the parts I deal with seem to last beyond 3x the life of the warranty with mild overclocking. While CPU utilization is a good argument, processing speed is also a good one. I'd like to see a study on server processing performance (not CPU utilization) to determine whether there is any noticeable difference to the end user, and then do a SWOT analysis.
has407 - Friday, April 10, 2009 - link
Good points, but in commercial IT shops there are a few other factors in play.
1. Depreciation. That allows us to write off the equipment. That typically occurs over a 3-year period in the US and is based on IRS rules. It's a hit to the "net" line on most financials (i.e., it's amortized over the equipment's life). CFO's generally prefer that.
2. When you're buying/leasing equipment virtually all companies will do it for at least 3 years, including the service contract (which if you're leasing is typically required anyway). All the costs, including maintenance, can then be amortized. CFO's like that.
3. If you junk or sell the equipment before it's depreciated, you typically take an immediate financial hit (unless you sold it for more than the remaining depreciation). CFO's don't like that.
Which means that for most companies (at least in the US), 3 years is pretty much the minimum refresh cycle, and vendors cater to that. After that, it depends...
4. After the equipment is depreciated, a service contract becomes a hit to the "expense" line on most financials, which means it's a hit to EBITDA. CFO's generally don't like that.
5. OTOH, if cash/cashflow is more of an issue than how good the financial statement looks that quarter, that same CFO will want to know why you want to spend more cash to replace something that's working, especially if they're looking at a datacenter with fixed costs and a much longer amortization period. Unless you're a Really Big shop, those numbers are baked, and power savings are a drop in the bucket compared to the cost.
E.g., You can reduce the rack count by 50%? So what. We're in a 5-year contract for the space; we built out the datacenter and it's going to take 20 years to amortize. You can reduce power consumption by 50%? So what. That'll cost us more in monthly cashflow after the lease upgrades. Not to mention that we won't see that unless the equipment is in service far beyond the point it can be depreciated. Etc., etc., etc. ...
6. That same CFO, and anyone responsible for a P&L, also values predictability. That's why most companies buy their systems with service contracts, may keep those systems in service longer than otherwise, and ultimately pay a higher price than if they replaced it with the latest-and-greatest--they're paying a premium for predictability.
In short, what makes sense is highly variable and depends on a lot of factors. It would do the IT profession well if more people invested time in learning to read a financial statement and understanding the business parameters, rather than simply focusing on speeds and feeds.
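The depreciation mechanics in points 1-3 can be sketched in a few lines of Python. The straight-line method and the $12,000 / 3-year example are illustrative assumptions, not figures from the comment:

```python
def straight_line_book_value(cost, life_years, age_years):
    """Remaining book value under straight-line depreciation."""
    depreciated = cost * min(age_years, life_years) / life_years
    return cost - depreciated

def early_retirement_hit(cost, life_years, age_years, resale_price):
    """Immediate write-off if gear is retired before fully depreciated.
    Positive = a loss hitting the financial statement now (point 3)."""
    return straight_line_book_value(cost, life_years, age_years) - resale_price

# A hypothetical $12,000 server on a 3-year schedule, retired after 2 years:
print(straight_line_book_value(12000, 3, 2))    # 4000.0 still on the books
print(early_retirement_hit(12000, 3, 2, 1500))  # 2500.0 immediate hit
```

Which is why the 3-year refresh cycle is sticky: retire earlier and the remaining book value lands on the financials all at once.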
JohanAnandtech - Saturday, April 11, 2009 - link
I've given some counterpoints, but let me thank you for your excellent feedback! This is the kind of discussion that will enlighten the it.anandtech.com community.
has407 - Sunday, April 12, 2009 - link
Thanks! Glad to be able to contribute constructively to the discussion. To clarify some of the earlier points...
Most CFO's I've encountered have a pretty good understanding of technology, and will work very hard to try to make things work. However, realize that they have a pretty rigid set of very visible metrics they are judged by (much more rigid and visible than most people's), and often end up stuck mediating between competing interests. Like most people, they're also trying to execute according to a plan. And while situations and plans change, knowing what you have to juggle and how much can be juggled without throwing everything out of kilter is important.
That's one reason predictability is important--but that doesn't necessarily mean rigidity. Another reason is that predictability is a first-order indication of whether you know what you're doing and can execute to a plan--whether it's cost, schedule or defects. Where IT can help the CFO is to better understand how to juggle the expenses associated with various parts, and the tradeoffs. That starts with understanding how various parts of the IT budget fit into the financial equation, the tradeoffs, and ultimately how it all shows up in the financial statements.
Much of this depends on a company's financial priorities at any given point--and those priorities are likely to change over time. E.g., at one company EBITDA was the priority; after, net was the priority; after that cashflow was the priority. That was a company with a fairly heavy up-front capital infusion. At another company cashflow was the priority (few investors and not a lot of cash cushion). Those priorities are also typically related to where a company is in its lifecycle; specifically, the exit strategy. For companies looking at acquisition as the exit strategy, how the company is valued will likely make a big difference (revenue? gross profit? operating profit? net profit?).
Whether extending equipment life makes sense depends in part on those priorities. A CFO may be willing to take a hit to net if it improves cashflow. OTOH, a company with good cashflow may be looking to trade some of that for an improvement to gross or net. This is also where virtualization can make a big difference, as the options for extending equipment life are considerably greater. (Whether appropriate is another matter, and is dependent on the organization.) E.g., instead of a bunch of discrete systems (X server, Y server, Z server), you have a pool and can operate more like a utility. Some of the systems kept in service might not be the most efficient, but in many cases they are still cost-effective. (NB: Google's MO is an extreme example of this.)
Depreciation doesn't necessarily make extending the life of equipment unattractive per se. However, the rules tend to have an influence. E.g., maintenance contracts tend to get more expensive over time not simply due to equipment age, but because of decreasing demand that is arguably a result of those accounting rules; in many cases maintenance is unavailable or prohibitively expensive beyond 5 years. However, if the IRS decided tomorrow that the maximum depreciation period for IT was 5 years, I'd bet you'd see maintenance available for most equipment for at least 7 years. That doesn't mean the equipment isn't useful after that time, but no company I know of has any hardware or software running in a critical capacity that isn't under maintenance - and when maintenance is no longer available, it gets dumped or sent off to be a lab rat.
That said, big organizations tend to have more options. They can do self-maintenance. They can negotiate maintenance or lease deals with a lot more options. Most SMBs don't have those options. You're in a 3-year lease for that equipment? Then while you may acquire new equipment, I guarantee it's not going to replace the existing equipment until the lease on the current equipment is up. Unless you're exceptionally strapped for power or space and paying exorbitant rates, the lease payments (and the net hit) of those systems now sitting unused in the closet will dwarf any savings. (And thus Intel's ROI and 9-in-1 claims ring hollow even if true, but that's another subject.)
In short, this isn't magic... basic calculations and numbers. However, understanding what those numbers mean to different people, and the priorities and tradeoffs--as in most problems--is the trick. But this is not fundamentally different than many problems engineers deal with every day.
JohanAnandtech - Saturday, April 11, 2009 - link
First, I admit that I know very little about corporate financials. But I'm learning the basics.
"Depreciation. That allows us to write off the equipment. All the costs, including maintenance, can then be amortized. CFO's like that."
Agreed. But does writing off equipment make extending its life unattractive? AFAIK, writing off means you want to lower the company's result and pay less in taxes. But there are probably limits to its usefulness? (In most European countries this is the case IIRC; I don't know about the IRS.)
"power savings are a drop in the bucket compared to the cost."
Not if you need to install another air conditioner or more power lines because you are hitting some limits :-).
"CFO values predictability"
OK. But it all sounds like a very static, rigid model. Just because it looks good in the accountants' books, is it really good for the company? Without generalizing: the CFO should, just like the CTO, be there to serve the business goals and not the other way around.
" more people invested time in learning to read a financial statement and understanding the business parameters, rather than simply focusing on speeds and feeds."
True. Some basic knowledge helps. But the same is true for the CFO :-)
mlambert - Saturday, April 11, 2009 - link
I should've read your post before replying with a new one. You hit most of the key points fairly accurately.
has407 - Thursday, April 9, 2009 - link
Instead of replacing entire units, virtualization makes upgrading existing units more feasible and justifiable.
The configurations of our last cycle were chosen with an eye towards a mid-life CPU/memory upgrade, with rolling upgrades: move the workload off those servers, upgrade them, then put them back in the pool. That is much more difficult and time-consuming without virtualization. With virtualization the lifespan of a unit can also be extended... OK, so it's too old and slow to run our OLTP system, but there may still be workloads we can run on it.
That said, what makes sense depends a lot on other factors, including space and how much other than the CPU/memory is part of the equation. E.g., many IT environments have configurations which look very similar to HPC environments: (1) boxes with little more than CPU, memory and network interfaces; (2) network boxes; (3) SAN boxes. In those environments, the difference between upgrading vs. replacing may be much smaller.
tshen83 - Wednesday, April 8, 2009 - link
That a Xeon E5504 at $227, with broken HT, broken Turbo Boost, and a castrated 800MHz IMC, can have the same performance as the $700 Opteron 2384 says it all. You do the math.
AMD foolishly thinks that a 50-dollar cut to selective "channel partners" will tip the balance toward Opteron upgrades. A flat price reduction only works at the low end, making the Opteron 2376 the only CPU worth buying (175 - 50 = 125 dollars). At the high end (Opteron 2384/2387/2389), it is hardly a 5-8% price reduction.
I don't know that a price reduction this small will prevent people from jumping to Nehalem-EP let alone the upgradable 32nm Westmere. There are several misconceptions AMD wants people to believe:
1. Nehalem-EP platform is more expensive.
I say BS. A 2S Nehalem-EP board can be found for as little as 250 dollars now, from respectable vendors like Asus and Tyan (Asus Z8NA-D6C, Tyan S7002). An AMD 2S Opteron board is above $300 at most vendors. At the motherboard level, it is about the same.
2. DDR3 ram is more expensive.
Only for 4GB DIMMs. Yes, DDR3 density hasn't caught up with DDR2 yet, but one of the design decisions Intel got right was to support unregistered DDR3 ECC RAM, or UDIMMs. 2GB DDR3 UDIMMs are selling for 30 dollars, effectively at price parity with the 2GB DDR2 registered ECC RAM that the Opteron uses. A 2S Nehalem can support up to 24GB of UDIMMs for as low as $360 (30 * 12). If you need more RAM for a database, get Dunningtons, which will get you 128GB of RAM (4GB * 32) cheap, or 256GB (8GB * 32) if you can pay for it.
3. DDR3 uses more power.
BS. At the same speed, DDR3-800 uses 15% less power than DDR2-800. The power headroom at higher speeds allows DDR3 to scale to 1333MHz, something DDR2 can't do reliably.
The current pathetic 50-dollar price cut by AMD still doesn't address the fundamental problem that Intel's lowest-grade, feature-stripped Nehalem can be as fast as AMD's highest-end Opterons selling for 3 times the price. Even at the same performance, remember that the E5504 is an 80W TDP part while the Opteron 2384 is a 115W TDP part: at equal performance that is roughly 30% less power, or about 44% better performance/watt. Let alone performance/watt per dollar.
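Working the arithmetic behind point 2 and the TDP comparison, using the prices and TDPs quoted in the comment (a sketch; the assumption that all 12 DIMM slots of a 2S board are populated is mine):

```python
# Point 2: 2GB DDR3 UDIMMs at $30 each, 12 slots on a 2S Nehalem-EP board.
dimm_price, dimm_gb, slots = 30, 2, 12
print(dimm_gb * slots, dimm_price * slots)   # 24 GB for $360

# TDP comparison: equal performance, different power envelopes.
e5504_tdp, opteron_2384_tdp = 80, 115        # watts, from the comment
power_saving = (opteron_2384_tdp - e5504_tdp) / opteron_2384_tdp
perf_per_watt_gain = opteron_2384_tdp / e5504_tdp - 1
print(f"{power_saving:.0%} less power, {perf_per_watt_gain:.0%} better perf/W")
# 30% less power, 44% better perf/W
```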
ko1391401 - Tuesday, April 7, 2009 - link
Depends on how often you're upgrading. Working in the public sector, we can't afford to upgrade often, so when the time comes, too much has changed. And often, cost/performance is way in replacement's favor. I'm currently replacing 3-year-old, fully RAM-populated, 2-core Socket 940 rackmounts running VMware with half the number of half-RAM-populated, 4-core Socket F blades. Still keeping my Socket 604s going with RAM upgrades, though.
Rigan - Tuesday, April 7, 2009 - link
You missed the third and most important reason: the warranty. Nothing is more important in a machine room than equipment being under easily manageable warranties. Maybe you can get your hardware vendor to replace bits of the machine and extend the warranty of the old parts, but most likely not. And if you do, you'll end up with machines covered by two or more warranties. That's a big mistake. Full replacements every X number of years keep machines under carefully and easily managed warranties.
JohanAnandtech - Tuesday, April 7, 2009 - link
It is a good point. Still, the impact might not be so high, depending on your situation. Most warranties are 3 years, so if you extend the life of your server with a CPU/mem upgrade, the warranty is over. However, it is a small risk, as decent manufacturers guarantee spare parts for a period of 5 years.
In that case, only if the motherboard dies will you lose out: replacing the motherboard is quite a bit of work and might steer you towards a new server anyway. All other problems, like a dead disk or PSU, are easily and quickly fixed. So IMHO, it pays off to work for a few years without warranties (they probably won't cover "normal" wear anyway).
StraightPipe - Wednesday, April 8, 2009 - link
This is entirely dependent on the environment.
For example, I've supported some small businesses where the IT dept was competent and even built some servers from scratch. Warranties don't add up to much in those environments, except 4-hour replacement parts; those are pretty nice.
For a larger environment, I wouldn't be singing the same tune. I'd be using vendor supported everything. Ensuring responsibility for a crash falls directly to Dell or HP or [anyone but me].
Rigan - Wednesday, April 8, 2009 - link
Very true; we do things in lots of 300+. Nothing could make me replace/upgrade the memory/CPU in 300 servers. Hell, in one project we've got 3 boxes at each of 153 sites (some of which are not overly easy to get to) with a project life span of 8 years. For that sort of project we just buy 15% extra hardware and provide our own warranty.
It's very hard to make upgrades cost effective in that sort of environment. Not to mention the trouble you'll get in if something is down for longer than the predefined limit and you have to admit you cannot blame it on Dell.
But, I can see how a small business with a competent guy might get away with doing in house upgrades. I'd still be very nervous about that guy leaving. A truck number of 1 is bad.
Rolphus - Tuesday, April 7, 2009 - link
In my former role I was the (very hands-on) IT manager for a company with approx. $30m turnover, around 90 employees (4 of whom were in IT, including me), and 21 servers.
As our servers tended to be task-specific, we didn't generally upgrade them unless we had a need to. We took the view that over-specifying hardware was the way to go, so we didn't generally rebuild kit unless we were looking for something new. That said, we replaced a number of aging boxes during the 4 years I was there, and upgraded 6 of the machines due to performance issues with a VoIP phone system and a SQL Server DB. Those were simple single-to-dual CPU upgrades and RAM bumps in the first instance, and a simple RAM bump in the second.
Hope that helps...
Casper42 - Tuesday, April 7, 2009 - link
Aside from maybe the AMD Opteron 2xxx/8xxx series as mentioned, the platforms themselves change too quickly, so you cannot get all the bang for your buck that would make upgrading CPUs worthwhile.
Note: this next section excludes virtualization servers:
Part of this, I think, comes from the fact that servers often get overspec'd to make sure there is headroom, and a few years later are not yet so taxed that they even need upgrading. By the time you're ready to upgrade the CPU in a machine, you can no longer get the parts or, as already pointed out, you can only upgrade from a 2.5 to a 3.0.
I think the last 2 generations of Xeons were a pretty good example of that. Now, if Intel really wants to see people upgrade, they should continue releasing NEW CPUs for older platforms after new platforms have arrived.
For instance, now that the Xeon 55xx is out, we're barely going to see any further developments on the 54xx series. But if Intel put some of their new knowledge and design into that old platform, you could see either faster chips in the same thermal envelope or similarly spec'd chips with very reduced thermals.
Memory IS, in my opinion, the single most upgraded component in a server. Memory is dirt cheap right now for DDR2, and DDR3 isn't far behind. The caveat is bleeding-edge memory like 8GB FBDIMMs: an 8GB DDR2 FBDIMM in the HP server world costs 4x the price of a 4GB one rather than only 2x.
Disks can be upgraded in a server, but I'd say the ROI there is even worse than for CPU upgrades unless you're making the jump to SSDs. Disk speeds increase at a snail's pace compared to other technologies in the server.
Disk Expansion is a completely different animal and happens quite frequently for File/DB servers.