
Re: serious issues with HP ProLiant DL 380 G5


I skimmed the logs you attached, but they're probably not going to show anything hardware-related... you'd want to check the IML (Integrated Management Log) for anything the server might have logged.
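
If the OS is Linux with HP's hp-health package installed, you can dump the IML from the running system without rebooting. A minimal sketch, assuming hpasmcli is actually present on the box (if it isn't, the IML is also viewable from the iLO web interface or at POST):

    import subprocess

    # Dump the Integrated Management Log via HP's hpasmcli tool.
    # Assumes the hp-health package is installed; the command won't
    # exist otherwise.
    result = subprocess.run(
        ["hpasmcli", "-s", "show iml"],
        capture_output=True, text=True,
    )
    print(result.stdout)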


It sounds like you're saying the system has 8 x 72 GB drives and about 430 GB of usable space.  Setting aside the 1024 vs. 1000 difference, that suggests a 7-disk RAID 5: six drives' worth of data and one drive's worth of parity (well, the parity is striped across all 7 disks, but you know what I mean).  The 8th drive is probably assigned as a spare.
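
For what it's worth, the arithmetic checks out. A quick back-of-the-envelope sketch, assuming nominal 72 GB drives and ignoring the 1000 vs. 1024 difference:

    # RAID 5 capacity sanity check for the reported numbers.
    DRIVE_GB = 72        # nominal size of each drive
    ARRAY_DISKS = 7      # 8 drives total, minus 1 assumed hot spare
    PARITY_DISKS = 1     # RAID 5 gives up one disk's worth of space to parity

    usable_gb = (ARRAY_DISKS - PARITY_DISKS) * DRIVE_GB
    print(usable_gb)     # 432, close to the ~430 GB reported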


When you first power the system on, all of the drives should light up initially, even if they're unassigned, so that's the time to check for drive activity.  If you don't hear any clicking or weird noises and all the lights look good, the drives are probably okay.  Enterprise hard drives are thankfully designed to avoid the endless retry loops that can kill performance: on a read or write error, a drive may retry a couple of times, but then it hands the problem back to the controller, which can fail the drive out of the array or whatever.  The classic click-of-death is a desktop drive hitting an error and resetting its read/write heads over and over again.  Drives tuned for A/V use (DVRs, for instance) behave better here too.
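
If you ever want to confirm that a particular drive will fail fast instead of retrying forever, smartctl can report the SCT Error Recovery Control timeouts on drives that expose the feature. A minimal sketch, with the caveats that /dev/sda is just a placeholder and that drives sitting behind a Smart Array controller typically need a -d cciss,N device argument instead:

    import subprocess

    # Query SCT Error Recovery Control (the enterprise "give up quickly"
    # setting). /dev/sda is a placeholder; not all drives support SCT ERC.
    result = subprocess.run(
        ["smartctl", "-l", "scterc", "/dev/sda"],
        capture_output=True, text=True,
    )
    print(result.stdout)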


If the iLO isn't responding, I wonder if it got hit by a Heartbleed scanner while running firmware without the fix (2.25).  That can knock the iLO offline until power is physically removed from the server.
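
One quick way to tell whether the iLO is wedged (as opposed to a plain network problem) is to probe its usual ports directly. A rough sketch; 192.168.1.100 is a made-up address, so substitute whatever your iLO is supposed to be on:

    import socket

    ILO_ADDR = "192.168.1.100"   # placeholder -- use your iLO's actual IP

    # SSH, HTTP, and HTTPS are the usual iLO-facing services.
    for port in (22, 80, 443):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(3)
        try:
            s.connect((ILO_ADDR, port))
            print(f"port {port}: open")
        except OSError:
            print(f"port {port}: no response")
        finally:
            s.close()

If nothing answers even though the server itself is reachable, that points at the iLO being hung, which the full power removal should clear.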


I'd try that... remove power, and take some time to check the interior for dust or anything else that could cause heat issues.  Plug back in, boot up, go into the iLO config and get that set up, or at least check it.  Then go into the array config... it's a simple interface in the boot utility, but you can see how the drives are assigned.  Then go through the system setup options and check things there as well.


Gen 5 servers are around 6 years old by now, so things will happen to them; servers tend to get run hard in 24/7 environments.  I've still got a handful of them myself, but we've retired a lot of them as well.  You reach a point where it seems like every week a drive fails, an array battery dies, a PSU dies, some memory module reports errors, etc., until eventually the motherboard itself dies (and some people are happy to replace even that with used parts from eBay sellers).

