Also, I think in another thread, someone mentioned that using non-HP branded drives could impact the monitoring of temperatures in certain zones.
Especially in the G8 models where HP actually went out of their way to cripple some things if you use non-HP drives.
Now, I might cheap out and buy non-HP memory sometimes, but I always buy HP drives. Even so, it can be a little annoying. I only noticed because on one of my G8s, there's one drive bay that seems buggy... no matter what drive is plugged into it, everything works fine except the controller insists it's not an HP drive.
I'll be replacing the drive backplane on my next site visit (it's already had a motherboard replacement for another issue, so it's probably the drive cage itself causing that). But it's annoying to get an amber warning on the server and the ACU complaining about that drive, just because it thinks (in this case, incorrectly) that it's not a real HP drive.
On G8s that might not be as common, because blank drive carriers for the new design might still be hard to get... so you're probably running real HP drives. Or maybe the LFF drive carriers on the G8 are the same as the older ones, and it's only the SFF carriers that got redesigned?
Whatever the case... check the cooling setting in RBSU as suggested above, and also make sure you have all the drivers up to date for ESX, just in case.
One way to tell if it's an issue with the OS/drivers overriding something is to boot into the RBSU and just let it sit there for a bit. If the fans *still* run at 100% even when you're just in the setup utility, then it's the BIOS itself deciding the system is too hot and cranking up the fans.
Unlike some very old ProLiant models that ran the fans at 100% until the OS booted, newer ProLiants have had intelligent fan control even during POST for years now. So let it sit in setup or something, check the fan speeds in iLO, and see whether they're any different than when the OS gets going.
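If you'd rather not click through the iLO web page, you can also pull the fan sensors over IPMI from another machine. This is just a sketch: it assumes ipmitool is installed and IPMI-over-LAN is enabled on the iLO, and the "NN percent" output format (and the made-up sensor lines in the example) are assumptions — some models/firmware report fans differently.

```shell
# Query the iLO's fan sensors over IPMI (fill in your own address/credentials):
#   ipmitool -I lanplus -H <ilo-address> -U <user> -P <password> sdr type Fan
#
# Helper: flag any fan pegged at 100% in ipmitool-style "sdr" output.
# The "| 100 percent |" reading format is an assumption; adjust for RPM readings.
flag_maxed_fans() {
  grep -E '\| *100(\.00)? percent' | cut -d'|' -f1
}

# Example against canned output (these sensor lines are invented for illustration;
# only "Fan 1" should be flagged here):
printf 'Fan 1            | 100 percent       | ok\nFan 2            | 34.30 percent     | ok\n' | flag_maxed_fans
```

Run it once while sitting in RBSU and once with the OS up; if the fans are maxed in both cases, the OS/drivers aren't your problem.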