Odd, the revision history I was looking at didn't list anything newer. Glancing over the firmware revision history, I see no mention of any minimum firmware level required before moving to v2.72.
Just deleting a logical drive from within SSA is normally almost instantaneous. I'd probably reboot and try again. Is this the only array / logical drive on the controller? You might try the clear configuration option, unless there is other configuration you'd rather not re-create.
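If you'd rather drive it from the command line, the same operations exist in ssacli; a minimal sketch, assuming the controller is in slot 0 and the logical drive is number 1:

# show the current controller configuration
ssacli ctrl slot=0 show config
# delete a single logical drive ("forced" skips the confirmation prompt)
ssacli ctrl slot=0 logicaldrive 1 delete forced
# or delete every array on the controller in one go
ssacli ctrl slot=0 array all delete forced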
You might need to update the System ROM to support the different processor. Looking through the System ROM revision history, you need to be running at least Version 2013.11.12 (20 Nov 2013), as that version added support for Intel Xeon E5-2400 v2 Series processors.
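If you want to confirm what the server is currently running before flashing, a quick sketch from a Linux shell (assuming dmidecode is available; Windows shows the same strings via systeminfo):

# report the current System ROM version string and its release date
sudo dmidecode -s bios-version
sudo dmidecode -s bios-release-date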
Hello,
I am trying to install Plesk + Ubuntu on a ProLiant DL380 Gen10 server.
I have tried to start Intelligent Provisioning to access an ISO on a pendrive, but IP gets stuck on the "Intelligent Provisioning - Waiting for ILO to response" screen.
I've tried before to boot from a USB, but UEFI is not working, and neither is Legacy mode (it always loads the boot menu; I select the "Legacy one time boot" option, and when rebooting I get to the menu again, stuck in that loop).
Thank you
"oemhpe_ip6 options set IPV6_Active=no"
wrote:
So do you think that if I upgrade from 1.20 to 2.72, I will not have problems? It's a critical work server ...
From what I see in the release notes, there is no mention of any System ROM required before moving to any of the ones listed. I've made large version leaps on various servers over the years without issue. I have only seen a few cases where there was a required update before moving to later releases, and they were well documented. If it makes you feel safer, you can do them in order; it is just time-consuming.
There are several things you can try to get IP working, but in the end you are going to do a manual installation of Ubuntu. Either mount the Ubuntu ISO image as virtual media in iLO, or use something like Rufus to create a bootable USB key. Leave the server in UEFI mode; there is no reason to try to use legacy boot emulation mode.
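If you build the key from a Linux machine instead of Rufus, a minimal sketch (the ISO filename is just an example, and /dev/sdX must be replaced with your actual USB device; dd will happily overwrite the wrong disk if you get it wrong):

# write the installer image raw onto the USB key, then flush caches
sudo dd if=ubuntu-18.04-live-server-amd64.iso of=/dev/sdX bs=4M status=progress
sync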
If IP is corrupt, you can download the recovery ISO image and re-install IP. If you don't have an iLO license key, get a trial key to access virtual media and get IP re-installed.
Hi,
I have the same problem, but with a DL380e G8 showing the old logo. Any help or clue would be appreciated; this server is out of warranty.
I have almost no exposure to HP servers, so please bear with me. I have reinstalled Windows Server 2012 R2 on a ProLiant DL380p Gen8 that previously had the SPPs through 2017.04.0 installed on it. I have already installed "HP System Management Homepage for Windows x64", but it shows almost nothing in comparison with what it showed before the OS reinstall. A message at the bottom of the Management Homepage says "Install the latest version of HP Insight Management WBEM Providers and/or HP Insight Management Agents from HP Service Pack for ProLiant (HP SPP). Click Here to the latest version of HP SPP." Presumably these components came preinstalled when the server was first acquired? Should I install both?
Also, I think I've configured iLO with the address 192.168.4.10, but when I browse to that address there is nothing. Do I need to install "HP Lights-Out Online Configuration Utility for Windows x64 Editions"? iLO was never configured on this server as far as I can tell. Otherwise, how else do I access iLO?
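One way to check what the iLO actually has configured, without rebooting into the setup utilities: the utility you mention installs HPONCFG, which can dump the running configuration from within Windows. A sketch, assuming it is installed and run from an elevated prompt:

:: write the current iLO configuration (including network settings) to an XML file
HPONCFG.exe /w ilo_config.xml
:: then open ilo_config.xml and look for the IP_ADDRESS value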
Thanks for replying,
For anyone with the same problem, you can find further information at the following link:
https://support.hpe.com/hpsc/doc/public/display?docId=c03911173
Best regards
You were right.
I was given an incompatible version of Ubuntu too, but now with the 18.04 version and using the virtual media, everything went fine.
Thank you very much
Hello,
I have the server configuration described below. It appears that both NIC cards are mapped to the same CPU.
As I need one NIC per CPU, can someone please indicate which riser kit should be ordered for this configuration?
867958-B21 HPE DL360 Gen10 4LFF CTO Server
870968-L21 HPE DL360 Gen10 Xeon-G 6138 FIO Kit
870968-B21 HPE DL360 Gen10 Xeon-G 6138 Kit
815100-B21 HPE 32GB 2Rx4 PC4-2666V-R Smart Kit
875490-B21 HPE 480GB SATA MU M.2 2280 DS SSD
867978-B21 HPE DL360 Gen10 SATA M.2 2280 Riser Kit
817753-B21 HPE Eth 10/25Gb 2P 640SFP28 Adptr
817749-B21 HPE Eth 10/25Gb 2P 640FLR-SFP28 Adptr
871244-B21 HPE DL360 Gen10 High Perf Fan Kit
865414-B21 HPE 800W FS Plat Ht Plg LH Pwr Sply Kit
Thank you
Similar scenario in my stack, but the problem was that the storage bay was not receiving enough power for all the LFF drives. We stumbled onto this when going through and reseating all the hot-swappable LFFs. We unseated all of them, then reseated them one at a time. During this process we discovered that the last three caused a shutdown of the system. It didn't matter which LFF or which bay the last LFFs went into (we have a few open ones); once they were plugged in, the system shut down, and on reboot we had the error.
This is not 100% true.
It may be true of HPE, who dropped the ball massively on Gen10 and did not certify for Intel vROC, and thus missed the boat for the generation, but it is not true for any vendor who did certify.
NVMe does attach directly via PCIe to the CPU, but luckily Intel included a feature in their chips, vROC, to offer hardware-assisted NVMe RAID, and it works very well with every Intel NVMe drive I have tried (Optane and the 4600 series).
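For anyone curious what vROC looks like from the OS side: on Linux it surfaces through mdadm's IMSM metadata, so a minimal sketch (device names assumed, and the platform must actually report VROC support) would be:

# confirm the platform exposes Intel VROC/IMSM RAID capability
mdadm --detail-platform
# create an IMSM container over two NVMe drives, then a RAID 1 volume inside it
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0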
Thanks to pmetal from me too!
Hi Sir Michael, may I know what the DHCP setting of the iLO is?
Dear Community,
In the QuickSpecs of the DL20 Gen10, it is stated that it can handle a maximum of 64GB of memory.
But several reviews say that once Intel supports 128GB of RAM on its Xeon E-2100 CPUs, the DL20 Gen10 will support it too.
According to Intel ARK, the CPUs now support 128GB of RAM. Does the DL20 Gen10 support it too?
Thanks in advance
Romke
Hi,
we plan to use DL380 Gen10 servers for our new SUSE OpenStack Cloud / SUSE Enterprise Storage (Ceph). As NVMe drives are getting cheaper and would eliminate the need for array controllers, we are considering an all-NVMe setup with 2 NVMes for boot (SLES) and 2 or more NVMes for the Ceph OSDs.
I found this HPE document:
https://h20195.www2.hpe.com/v2/getpdf.aspx/4aa6-3464enw.pdf
Q: Can NVMe drives be used for operating system boot purposes?
A: NVMe 2.5" SSDs work in UEFI and legacy modes, but there is no boot support at this time. The drive performance would be best used for workloads that demand faster data access.
Also, Hot Swap seems to be supported, but not Hot Add.
Questions:
- with NVMes, is only software RAID possible? (see the mdadm sketch below)
- is it now possible to boot SLES from NVMe on a software RAID 1? I heard that this should be working with SES.
- Hot Swap is nice, but a downtime for Hot Add (expansion) is not so nice; is this still the case?
- what about the bandwidth needed for a large number of NVMe devices? What is the bottleneck if we want to put 10+ NVMes in a server? The PCIe bus?
Anyone here using an all-NVMe setup with SLES or any other Linux, how is this working in operations? Are there other things than the ones mentioned that I should think about?
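On the software-RAID question above: a minimal sketch of what a bootable NVMe RAID 1 looks like with plain Linux mdadm (partition names assumed; whether the platform firmware will boot from it is exactly the open point in the quoted HPE doc):

# mirror the two root partitions across the boot NVMe drives
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
# record the array so it assembles at boot
mdadm --detail --scan >> /etc/mdadm.conf
# note: the EFI system partition cannot sit on the md device, so each drive
# keeps its own ESP (e.g. nvme0n1p1 / nvme1n1p1) which you keep in sync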
Old post, but I have 5 x P411 controllers that are stuck in HBA mode.
Does anyone know how to get them back into RAID mode?
It appears that these cards may have come from an Integrity server where the system utilities allow changes to hbamode.
The latest SSA and SSACLI in the SPP will not allow hbamode to be turned off (unsupported, apparently).
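For reference, the command that gets rejected is along these lines (slot number assumed; on models that support the switch, this is what flips the controller back):

# switch the controller out of HBA mode and back to RAID mode
ssacli ctrl slot=1 modify hbamode=off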
The latest firmware, 6.64, has been applied; I've tried everything, but still no success.
Any suggestions???