Channel: All ProLiant Servers (ML,DL,SL) posts

Re: ML350p Gen8 - Adding a SAS Expander


Re: [DL 360 Gen8] ILO 4 small issue


Disregard - HPE released the 1.50 version today (part of Service Pack 2016.10).

NK

Re: Service Pack for ProLiant (SPP) Version 2016.10.0


Hi, 

Can you share the download link?

Re: Service Pack for ProLiant (SPP) Version 2016.10.0


ML350 G9 with P440ar supporting 24 LFF drives


Hello, we recently purchased an ML350 G9 (PN: 776976-S01) and we are pretty confused by all the documentation regarding the RAID controllers. This one has a P440ar. I was told before that in order to have 24 LFF SATA III drives working on it we needed 2 x 769635-B21 plus 2 x 726547-B21, but the documentation specifies that the 769635-B21 is not compatible with the P440ar card. What should we do? Does the P440ar already support the 24 LFF drives once the drive cage kits are installed, or do we need to purchase a P440 controller along with the expander cards and the cages?

Thanks.

Re: HP PROLIANT DL 380 G7 RAID 5 PRE FAILURE DISK


Yes, if you add another drive to the system you should be able to configure it as a spare.
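
For example, with the Array Configuration Utility CLI (hpacucli) a new physical drive can be assigned as a spare to an existing array. A minimal sketch, assuming the controller sits in slot 0, the array is A, and the new drive shows up at port/box/bay 1I:1:5 (check the real values with the first command):

hpacucli ctrl slot=0 physicaldrive all show
hpacucli ctrl slot=0 array A add spares 1I:1:5
hpacucli ctrl slot=0 array A show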

End of Life/End of Support


Can anyone give me a reliable source for end-of-life information on ProLiant servers?

I have looked and cannot find a good source for this needed information.

I am looking for EOL dates for:

ProLiant DL360 G7

ProLiant DL380 G7

ProLiant ML350 G6

ProLiant DL380p Gen8

ProLiant DL360p Gen8

ProLiant ML350p Gen8

ProLiant ML350e Gen8 v2

Thanks to anyone who can help!

 


Intelligent Provisioning G8 and G9


HPE has now released the new Service Pack 2016.10, and I noticed a new Intelligent Provisioning package for the G9 460c blade servers, most likely to support Windows Server 2016 deployments. I didn't notice one for the G8 blade servers. Could I use the G9 Intelligent Provisioning package to upgrade the G8 servers' Intelligent Provisioning? Has anyone done this yet?

Re: Intelligent Provisioning G8 and G9


No, the IP images are different between Gen8 and Gen9. Version 1.xx is for Gen8, and Version 2.xx is for Gen9.

Re: Intelligent Provisioning G8 and G9


Thanks, Jimmy, for the version clarification!

Re: HP DL380 G7 P410i controller Cache disabled


Hi All,

 

Thought I would add that it also complains that parity initialization has failed. Would this cause the cache to be permanently disabled?

 

Also, is there a way I can diagnose that it really is a faulty cache module before buying a new one?
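
One non-destructive check, assuming the hpacucli utility is installed, is to ask the controller itself how it sees the cache module and its battery:

hpacucli ctrl all show status
hpacucli ctrl all show config detail

The "Cache Status" and "Battery/Capacitor Status" lines in that output normally say whether the controller blames the module itself or only its battery/capacitor.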

ML10 Gen9 UEFI BIOS


Hi,

we have an ML10 Gen9 and we have to install Windows Server 2008 Small Business. How can we disable the UEFI BIOS?

Some websites say to press F9 to enter the System Utilities, but we can only press F7 for the boot menu or Del for the BIOS settings.

In the BIOS we can't find where to disable UEFI.

Can you help us?

Thanks

OneView: how to refresh the DNS cache


Hi

We monitor our servers with OneView 2.0. After a location change of the servers, OneView can't find the iLO addresses anymore, because at the new location each server's iLO got a new DHCP address.
I have restarted the OneView appliance but it still tries to connect to the old IP addresses. How can I refresh the appliance's cached addresses, and why does the appliance try to connect to the IP address instead of the DNS name?

I don't want to remove all the servers and add them all again.

Thanks for any advice.

Ruben
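
In case it helps: OneView also exposes a REST API through which a server-hardware entry can be asked to re-poll its iLO without being removed. A rough sketch with curl; treat the exact endpoints, headers and the appliance name oneview.example.com as assumptions recalled from the 2.0 API docs, not verified values:

# 1) log in; the response contains a sessionID token
curl -k -X POST https://oneview.example.com/rest/login-sessions -H "Content-Type: application/json" -H "X-API-Version: 200" -d '{"userName":"administrator","password":"..."}'
# 2) trigger a refresh of one server-hardware entry ({id} comes from GET /rest/server-hardware)
curl -k -X PUT https://oneview.example.com/rest/server-hardware/{id}/refreshState -H "Auth: <sessionID>" -H "Content-Type: application/json" -H "X-API-Version: 200" -d '{"refreshState":"RefreshPending"}'

Whether the refresh picks up a new DHCP address probably depends on whether the server was originally added by DNS name rather than by IP.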

Re: ML10 Gen9 UEFI BIOS


Re: Smart Array in BIOS Mode on ML30


I'm afraid that you can use the Dynamic Smart Array B140i only in UEFI mode.

What OS are you using?

I was using this controller in AHCI mode on GNU/Linux and made a software RAID.

My friend encouraged me to try using UEFI mode and I have some problems. Will try to solve them.

ML 30 - Base system device


Hi,

I have a problem with one device.

This is the hardware ID:

PCI\VEN_103C&DEV_3306&SUBSYS_3381103C&REV_06
PCI\VEN_103C&DEV_3306&SUBSYS_3381103C
PCI\VEN_103C&DEV_3306&CC_088000
PCI\VEN_103C&DEV_3306&CC_0880
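
For cross-checking from a Linux live system, the same device can be matched by its numeric vendor:device pair (103c:3306, taken from the IDs above; vendor 103c is Hewlett-Packard) with a plain lspci query:

lspci -nn | grep -i 103c:3306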

When I was searching, everyone said that this is the iLO or a chipset device, but I have installed this driver a couple of times and in Device Manager the device still doesn't have a driver.

btw. iLO is working fine

 

Re: ML 30 - Base system device

ML30 Gen9 B140i hpdsa CentOS 6.8/RHEL 6.8 problems


Hi there,

For several years I was using GNU/Linux software RAID on Dynamic Smart Arrays (B1xi) on ML110 G6, ML110 G7, ML310 G8 and now on an ML30 Gen9, with the controller set to AHCI mode.

A friend has encouraged me to try using the HP-provided driver hpdsa.

As it turned out, I have to use UEFI mode to use the B140i in the ML30 Gen9, which I don't find so great (I don't see real advantages for my use; BIOS mode was good for me). So I started some testing on a CentOS 6.7 installer updated to CentOS 6.8 (CentOS is a free rebuild of RHEL; most builds are compatible) and a problem appeared as:

kernel: DEBUG ASSERT: OS_ReadRegisterUlong(NULL, pHal->hint.ints.host_int_enable) == GLP_SNA_INTERRUPT_DISABLE file=/u1/tbuild/dbe/rhel6/hpdsa/rpm/BUILD/hpdsa-1.2.8/obj/default/.//hpvsa//hal_ibanez/hal_glp.c function=glp_disable_interrupts line=553
kernel: DEBUG_ASSERT: (0 0 0)
kernel: Pid: 398, comm: hpdsa/5 Tainted: P -- ------------ 2.6.32-642.6.1.el6.x86_64 #1
kernel: Call Trace:
kernel: [<ffffffffa00b2172>] ? glp_disable_interrupts+0xa2/0xb0 [hpdsa]
kernel: [<ffffffffa00b39c1>] ? hal_i2c_event_get+0x61/0x70 [hpdsa]
kernel: [<ffffffffa01773b3>] ? detect_aero_i2c_device_api+0x123/0x1040 [hpdsa]
kernel: [<ffffffff8108f040>] ? process_timeout+0x0/0x10
kernel: [<ffffffffa00b4283>] ? hal_i2c_masterWriteRead+0x333/0x600 [hpdsa]
kernel: [<ffffffffa00b5a9c>] ? flush_i2c_request+0xac/0x1a0 [hpdsa]
kernel: [<ffffffffa00b5d03>] ? process_swr_i2c_request+0x173/0x2f0 [hpdsa]
kernel: [<ffffffffa00b63b9>] ? HAL_API_I2C_Direct_Read+0x199/0x3a0 [hpdsa]
kernel: [<ffffffffa00c1a62>] ? i2c_check_drive_install_state+0x312/0x6a0 [hpdsa]
kernel: [<ffffffffa00a4bb8>] ? check_drive_install_state+0x458/0x4c0 [hpdsa]
kernel: [<ffffffffa0124330>] ? OS_interrupt_control+0xb0/0x140 [hpdsa]
kernel: [<ffffffffa01243fb>] ? OS_int_global_enable+0x1b/0x20 [hpdsa]
kernel: [<ffffffffa00dfc35>] ? check_for_ilo_command+0xb5/0x2f0 [hpdsa]
kernel: [<ffffffffa0136782>] ? Bkgnd_Disk_Task+0x1162/0x1420 [hpdsa]
kernel: [<ffffffffa0135620>] ? Bkgnd_Disk_Task+0x0/0x1420 [hpdsa]
kernel: [<ffffffff810a640e>] ? kthread+0x9e/0xc0
kernel: [<ffffffff8100c28a>] ? child_rip+0xa/0x20
kernel: [<ffffffff810a6370>] ? kthread+0x0/0xc0
kernel: [<ffffffff8100c280>] ? child_rip+0x0/0x20

After a little working around, I disabled Intel VT-d, which fixed the problem. I wonder if there is a way to go with UEFI + B140i + hpdsa + VT-d. Upgrading the system to the newest firmware from SPP 2016.10.0 does not fix the issue.
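
For anyone who wants to narrow this down before touching RBSU: the kernel's own use of VT-d can also be switched off with a boot parameter, which at least shows whether kernel-side IOMMU handling matters here (the firmware toggle may still be the one that counts for hpdsa). A sketch for the stock CentOS 6 GRUB config; the kernel and root= values below are examples, keep your own line and just append the parameter:

# /boot/grub/grub.conf: append intel_iommu=off to the kernel line, then reboot
kernel /vmlinuz-2.6.32-642.6.1.el6.x86_64 ro root=/dev/mapper/vg_root-lv_root intel_iommu=off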

 HW:

ML30 Gen9 831068-425, BIOS Version: U23, Release Date: 09/12/2016

Dynamic Smart Array B140i, RAID Stack Version: 4.50, Option ROM Version: 4.04-0

iLO Firmware Version 2.50 Sep 23 2016

SW: CentOS 6.8 , kernel-2.6.32-642.el6.x86_64, kmod-hpdsa-1.2.10-110.rhel6u8.x86_64

Can someone test if this happens in RHEL 6.8?

 

The interesting thing is that this does not happen in CentOS 7 releases. But I'm not interested in using systemd in production systems.

Another question is whether using UEFI + B140i instead of software RAID gives any performance or functional benefits.
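
For context, the software-RAID baseline I am comparing against is plain mdraid on the AHCI-mode ports. A minimal sketch, with /dev/sda and /dev/sdb as example device names for two blank disks:

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --detail /dev/md0
mkfs.ext4 /dev/md0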

Re: Service Pack for ProLiant (SPP) Version 2016.10.0


Thank you for the information. Unfortunately there is nothing new about the DSM for MPIO (neither an OK to use the existing driver nor an updated version), and therefore we still have to wait before we can upgrade to Server 2016 :(
