We recently reviewed a dual-processor setup using the Gigabyte GA-7PESH1 and a pair of Socket 2011 Xeons, with reactions varying depending on one's need for ultimate throughput and level of NUMA programming knowledge.  Today Gigabyte has announced the successors to the GA-7PESH1 and GA-7PESH2 in the form of the 4-way GPU supporting GA-7PESH3 and GA-7PESH4, along with a 3-way 1P model, the GA-6PXSV4.
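To ground the NUMA point: on a 2P board like these, memory attached to the remote socket costs a QPI hop, so throughput-sensitive code tries to keep each buffer on the node whose cores will touch it.  A minimal sketch using Linux's libnuma (the 64 MB buffer size and node loop are illustrative, not from the review; build with -lnuma):

    /* Node-local allocation with libnuma: place each buffer on a specific
       socket's memory to avoid the QPI hop on a dual-socket system. */
    #include <numa.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not supported on this system\n");
            return 1;
        }
        int nodes = numa_max_node() + 1;   /* 2 on a typical dual-socket Xeon */
        size_t len = 64 * 1024 * 1024;     /* 64 MB per node, illustrative */
        for (int node = 0; node < nodes; node++) {
            /* Allocate pages physically resident on this node. */
            void *buf = numa_alloc_onnode(len, node);
            if (buf == NULL)
                continue;
            memset(buf, 0, len);           /* touch the pages to commit them */
            printf("allocated %zu MB on node %d\n", len >> 20, node);
            numa_free(buf, len);
        }
        return 0;
    }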

The main feature Gigabyte wish to promote is the improved memory compatibility, specifically noting that they are the only manufacturer to support a system fully populated with DDR3-1600MHz 16GB RDIMM 1.35V modules.
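For context, with the quad-channel memory controllers on LGA 2011 Xeons, the headline speeds work out to a theoretical peak of:

    DDR3-1600: 1600 MT/s × 8 bytes × 4 channels = 51.2 GB/s per socket
    DDR3-1333: 1333 MT/s × 8 bytes × 4 channels ≈ 42.7 GB/s per socket

so holding 1600 MHz with every slot filled, rather than the usual drop to 1333 MHz, is worth roughly 20% of peak bandwidth per socket.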

The GA-7PESH3 looks more like a normal 2P motherboard in terms of orientation, with each CPU supporting one module per channel and the PCIe layout designed for multi-card setups (think 3-way or 4-way dual-slot GPUs, or 7-way single-slot).  The motherboard also includes an LSI controller supporting up to eight SAS 6 Gbps drives, a full 7.1 audio solution and USB 3.0 functionality.

The GA-7PESH4 model offers the full range of memory slots, similar to the GA-7PESH1, but with an orientation change.  The PCIe layout is a little odd, with one slot up at the top next to the IO panel, and the power connectors also in this area.  The main selling point of the 7PESH4 is the inclusion of four Intel I350 Gigabit Ethernet ports.

The GA-6PXSV4 takes the GA-7PESH4 into single-socket form, with the four Intel I350 Gigabit Ethernet ports and the socket oriented at right angles to normal channel implementations.  The layout supports 3-way GPU setups, but also includes a PCIe x1 slot as well as a PCI slot.  Like the GA-7PESH3, we also get a series of USB 3.0 ports on the rear IO.

All three boards are reported to have been designed with airflow in mind, and each CPU is supported by a 6-phase power delivery (remember, no overclocking on server boards).  We analyzed Gigabyte's management software package in our GA-7PESH1 review, which used the Avocent server management interface for security, monitoring, and remote control, and we expect these new models to be relatively similar.

As always, Gigabyte server boards are currently B2B (business-to-business) only, but readers in the US can contact their local Gigabyte server branch for more information regarding individual pricing.

Comments

  • Jammrock - Tuesday, January 29, 2013

    The GA-7PESH3 is a tower-chassis board; the GA-7PESH4 is a rack-mount chassis board.

    On a tower chassis the air comes in and then up, which fits the design of the GA-7PESH3. The SAS connectors on the front edge of the board are another giveaway: on a rack mount, the thick SAS cables would block the insertion of the SAS hot-swap backplane.

    The GA-7PESH4 looks designed for a rack-mount case where the air is pushed through the chassis from front to back. Your typical rack-mounted chassis has the HDDs and backplane up front, followed by RAM/CPU(s), and finally the peripherals. If the RAM was oriented differently it would block airflow in a 1U or possibly 2U chassis and cause overheating of the backside components. The location of the SAS connectors is the other giveaway: they sit out of the way of a potential SAS backplane.

    The GA-6PXSV4 is small enough and oriented such that it could be in a tower or a rack-mounted chassis.

    Or at least that's how I would interpret it based on my data center experience.
  • IanCutress - Tuesday, January 29, 2013

    Makes sense :) I should intern at a data center for a week or two at some point.
  • Jammrock - Tuesday, January 29, 2013

    Buy some earplugs first :)
  • shogun18 - Tuesday, January 29, 2013

    So close, yet such bad choices. WHY!!! do MB makers keep putting consumer crap on server boards? One 3Gx16 lane PCIe slot is understandable (GPU computing or one of those big fat PCIe SSDs). But the rest should have been 3Gx8 and 2Gx8/2Gx4. Instead they wasted !!2!! entire slots on 2G-1 and PCI?!?! If they were going to put a SAS chip on it (and I wouldn't have - people want to use the PCIe lanes for add-in cards, e.g. Fibre Channel) then at least spend the extra $10 and use SAS6. It's not 2009.
  • NitroWare - Tuesday, January 29, 2013

    The slots are not pointless.

    The ATX/tower boards featured in this article are server/workstation boards, excluding the PESH4, which is clearly rack mount and needs to be used in a validated chassis with airflow baffles.

    You can either use them as a server in a server-room or office environment, or as a professional workstation. This is where the audio, USB 3.0 and 'pointless expansion slots' come in.

    Either for multimedia, science or networking.

    As for onboard SAS, not every SI uses the onboard chip, regardless of whether it's real SAS or not.

    Some SIs assume the onboard is crap/fakeraid and use their standard fleet-deployed add-in card.

    Some need battery backup or a validated solution; mobos may not offer a BBU option.

    Some want the ability to pull the whole card/array subsystem out of a failed or redundant server without thinking about rebuilds or compatibility between different subchips or ROMs.

    On some boards, the non-SAS SKU might not be available in the local channel.
  • NitroWare - Tuesday, January 29, 2013

    "they are the only manufacturer to support a system fully populated with DDR3-1600MHz 16GB RDIMM 1.35V modules. "

    I am not sure of the validity of this comment, as name-brand or even some whitebox servers can take 512 GB/768 GB in a 2P config with the right RDIMMs. Unless they are referring to the 1600 MHz speed, which makes less sense as the higher-end Xeon parts are 1600 anyway.
  • IanCutress - Wednesday, January 30, 2013

    Yeah, the key point there was the 1600 MHz. Standard behavior is 1600 MHz with one module per channel, dropping to 1333 MHz with two modules per channel to maintain signal integrity. I have had a few emails in the past couple of months from users with HPC usage scenarios that cry out for memory density + bandwidth on the CPU, saying that 2400 MHz in a non-ECC environment is great. Moving towards that on the 2P Xeon/ECC side can only be a good thing.
  • JMC2000 - Tuesday, January 29, 2013

    From the information on Gigabyte's page for the 7PESH3, there are 4 PCI-E 3.0 x16 slots and 3 PCI-E 3.0 x8 slots. Knowing that S2011 chips have 32 integrated PCI-E lanes, set up as x16/x16/x8 or x16/x8/x8/x8, which slots split an x16 connection into dual x8?
  • IanCutress - Wednesday, January 30, 2013

    S2011 chips technically have 40 PCIe lanes, hence the 16+8+8+8 configuration. For ease of use, manufacturers tend to split the x16 to the slot directly below it. However, with two CPUs there are up to 80 lanes available, meaning that two x16s can come from CPU 1 and two x16s can come from CPU 2 (with all sorts of variants regarding splitting and the spare 8 lanes).

    I'll see if I can get hold of a block diagram so we know what is what :)

    Ian
  • IanCutress - Wednesday, January 30, 2013

    After talking to Gigabyte, it turns out the official block diagram is under NDA, but I was told:

    CPU 1 controls PCIe 1-4. 1 and 3 are x16, which drop to x8 if 2 and 4 are populated.

    CPU 2 controls PCIe 5-7. 5 and 7 are x16, and 5 will drop to x8 if 6 is populated.

    So they are only using 32 lanes from each CPU, but with 4-way it is a full x16/x16/x16/x16 without PLX 8747 chips.
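For reference, that description works out to the following slot map (physical slot widths per JMC2000's reading of the spec page above; the x8 entries for slots 2/4/6 are inferred, not confirmed by Gigabyte):

    Slot     Driven by   Width alone   Width when neighbor is populated
    PCIe 1   CPU 1       x16           x8 (if PCIe 2 is filled)
    PCIe 2   CPU 1       x8            x8
    PCIe 3   CPU 1       x16           x8 (if PCIe 4 is filled)
    PCIe 4   CPU 1       x8            x8
    PCIe 5   CPU 2       x16           x8 (if PCIe 6 is filled)
    PCIe 6   CPU 2       x8            x8
    PCIe 7   CPU 2       x16           x16

Populating only slots 1/3/5/7 thus gives the full x16/x16/x16/x16 4-way configuration.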
