Back to Basics: Part 1 Memory Configuration in Nehalem and Westmere 2 Socket Servers

One of the joys of working at a company such as GreenPages is the range of people and resources I have access to when solving customer problems.  That may sound strange, but there are so many deep technical pieces that touch, or are an integral part of, virtualization and datacenters that very few people can know everything.  I have experts in storage, networking, security, and more that I can leverage for whatever solution I am architecting for a client.

Some of the behind-the-scenes experts I have access to are what we call “Pre-sales Tech Support.”  This is not a break/fix team but a group of people who know every connector, specification, bit, and byte of the hardware and software we work with.  When I recommend a server, switch, or application, they make sure it is configured correctly.  They help me get the basics right when I architect any solution.

A perfect example of this is memory configuration in 2-socket servers.  A server’s performance can be seriously degraded if the memory configuration is not correct.  You cannot simply pick the amount of memory you want and use it, and many administrators who are not aware of this end up with underperforming servers.  Here is how to configure memory properly on Xeon 5500 and later servers.

RDIMMs, or registered memory, are more scalable and stable and offer larger capacities, so we use them in all of our servers, regardless of manufacturer.  Dual-rank memory runs at higher bus speeds and is preferred in 2-socket servers over quad-rank memory, which offers larger capacities but is slower.  These servers have up to 18 DIMM slots, arranged as 3 memory channels per processor with 3 DIMM slots per channel.  See the diagram below for how they are arranged.

[Diagram: DIMM slot layout and a populated configuration in a 2-socket server]

The diagram on the left shows a typical 2-socket server: each processor has 3 memory channels (via its Memory Controller Hub), and each channel has 3 DIMM slots.

The diagram on the right shows a typical configuration in which the first slot of each channel is populated.  This is a good memory configuration.

Here is the bottom line.  When configuring memory for a server with an Intel Xeon 5500 or later CPU, populate the memory channels evenly.  That way, you do not reduce the bus speed and degrade the performance of the entire system.  Typically the first and second slots of each channel are populated to reach the desired amount of memory.  Common builds for our servers are:

  • 6 x 2 GB = 12 GB
  • 6 x 4 GB = 24 GB
  • 6 x 8 GB = 48 GB
  • 12 x 8 GB = 96 GB
  • 12 x 2 GB = 24 GB
  • 12 x 4 GB = 48 GB

These last 2 configurations are slightly less favored, since any memory upgrade would mean discarding this memory and buying 8 or 16 GB DIMMs.
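The balancing rule above can be sketched in a few lines of Python.  This is an illustrative check, not a vendor tool: the layout constants and helper names are assumptions based on the 2-socket, 3-channel, 3-slot arrangement described in this post.

```python
# Hypothetical sketch: verify that a DIMM build is balanced across the
# memory channels of a 2-socket Xeon 5500/5600 server and compute its total.
# Layout assumed from the article: 2 sockets x 3 channels x 3 slots each.

SOCKETS = 2
CHANNELS_PER_SOCKET = 3

def total_memory_gb(dimm_count, dimm_size_gb):
    """Total capacity for a build such as '6 x 4 GB'."""
    return dimm_count * dimm_size_gb

def is_balanced(dimm_count):
    """A build is balanced when the DIMMs divide evenly across all
    channels, so every channel carries the same number of modules."""
    channels = SOCKETS * CHANNELS_PER_SOCKET
    return dimm_count % channels == 0

# The common builds listed above:
builds = [(6, 2), (6, 4), (6, 8), (12, 8), (12, 2), (12, 4)]
for count, size in builds:
    print(f"{count} x {size} GB = {total_memory_gb(count, size)} GB, "
          f"balanced: {is_balanced(count)}")
```

Note that a 6-DIMM build puts one module in each channel's first slot, and a 12-DIMM build fills the first and second slots of every channel, which is exactly the "first and second slots" pattern described above.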

[Table: memory speed by number of DIMM slots populated per channel]
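The general pattern behind the speed table is that the memory bus slows down as more DIMM slots per channel are populated.  The sketch below encodes one commonly cited Nehalem/Westmere pattern; the exact speeds depend on the CPU model, DIMM type, and rank, so treat the numbers as illustrative rather than a vendor specification.

```python
# Illustrative only: how memory speed can step down as more DIMM slots
# per channel are populated on Nehalem/Westmere-era platforms.
# Actual speeds vary by CPU model and DIMM type; check the vendor's
# memory configuration guide for the real values.

SPEED_BY_DIMMS_PER_CHANNEL = {1: 1333, 2: 1066, 3: 800}  # MHz, assumed pattern

def effective_speed_mhz(dimms_per_channel):
    """Speed the whole memory system runs at for a given population depth."""
    return SPEED_BY_DIMMS_PER_CHANNEL[dimms_per_channel]

# A 12-DIMM build on a 2-socket server is 2 DIMMs per channel:
print(effective_speed_mhz(2))
```

This is why an unbalanced or over-deep population hurts: one over-populated channel can force the entire system down to the slower speed.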

16 GB DIMMs are still expensive, and we do not get asked for them often.  Servers with 4 CPU sockets, and servers or blades with memory expansion, follow different memory population rules.

It is also important to note the bus speed of the processor, as it affects what memory you need and how the memory should be populated.  See HP's memory configuration specifications and Dell's memory configuration guidelines for the details.

All of this complexity, and much more, is handled on a regular basis by our tech support staff, who make 100% sure that any server, blade, chassis, or switch is configured properly and will provide years of reliable, high performance.  This is just one of the many benefits of working with a partner such as GreenPages.