Move Allocations Between NUMA Nodes
When using NUMA (non-uniform memory access, often cache-coherent or ccNUMA) systems, different parts of memory have different access latencies depending on which processor accesses them. From an application perspective the memory looks uniform, yet finer control is advantageous, especially for high-performance computing. malloc() generally attempts to allocate memory close to the NUMA node of the calling thread, but it makes no guarantees. Linux therefore provides both a process-level interface for setting NUMA policies, the system utility numactl, and a fine-grained interface, libnuma. These interfaces operate on ranges of memory in multiples of the page size, the granularity at which the operating system maps and manages virtual memory.
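For context, the fine-grained interface looks like the following outside of Umpire. This is a minimal sketch, assuming libnuma and its development headers are installed (link with -lnuma); the node index 0 is an arbitrary choice:

#include <numa.h>

#include <cstddef>
#include <cstdio>

int main()
{
  // numa_available() returns a negative value when the kernel
  // has no NUMA support
  if (numa_available() < 0) {
    std::fprintf(stderr, "NUMA is not available on this system\n");
    return 1;
  }

  // Allocate one page of memory bound to NUMA node 0
  const std::size_t size = numa_pagesize();
  void* ptr = numa_alloc_onnode(size, 0);

  // ... use the memory ...

  numa_free(ptr, size);
  return 0;
}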
Within Umpire, a page range may be bound to a NUMA node using umpire::strategy::NumaPolicy. It can then be moved between NUMA nodes by calling umpire::ResourceManager::move() with an allocator bound to a different node. The power of this abstraction is that the NUMA node can be associated with a device, in which case the memory is moved to, for example, GPU memory.
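In outline, the pattern looks like this (a condensed sketch of the complete recipe below; the allocator names and node indices are illustrative):

auto& rm = umpire::ResourceManager::getInstance();
const std::size_t bytes = umpire::get_page_size();

// Bind allocations from this allocator to NUMA node 0
auto node0_alloc = rm.makeAllocator<umpire::strategy::NumaPolicy>(
    "numa_node_0", rm.getAllocator("HOST"), 0);
void* ptr = node0_alloc.allocate(bytes);

// Rebind the same pages to NUMA node 1; the pointer value is unchanged
auto node1_alloc = rm.makeAllocator<umpire::strategy::NumaPolicy>(
    "numa_node_1", rm.getAllocator("HOST"), 1);
ptr = rm.move(ptr, node1_alloc);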
In this recipe, we create an allocation bound to one NUMA node and then move it to another. The complete example is included below:
//////////////////////////////////////////////////////////////////////////////
// Copyright (c) 2016-20, Lawrence Livermore National Security, LLC and Umpire
// project contributors. See the COPYRIGHT file for details.
//
// SPDX-License-Identifier: (MIT)
//////////////////////////////////////////////////////////////////////////////
#include <iostream>

#include "umpire/Allocator.hpp"
#include "umpire/ResourceManager.hpp"
#include "umpire/strategy/NumaPolicy.hpp"
#include "umpire/util/Macros.hpp"
#include "umpire/util/numa.hpp"

#if defined(UMPIRE_ENABLE_CUDA)
#include <cuda_runtime_api.h>
#endif

int main(int, char**)
{
  auto& rm = umpire::ResourceManager::getInstance();

  const std::size_t alloc_size = 5 * umpire::get_page_size();

  // Get a list of the host NUMA nodes (e.g. one per socket)
  auto host_nodes = umpire::numa::get_host_nodes();
  if (host_nodes.size() < 1) {
    UMPIRE_ERROR("No NUMA nodes detected");
  }

  // Create an allocator on the first NUMA node
  auto host_src_alloc = rm.makeAllocator<umpire::strategy::NumaPolicy>(
      "host_numa_src_alloc", rm.getAllocator("HOST"), host_nodes[0]);

  // Create an allocation on that node
  void* src_ptr = host_src_alloc.allocate(alloc_size);

  if (host_nodes.size() > 1) {
    // Create an allocator on another host NUMA node
    auto host_dst_alloc = rm.makeAllocator<umpire::strategy::NumaPolicy>(
        "host_numa_dst_alloc", rm.getAllocator("HOST"), host_nodes[1]);

    // Move the memory
    void* dst_ptr = rm.move(src_ptr, host_dst_alloc);

    // The pointer shouldn't change even though the memory location changes
    if (dst_ptr != src_ptr) {
      UMPIRE_ERROR("Pointers should match");
    }

    // Touch it
    rm.memset(dst_ptr, 0);

    // Verify NUMA node
    if (umpire::numa::get_location(dst_ptr) != host_nodes[1]) {
      UMPIRE_ERROR("Move was unsuccessful");
    }
  }

#if defined(UMPIRE_ENABLE_DEVICE)
  // Get a list of the device nodes
  auto device_nodes = umpire::numa::get_device_nodes();
  if (device_nodes.size() > 0) {
    // Create an allocator on the first device NUMA node. Note that
    // this still requires using the "HOST" allocator. The allocations
    // are moved after the address space is reserved.
    auto device_alloc = rm.makeAllocator<umpire::strategy::NumaPolicy>(
        "device_numa_src_alloc", rm.getAllocator("HOST"), device_nodes[0]);

    // Move the memory
    void* dst_ptr = rm.move(src_ptr, device_alloc);

    // The pointer shouldn't change even though the memory location changes
    if (dst_ptr != src_ptr) {
      UMPIRE_ERROR("Pointers should match");
    }

    // Touch it -- this currently uses the host memset operation (thus, copying
    // the memory back)
    rm.memset(dst_ptr, 0);

    // Verify NUMA node
    if (umpire::numa::get_location(dst_ptr) != device_nodes[0]) {
      UMPIRE_ERROR("Move was unsuccessful");
    }
  }
#endif

  // Clean up by deallocating from the original allocator, since the
  // allocation record is still associated with that allocator
  host_src_alloc.deallocate(src_ptr);

  return 0;
}
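Note that umpire::strategy::NumaPolicy and the umpire::numa utilities are only present when Umpire is built with NUMA support, which in turn requires libnuma on the system; consult the build documentation for your Umpire version for the exact configuration option.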