2. Add calls to sandbox_hwloc_init() and other hwloc API functions
\endverbatim
Now you can bootstrap, configure, build, and run the sandbox as normal
-- all calls to "sandbox_hwloc_*" will use the embedded hwloc rather
than any system-provided copy of hwloc.
\page faq Frequently Asked Questions
\htmlonly
\endhtmlonly
\section faq1 Concepts
\subsection faq_why I only need binding, why should I use hwloc?
One of hwloc's main strengths is its portable API, which works on a
variety of operating systems.
It supports binding of threads, processes and memory buffers
(see \ref hwlocality_cpubinding and \ref hwlocality_membinding).
Even if some features are not supported on some systems,
using hwloc is much easier than reimplementing your own portability layer.
Moreover, hwloc provides knowledge of cores and hardware threads.
It offers easy ways to bind tasks to individual hardware threads,
or to entire multithreaded cores, etc.
See \ref faq_smt.
Most alternative binding software does not even know whether each
core is single-threaded, multithreaded or hyper-threaded.
It binds to individual hardware threads without any way to know
whether multiple tasks end up in the same physical core.
However, using hwloc comes with an overhead since a topology must
be loaded before gathering information and binding tasks or memory.
Fortunately this overhead may be significantly reduced by filtering
uninteresting information out of the topology.
For instance the following code builds a topology that only contains
Cores (explicitly filtered-in below),
hardware threads (PUs, cannot be filtered-out),
NUMA nodes (cannot be filtered-out),
and the root object (usually a Machine; the root cannot be removed without breaking the tree).
\verbatim
hwloc_topology_t topology;
hwloc_topology_init(&topology);
/* filter everything out */
hwloc_topology_set_all_types_filter(topology, HWLOC_TYPE_FILTER_KEEP_NONE);
/* filter Cores back in */
hwloc_topology_set_type_filter(topology, HWLOC_OBJ_CORE, HWLOC_TYPE_FILTER_KEEP_ALL);
hwloc_topology_load(topology);
\endverbatim
However, one should remember that filtering such objects out removes
locality information from the hwloc tree.
For instance, we no longer know which PU is close to which NUMA node.
This information is useful to applications that explicitly want to
place specific memory buffers close to specific tasks.
Those applications just need to tell hwloc to keep Group objects that
bring structure information:
\verbatim
hwloc_topology_set_type_filter(topology, HWLOC_OBJ_GROUP, HWLOC_TYPE_FILTER_KEEP_STRUCTURE);
\endverbatim
Note that the default configuration is to keep all objects enabled,
except I/Os and instruction caches.
\subsection faq_indexes Should I use logical or physical/OS indexes? and how?
One of the original reasons why hwloc was created is that
physical/OS indexes (obj->os_index) are often crazy and unpredictable:
processor numbers are usually non-contiguous (processors 0 and 1 are
not physically close), they vary from one machine to another, and may
even change after a BIOS or system update.
These numbers make task placement hardly portable.
Moreover, some objects have no physical/OS number (caches), and some
objects have non-unique numbers (core numbers are only unique within
a socket).
Physical/OS indexes are only guaranteed to exist and be unique for PUs
and NUMA nodes.
hwloc therefore introduces logical indexes (obj->logical_index),
which are portable, contiguous and logically ordered
(based on the resource organization in the locality tree).
In general, one should only use logical indexes and let hwloc do the
internal conversion when really needed (when talking to the OS and hardware).
hwloc developers recommend that users avoid physical/OS indexes
unless they really know what they are doing.
The main reason for still using physical/OS indexes is when interacting with
non-hwloc tools such as numactl or taskset, or when reading hardware information
from raw sources such as /proc/cpuinfo.
The lstopo options -l and -p may be used to switch between
logical indexes (prefixed with L#) and physical/OS indexes (P#).
Converting one into the other may also be achieved with hwloc-calc which may
manipulate either logical or physical indexes as input or output.
See also \ref cli_hwloc_calc.
\verbatim
# Convert PU with physical number 3 into logical number
$ hwloc-calc -I pu --physical-input --logical-output pu:3
5
# Convert a set of NUMA nodes from logical to physical
# (beware that the output order may not match the input order)
$ hwloc-calc -I numa --logical-input --physical-output numa:2-3 numa:7
0,2,5
\endverbatim
\subsection faq_structural hwloc is only a structural model, it ignores performance models, memory bandwidth, etc.?
hwloc is indeed designed to provide applications with a structural model
of the platform. This is an orthogonal approach to describing the
machine with performance models, for instance using memory bandwidth
or latencies measured by benchmarks.
We believe that both approaches are important for helping applications
make the most of the hardware.
For instance, on a dual-processor host with four cores each, hwloc
clearly shows which four cores are together.
Latencies between all pairs of cores of the same processor are likely
identical, and also likely lower than the latency between cores of
different processors.
However, the structural model cannot guarantee such implementation
details.
On the other side, performance models would reveal such details
without always clearly identifying which cores are in the same
processor.
The focus of hwloc is mainly on the structural modeling side.
However, hwloc lets users add performance information to the topology
through distances
(see \ref topoattrs_distances),
memory attributes
(see \ref topoattrs_memattrs)
or even custom annotations (see \ref faq_annotate).
hwloc may also use such distance information for grouping objects
together (see \ref faq_onedim and \ref faq_groups).
\subsection faq_onedim hwloc only has a one-dimensional view of the architecture, it ignores distances?
hwloc places all objects in a tree. Each level is a one-dimensional
view of a set of similar objects. All children of the same object (siblings)
are assumed to be equally interconnected (same distance between any of them),
while the distance between children of different objects (cousins) is supposed
to be larger.
Modern machines exhibit complex hardware interconnects, so this tree
may miss some information about the actual physical distances between objects.
The hwloc topology may therefore be annotated with distance information that
may be used to build a more realistic representation (multi-dimensional)
of each level.
For instance, there can be a distance matrix representing the latencies
between any pair of NUMA nodes if the BIOS and/or operating system reports them.
For more information about the hwloc distances, see \ref topoattrs_distances.
\subsection faq_groups What are these Group objects in my topology?
hwloc comes with a set of predefined object types (Core, Package, NUMA node, Caches)
that match the vast majority of hardware platforms.
The ::HWLOC_OBJ_GROUP type was designed for cases where this set is not sufficient.
Groups may be used anywhere to add more structure information to the topology,
for instance to show that 2 out of 4 NUMA nodes are actually closer than the others.
When applicable, the subtype field describes why a Group
was actually added (see also \ref attributes_normal).
hwloc currently uses Groups for the following reasons:
- NUMA parents when memory locality does not match any existing object.
- I/O parents when I/O locality does not match any existing object.
- Distance-based groups made of close objects.
- AMD Bulldozer dual-core compute units (subtype is ComputeUnit, in the x86 backend),
but these objects are usually merged with the L2 caches.
- Intel Extended Topology Enumeration levels (in the x86 backend).
- Windows processor groups (unless they contain a single NUMA node, or a single Package, etc.).
- IBM S/390 "Books" on Linux (subtype is Book).
- AIX unknown hierarchy levels.
hwloc Groups are only kept if no other object has the same
locality information.
This means that a Group containing a single child is merged
into that child,
and a Group is merged into its parent if it is the parent's only child.
For instance a Windows processor group containing a single NUMA node
would be merged with that NUMA node since it already contains the
relevant hierarchy information.
When inserting a custom Group with hwloc_topology_insert_group_object(),
this merging may be disabled by setting its dont_merge attribute.
\subsection faq_asymmetric What happens if my topology is asymmetric?
hwloc supports asymmetric topologies even if most platforms are usually
symmetric. For example, there could be different types of processors
in a single machine, each with different numbers of cores, symmetric
multithreading, or levels of caches.
In practice, asymmetric topologies mostly appear when intermediate groups
are added for I/O affinity: on a 4-package machine, an I/O bus may be
connected to 2 packages. These packages are below an additional Group
object, while the other packages are not (see also \ref faq_groups).
To understand how hwloc manages such cases, one should first remember
the meaning of levels and cousin objects. All objects of the same type
are gathered as horizontal levels with a given depth. They are also
connected through the cousin pointers of the ::hwloc_obj structure.
Object attributes (cache depth and type, group depth) are also taken
into account when gathering objects into horizontal levels.
To be clear: there will be one level for L1i
caches, another level for L1d caches, another one for L2, etc.
If the topology is asymmetric (e.g., if a group is missing above some
processors), a given horizontal level will still exist if there
exist any objects of that type. However, some branches of the overall
tree may not have an object located in that horizontal level. Note
that this specific hole within one horizontal level does not imply
anything for other levels. All objects of the same type are gathered
in horizontal levels even if their parents or children have different
depths and types.
See the diagram in \ref termsanddefs for a graphical representation
of such topologies.
Moreover, it is important to understand that the same parent object may
have children of different types (and therefore different depths).
These children are therefore siblings (because they
have the same parent), but they are not cousins (because they
do not belong to the same horizontal level).
\subsection faq_nosmt What happens to my topology if I disable symmetric multithreading, hyper-threading, etc. in the system?
hwloc creates one PU (processing unit) object per hardware thread.
If your machine supports symmetric multithreading, for instance Hyper-Threading,
each Core object may contain multiple PU objects:
\verbatim
$ lstopo -
...
Core L#0
PU L#0 (P#0)
PU L#1 (P#2)
Core L#1
PU L#2 (P#1)
PU L#3 (P#3)
\endverbatim
x86 machines usually offer the ability to disable hyper-threading in the BIOS.
Or it can be disabled on the Linux kernel command-line at boot time,
or later by writing in sysfs virtual files.
If you do so, the hwloc topology structure does not significantly change,
but some PU objects will not appear anymore.
No level will disappear, you will see the same number of Core objects,
but each of them will contain a single PU now.
The PU level does not disappear either
(remember that hwloc topologies always contain a PU level at the bottom of the topology)
even if there is a single PU object per Core parent.
\verbatim
$ lstopo -
...
Core L#0
PU L#0 (P#0)
Core L#1
PU L#1 (P#1)
\endverbatim
\subsection faq_smt How may I ignore symmetric multithreading, hyper-threading, etc. in hwloc?
First, see \ref faq_nosmt for more information about multithreading.
If you need to ignore symmetric multithreading in software,
you should likely manipulate hwloc Core objects directly:
\verbatim
/* get the number of cores */
unsigned nbcores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
...
/* get the third core below the first package */
hwloc_obj_t package, core;
package = hwloc_get_obj_by_type(topology, HWLOC_OBJ_PACKAGE, 0);
core = hwloc_get_obj_inside_cpuset_by_type(topology, package->cpuset,
HWLOC_OBJ_CORE, 2);
\endverbatim
Whenever you want to bind a process or thread to a core, make sure you
singlify its cpuset first, so that the task is actually bound to a single
thread within this core (to avoid useless migrations).
\verbatim
/* bind on the second core */
hwloc_obj_t core = hwloc_get_obj_by_type(topology, HWLOC_OBJ_CORE, 1);
hwloc_cpuset_t set = hwloc_bitmap_dup(core->cpuset);
hwloc_bitmap_singlify(set);
hwloc_set_cpubind(topology, set, 0);
hwloc_bitmap_free(set);
\endverbatim
With hwloc-calc or hwloc-bind command-line tools, you may specify that
you only want a single thread within each core by asking for their first
PU object:
\verbatim
$ hwloc-calc core:4-7
0x0000ff00
$ hwloc-calc core:4-7.pu:0
0x00005500
\endverbatim
When binding a process on the command-line, you may either specify
the exact thread that you want to use, or ask hwloc-bind to singlify
the cpuset before binding:
\verbatim
$ hwloc-bind core:3.pu:0 -- echo "hello from first thread on core #3"
hello from first thread on core #3
...
$ hwloc-bind core:3 --single -- echo "hello from a single thread on core #3"
hello from a single thread on core #3
\endverbatim
\htmlonly
\endhtmlonly
\section faq2 Advanced
\subsection faq_xml I do not want hwloc to rediscover my enormous machine topology every time I rerun a process
Although the topology discovery is not expensive on common machines,
its overhead may become significant when multiple processes repeat
the discovery on large machines (for instance when starting one process
per core in a parallel application).
The machine topology usually does not vary much, except if some cores
are stopped/restarted or if the administrator restrictions are modified.
Thus rediscovering the whole topology again and again may seem unnecessary.
For this purpose, hwloc offers XML import/export and shared memory features.
XML lets you
save the discovered topology to a file (for instance with the lstopo program)
and reload it later by setting the HWLOC_XMLFILE environment variable.
The HWLOC_THISSYSTEM environment variable should also be set to 1 to
assert that the loaded file really describes the underlying system.
Loading an XML topology is usually much faster than querying multiple
files or calling multiple functions of the operating system.
It is also possible to manipulate such XML files with the C programming
interface, and the import/export may also be directed to a memory buffer
(that may, for instance, be transmitted between applications).
See also \ref xml.
\note The environment variable HWLOC_THISSYSTEM_ALLOWED_RESOURCES
may be used to load an XML topology that contains the entire machine
and restrict it to the part that is actually available to the current
process (e.g. when Linux Cgroup/Cpuset are used to restrict the set
of resources). See \ref envvar.
Shared-memory topologies consist of one process exposing its topology
in a shared-memory buffer so that other processes (running on the same machine)
may use it directly.
This has the advantage of reducing the memory footprint since a single
topology is stored in physical memory for multiple processes.
However, it requires all processes to map this shared-memory buffer
at the same virtual address, which may be difficult in some cases.
This API is described in \ref hwlocality_shmem.
\subsection faq_multitopo How many topologies may I use in my program?
hwloc lets you manipulate multiple topologies at the same time.
However, these topologies consume memory and system resources
(for instance file descriptors) until they are destroyed.
Opening the same topology multiple times is therefore discouraged.
Sharing a single topology between threads is easy (see \ref threadsafety)
since the vast majority of accesses are read-only.
If multiple topologies of different (but similar) nodes are needed
in your program, have a look at \ref faq_diff.
\subsection faq_diff How to avoid memory waste when manipulating multiple similar topologies?
hwloc does not share information between topologies.
If multiple similar topologies are loaded in memory, for instance
the topologies of different identical nodes of a cluster,
lots of information will be duplicated.
hwloc/diff.h (see also \ref hwlocality_diff) offers the ability to
compute topology differences, apply or unapply them, or export/import
to/from XML.
However, this feature is limited to basic differences such as attribute changes.
It does not support complex modifications such as adding or removing some objects.
\subsection faq_annotate How do I annotate the topology with private notes?
Each hwloc object contains a userdata field that may be used by
applications to store private pointers. This field is only valid
during the lifetime of the containing object and topology.
It becomes invalid as soon as the topology is destroyed,
or as soon as the object disappears, for instance when restricting
the topology.
The userdata field is not exported/imported to/from XML by default since
hwloc does not know what it contains.
This behavior may be changed by specifying application-specific callbacks
with hwloc_topology_set_userdata_export_callback()
and hwloc_topology_set_userdata_import_callback().
Each object may also contain some info attributes
(key name and value) that are set up by hwloc during discovery
and that may be extended by the user with
hwloc_obj_add_info() (see also \ref attributes).
Contrary to the userdata field which is unique, multiple info
attributes may exist for each object, even with the same name.
These attributes are always exported to XML.
However, only character strings may be used as key names and values.
It is also possible to insert Misc objects with a custom name
anywhere as a leaf of the topology (see \ref miscobjs).
And Misc objects may have their own userdata and info attributes
just like any other object.
The hwloc-annotate command-line tool may be used for adding
Misc objects and info attributes.
There is also a topology-specific userdata pointer that can be used
to recognize different topologies by storing a custom pointer.
It may be manipulated with hwloc_topology_set_userdata()
and hwloc_topology_get_userdata().
\htmlonly
\endhtmlonly
\section faq3 Caveats
\subsection faq_slow_lstopo Why is hwloc slow?
Building a hwloc topology on a large machine may be slow because
the discovery of hundreds of hardware cores or threads takes time
(especially when reading thousands of sysfs files on Linux).
Ignoring some objects (for instance caches) that aren't useful
to the current application may reduce this overhead (see \ref faq_why).
One should also consider using XML (see \ref faq_xml) to work
around such issues.
Additionally, lstopo enables most hwloc objects and discovery flags
by default so that the output topology is as precise as possible
(while hwloc disables many of them by default).
This includes I/O device discovery through PCI libraries as well as external
libraries such as NVML.
To speed up lstopo, you may disable such features with command-line
options such as \--no-io.
When NVIDIA GPU probing is enabled with CUDA or NVML, one should make sure that
the Persistent mode is enabled (with nvidia-smi -pm 1)
to avoid significant GPU initialization overhead.
When AMD GPU discovery is enabled with OpenCL and hwloc is used remotely
over ssh, some spurious round-trips on the network may significantly
increase the discovery time.
Forcing the DISPLAY environment variable to the remote X server
display (usually :0) instead of only setting the COMPUTE
variable may avoid this.
Also remember that these components may be disabled at build-time with
configure flags such as \--disable-opencl, \--disable-cuda or \--disable-nvml,
and at runtime with the environment variable
HWLOC_COMPONENTS=-opencl,-cuda,-nvml
or with hwloc_topology_set_components().
\subsection faq_privileged Does hwloc require privileged access?
hwloc discovers the topology by querying the operating system.
Some minor features may require privileged access to the operating
system.
For instance memory module discovery on Linux is reserved to root,
and the entire PCI discovery on Solaris and BSDs requires access to
some special files that are usually restricted to root
(/dev/pci* or /devices/pci*).
To work around this limitation, it is recommended to export the
topology as an XML file generated by the administrator (with the
lstopo program) and make it available to all users
(see \ref xml).
It will offer all discovery information to any application without
requiring any privileged access anymore.
Only the necessary hardware characteristics will be exported, no
sensitive information will be disclosed through this XML export.
This XML-based model also has the advantage of speeding up the
discovery because reading an XML topology is usually much faster
than querying the operating system again.
The utility hwloc-dump-hwdata is also involved in gathering
privileged information at boot time and making it available to
non-privileged users (note that this may require a specific SELinux
MLS policy module). However, it only applies to Intel Xeon Phi processors
for now (see \ref faq_knl_dump).
See also HWLOC_DUMPED_HWDATA_DIR in \ref envvar for details
about the location of dumped files.
\subsection faq_os_error What should I do when hwloc reports "operating system" warnings?
When the operating system reports invalid locality information (because
of either software or hardware bugs), hwloc may fail to insert some objects
in the topology because they cannot fit in the already built tree of resources.
If so, hwloc will report a warning like the following.
The object causing this error is ignored, the discovery continues but the
resulting topology will miss some objects and may be asymmetric
(see also \ref faq_asymmetric).
\verbatim
****************************************************************************
* hwloc received invalid information from the operating system.
*
* L3 (cpuset 0x000003f0) intersects with NUMANode (P#0 cpuset 0x0000003f) without inclusion!
* Error occurred in topology.c line 940
*
* Please report this error message to the hwloc user's mailing list,
* along with the files generated by the hwloc-gather-topology script.
*
* hwloc will now ignore this invalid topology information and continue.
****************************************************************************
\endverbatim
These errors are common on large AMD platforms because of BIOS and/or Linux
kernel bugs causing invalid L3 cache information.
In the above example, the hardware reports
an L3 cache that is shared by 2 cores in the first NUMA node and 4 cores
in the second NUMA node. That's wrong, it should actually be shared by all 6
cores in a single NUMA node.
The resulting topology will miss some L3 caches.
If your application does not care about cache sharing, or if you do not plan to
request cache-aware binding in your process launcher, you may likely ignore
this error (and hide it by setting HWLOC_HIDE_ERRORS=1 in your environment).
Some platforms report similar warnings about conflicting Packages and NUMANodes.
On x86 hosts, passing HWLOC_COMPONENTS=x86 in the environment may
work around some of these issues by switching to a different way to discover the topology.
Upgrading the BIOS and/or the operating system may help.
Otherwise, as explained in the message, reporting this issue to the hwloc developers
(by sending the tarball that is generated by the hwloc-gather-topology script
on this platform) is a good way to make sure that this is a software
(operating system) or hardware bug (BIOS, etc.).
See also \ref bugs. Opening an issue on GitHub automatically displays hints
on what information you should provide when reporting such bugs.
\subsection faq_valgrind Why does Valgrind complain about hwloc memory leaks?
If you are debugging your application with Valgrind, you want to
avoid memory leak reports that are caused by hwloc and not by your
program.
hwloc itself is often checked with Valgrind to make sure it does
not leak memory.
However, some global variables in hwloc dependencies are never freed.
For instance libz allocates its global state once at startup and
never frees it so that it may be reused later.
Some libxml2 global state is also never freed because hwloc does not
know whether it can safely ask libxml2 to free it (the application may
also be using libxml2 outside of hwloc).
These unfreed variables cause leak reports in Valgrind.
hwloc installs a Valgrind suppressions file to hide them.
You should pass the following command-line option to Valgrind to use it:
\verbatim
--suppressions=/path/to/hwloc-valgrind.supp
\endverbatim
\htmlonly
\endhtmlonly
\section faq4 Platform-specific
\subsection faq_knl_numa How do I find the local MCDRAM NUMA node on Intel Xeon Phi processor?
Intel Xeon Phi processors introduced a new memory architecture by
possibly having two distinct local memories:
some normal memory (DDR) and some high-bandwidth on-package memory (MCDRAM).
Processors can be configured in various clustering modes to have up to 4 Clusters.
Moreover, each Cluster (quarter, half or whole processor) may have its own local
parts of the DDR and of the MCDRAM.
This memory and clustering configuration may be probed by looking at MemoryMode
and ClusterMode attributes, see \ref attributes_info_platform and doc/examples/get-knl-modes.c
in the source directory.
Starting with version 2.0, hwloc properly exposes this memory
configuration.
DDR and MCDRAM are attached as two memory children of the same parent,
DDR first, and MCDRAM second if any.
Depending on the processor configuration, that parent may be a Package,
a Cache, or a Group object of type Cluster.
Hence cores may have one or two local NUMA nodes, listed in the core nodeset.
An application may allocate local memory from a core by using that nodeset.
The operating system will actually allocate from the DDR when
possible, or fallback to the MCDRAM.
To allocate specifically on one of these memories,
one should walk up the parent pointers until finding an object with
some memory children.
Looking at these memory children will give the DDR first,
then the MCDRAM if any.
Their nodeset may then be used for allocating or binding memory buffers.
One may also traverse the list of NUMA nodes until finding some whose
cpuset matches the target core or PUs.
The MCDRAM NUMA nodes may be identified thanks to the subtype field
which is set to MCDRAM.
Command-line tools such as hwloc-bind may bind memory on the MCDRAM by
using the hbm keyword. For instance, to bind on the first MCDRAM NUMA node:
\verbatim
$ hwloc-bind --membind --hbm numa:0 -- myprogram
$ hwloc-bind --membind numa:0 -- myprogram
\endverbatim
\subsection faq_knl_dump Why do I need hwloc-dump-hwdata for memory on Intel Xeon Phi processor?
Intel Xeon Phi processors may use the on-package memory (MCDRAM)
as either memory or a memory-side cache
(reported as a L3 cache by hwloc by default,
see HWLOC_KNL_MSCACHE_L3 in \ref envvar).
There are also several clustering modes that significantly affect the memory organization
(see \ref faq_knl_numa for more information about these modes).
Details about these are currently only available to privileged users.
Without them, hwloc relies on a heuristic for guessing the modes.
The hwloc-dump-hwdata utility may be used to dump this privileged binary information
into human-readable and world-accessible files that the hwloc library will later load.
The utility should usually run as root once during boot, in order to update dumped
information (stored under /var/run/hwloc by default) in case the MCDRAM or clustering configuration
changed between reboots.
When SELinux MLS policy is enabled, a specific hwloc policy module may be required
so that all users get access to the dumped files (in /var/run/hwloc by default).
One may use hwloc policy files from the SELinux Reference Policy at
https://github.com/TresysTechnology/refpolicy-contrib
(see also the documentation at https://github.com/TresysTechnology/refpolicy/wiki/GettingStarted).
hwloc-dump-hwdata requires the dmi-sysfs kernel module to be loaded.
The utility is currently unneeded on platforms without Intel Xeon Phi processors.
See HWLOC_DUMPED_HWDATA_DIR in \ref envvar for details
about the location of dumped files.
\subsection faq_bgq How do I build hwloc for BlueGene/Q?
IBM BlueGene/Q machines run a standard Linux on the login/frontend nodes
and a custom CNK (Compute Node Kernel) on the compute nodes.
To discover the topology of a login/frontend node, hwloc should be
configured as usual, without any BlueGene/Q-specific option.
However, one would likely rather discover the topology of the compute nodes
where parallel jobs are actually running.
If so, hwloc must be cross-compiled with the following configuration line:
\verbatim
./configure --host=powerpc64-bgq-linux --disable-shared --enable-static \
CPPFLAGS='-I/bgsys/drivers/ppcfloor -I/bgsys/drivers/ppcfloor/spi/include/kernel/cnk/'
\endverbatim
CPPFLAGS may have to be updated if your platform headers are installed
in a different directory.
\subsection faq_windows How do I build hwloc for Windows?
hwloc releases are available as pre-built ZIPs for Windows on both 32-bit
and 64-bit x86 platforms.
They are built using MSYS2 and MinGW on a Windows host.
Such an environment allows using the Unix-like configure, make
and make install steps without having to tweak too many variables or options.
One may look at contrib/ci.inria.fr/job-3-mingw.sh in the hwloc
repository for an example used for nightly testing.
hwloc releases also contain a basic Microsoft Visual Studio solution
under contrib/windows/.
\subsection faq_netbsd_bind How to get useful topology information on NetBSD?
The NetBSD (and FreeBSD) backend uses x86-specific topology discovery
(through the x86 component).
This implementation requires CPU binding so as to query topology
information from each individual processor.
This means that hwloc cannot find any useful topology information
unless user-level process binding is allowed by the NetBSD kernel.
The security.models.extensions.user_set_cpu_affinity
sysctl variable must be set to 1 to do so.
Otherwise, only the number of processors will be detected.
\subsection faq_aix_bind Why does binding fail on AIX?
The AIX operating system requires specific user capabilities for
attaching processes to resource sets (CAP_NUMA_ATTACH).
Otherwise functions such as hwloc_set_cpubind() fail (return -1 with errno set to EPERM).
This capability must also be inherited (through the additional CAP_PROPAGATE capability)
if you plan to bind a process before forking another process,
for instance with hwloc-bind.
These capabilities may be given by the administrator with:
\verbatim
chuser "capabilities=CAP_PROPAGATE,CAP_NUMA_ATTACH"
\endverbatim
\htmlonly
\endhtmlonly
\section faq5 Compatibility between hwloc versions
\subsection faq_version_api How do I handle API changes?
The hwloc interface is extended with every new major release.
Any application using the hwloc API should be prepared to check at
compile-time whether some features are available in the currently
installed hwloc distribution.
For instance, to check whether the hwloc version is at least 2.0, you should use:
\verbatim
#include <hwloc.h>
#if HWLOC_API_VERSION >= 0x00020000
...
#endif
\endverbatim
To check for the API of release X.Y.Z at build time,
you may compare ::HWLOC_API_VERSION with (X<<16)+(Y<<8)+Z.
To support older releases that do not have HWLOC_OBJ_NUMANODE
and HWLOC_OBJ_PACKAGE yet, you may use:
\verbatim
#include <hwloc.h>
#if HWLOC_API_VERSION < 0x00010b00
#define HWLOC_OBJ_NUMANODE HWLOC_OBJ_NODE
#define HWLOC_OBJ_PACKAGE HWLOC_OBJ_SOCKET
#endif
\endverbatim
Once a program is built against a hwloc library, it may also dynamically
link with compatible libraries from other hwloc releases.
The version of that runtime library may be queried with hwloc_get_api_version().
See \ref faq_version_abi for using this function for testing ABI compatibility.
\subsection faq_version What is the difference between API and library version numbers?
::HWLOC_API_VERSION is the version of the API.
It changes when functions are added, modified, etc.
However it does not necessarily change from one release to another.
For instance, two releases of the same series (e.g. 2.0.3 and 2.0.4)
usually have the same ::HWLOC_API_VERSION (0x00020000).
However their HWLOC_VERSION strings are different
(\"2.0.3\" and \"2.0.4\" respectively).
\subsection faq_version_abi How do I handle ABI breaks?
The hwloc interface was deeply modified in release 2.0
to fix several issues of the 1.x interface
(see \ref upgrade_to_api_2x and the NEWS file in the source directory for details).
The ABI was broken, which means
applications must be recompiled against the new 2.0 interface.
To check that you are not mixing old/recent headers with a recent/old runtime library,
check the major revision number in the API version:
\verbatim
#include <hwloc.h>
unsigned version = hwloc_get_api_version();
if ((version >> 16) != (HWLOC_API_VERSION >> 16)) {
fprintf(stderr,
"%s compiled for hwloc API 0x%x but running on library API 0x%x.\n"
"You may need to point LD_LIBRARY_PATH to the right hwloc library.\n"
"Aborting since the new ABI is not backward compatible.\n",
callname, HWLOC_API_VERSION, version);
exit(EXIT_FAILURE);
}
\endverbatim
To specifically detect v2.0 issues:
\verbatim
#include <hwloc.h>
#if HWLOC_API_VERSION >= 0x00020000
/* headers are recent */
if (hwloc_get_api_version() < 0x20000)
... error out, the hwloc runtime library is older than 2.0 ...
#else
/* headers are pre-2.0 */
if (hwloc_get_api_version() >= 0x20000)
... error out, the hwloc runtime library is more recent than 2.0 ...
#endif
\endverbatim
In theory, library sonames prevent linking with incompatible libraries.
However custom hwloc installations or improperly configured build environments
may still lead to such issues.
Hence running one of the above (cheap) checks before initializing a hwloc topology
may be useful.
\subsection faq_version_xml Are XML topology files compatible between hwloc releases?
XML topology files are forward-compatible:
an XML file may be loaded by a hwloc library that is more recent
than the hwloc release that exported that file.
However, hwloc XMLs are not always backward-compatible:
Topologies exported by hwloc 2.x cannot be imported by 1.x by default
(see \ref upgrade_to_api_2x_xml for working around such issues).
There are also some corner cases where backward compatibility
is not guaranteed because of changes between major releases
(for instance 1.11 XMLs could not be imported in 1.10).
XMLs are exchanged at runtime between some components of the HPC software stack
(for instance the resource managers and MPI processes).
Building all these components on the same (cluster-wide)
hwloc installation is a good way to avoid such incompatibilities.
\subsection faq_version_synthetic Are synthetic strings compatible between hwloc releases?
Synthetic strings (see \ref synthetic) are forward-compatible:
a synthetic string generated by a release may be imported by future hwloc libraries.
However they are often not backward-compatible because new details may have been
added to synthetic descriptions in recent releases.
Some flags may be given to hwloc_topology_export_synthetic() to avoid such details
and stay backward compatible.
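For reference, a synthetic string describes one level per depth; for instance the following (made-up) string describes a machine with 2 packages, 4 cores per package and 2 PUs per core:
\verbatim
Package:2 Core:4 PU:2
\endverbatim
Attributes (cache sizes, memory sizes, etc.) exported by recent releases are appended in parentheses; these are the kind of details that the export flags can omit for backward compatibility.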
\subsection faq_version_shmem Is it possible to share a shared-memory topology between different hwloc releases?
Shared-memory topologies (see \ref hwlocality_shmem) have strong
requirements on compatibility between hwloc libraries.
Adopting a shared-memory topology fails
if it was exported by a non-compatible hwloc release.
Releases with same major revision are usually compatible
(e.g. hwloc 2.0.4 may adopt a topology exported by 2.0.3)
but different major revisions may be incompatible
(e.g. hwloc 2.1.0 cannot adopt from 2.0.x).
Topologies are shared at runtime between some components of the HPC software stack
(for instance the resource managers and MPI processes).
Building all these components on the same (system-wide) hwloc installation
is a good way to avoid such incompatibilities.
\page upgrade_to_api_2x Upgrading to the hwloc 2.0 API
\htmlonly
\endhtmlonly
See \ref faq5 for detecting the hwloc version that you are compiling
and/or running against.
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_memory New Organization of NUMA nodes and Memory
\subsection upgrade_to_api_2x_memory_children Memory children
In hwloc v1.x, NUMA nodes were inside the main tree; for instance Packages
contained 2 NUMA nodes, each of which contained an L3 and several other caches.
Starting with hwloc v2.0, NUMA nodes are not in the main tree anymore.
They are attached under objects as
Memory Children on the side
of normal children.
This memory children list starts at obj->memory_first_child
and its size is obj->memory_arity.
Hence there can now be multiple local NUMA nodes,
for instance on Intel Xeon Phi processors.
The normal list of children (starting at obj->first_child,
ending at obj->last_child, of size obj->arity,
and available as the array obj->children)
now only contains CPU-side objects:
PUs, Cores, Packages, Caches, Groups, Machine and System.
hwloc_get_next_child() may still be used to iterate over all children of all lists.
Hence the CPU-side hierarchy is built using normal children,
while memory is attached to that hierarchy depending on its affinity.
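Schematically, the separation between the two lists can be pictured with a toy structure (a simplified stand-in for struct hwloc_obj, for illustration only; it is not the real hwloc type):

```c
#include <stddef.h>

/* Toy stand-in for struct hwloc_obj, keeping only the fields
 * relevant to the normal and memory children lists. */
struct toy_obj {
  struct toy_obj *first_child;        /* head of the normal children list */
  struct toy_obj *memory_first_child; /* head of the memory children list */
  struct toy_obj *next_sibling;       /* next child within the same list */
};

/* Walk the memory children list the way real code would walk
 * obj->memory_first_child, and count them (i.e. memory_arity). */
static unsigned count_memory_children(const struct toy_obj *obj)
{
  unsigned n = 0;
  const struct toy_obj *m;
  for (m = obj->memory_first_child; m != NULL; m = m->next_sibling)
    n++;
  return n;
}

/* Build a package with two NUMA-node memory children and count them. */
static unsigned demo(void)
{
  struct toy_obj numa1 = { NULL, NULL, NULL };
  struct toy_obj numa0 = { NULL, NULL, &numa1 };
  struct toy_obj package = { NULL, &numa0, NULL };
  return count_memory_children(&package);
}
```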
\subsection upgrade_to_api_2x_memory_examples Examples
- a UMA machine with 2 packages and a single NUMA node is now modeled
as a "Machine" object with two "Package" children
and one "NUMANode" memory child (displayed first in the lstopo output below):
\verbatim
Machine (1024MB total)
NUMANode L#0 (P#0 1024MB)
Package L#0
Core L#0 + PU L#0 (P#0)
Core L#1 + PU L#1 (P#1)
Package L#1
Core L#2 + PU L#2 (P#2)
Core L#3 + PU L#3 (P#3)
\endverbatim
- a machine with 2 packages with one NUMA node and 2 cores in each is now:
\verbatim
Machine (2048MB total)
Package L#0
NUMANode L#0 (P#0 1024MB)
Core L#0 + PU L#0 (P#0)
Core L#1 + PU L#1 (P#1)
Package L#1
NUMANode L#1 (P#1 1024MB)
Core L#2 + PU L#2 (P#2)
Core L#3 + PU L#3 (P#3)
\endverbatim
- if there are two NUMA nodes per package, a Group object may be added to keep
cores together with their local NUMA node:
\verbatim
Machine (4096MB total)
Package L#0
Group0 L#0
NUMANode L#0 (P#0 1024MB)
Core L#0 + PU L#0 (P#0)
Core L#1 + PU L#1 (P#1)
Group0 L#1
NUMANode L#1 (P#1 1024MB)
Core L#2 + PU L#2 (P#2)
Core L#3 + PU L#3 (P#3)
Package L#1
[...]
\endverbatim
- if the platform has L3 caches whose localities are identical to NUMA nodes, Groups aren't needed:
\verbatim
Machine (4096MB total)
Package L#0
L3 L#0 (16MB)
NUMANode L#0 (P#0 1024MB)
Core L#0 + PU L#0 (P#0)
Core L#1 + PU L#1 (P#1)
L3 L#1 (16MB)
NUMANode L#1 (P#1 1024MB)
Core L#2 + PU L#2 (P#2)
Core L#3 + PU L#3 (P#3)
Package L#1
[...]
\endverbatim
\subsection upgrade_to_api_2x_numa_level NUMA level and depth
NUMA nodes are not in the "main" tree of normal objects anymore.
Hence, they do not have a meaningful depth anymore (just like I/O and Misc objects).
They have a virtual (negative) depth (::HWLOC_TYPE_DEPTH_NUMANODE)
so that functions manipulating depths and levels still work,
and so that we can still iterate over the level of NUMA nodes just like over any other level.
For instance we can still use lines such as
\verbatim
int depth = hwloc_get_type_depth(topology, HWLOC_OBJ_NUMANODE);
hwloc_obj_t obj = hwloc_get_obj_by_type(topology, HWLOC_OBJ_NUMANODE, 4);
hwloc_obj_t node = hwloc_get_next_obj_by_depth(topology, HWLOC_TYPE_DEPTH_NUMANODE, prev);
\endverbatim
The NUMA depth should not be compared with the depths of other levels.
Unmodified code that still compares NUMA and Package depths
(to find out whether Packages contain NUMA nodes or the contrary)
would now always conclude that Packages contain NUMA nodes (because the NUMA depth is negative).
However, the depth of the Normal parents of NUMA nodes may be used instead.
In the last example above, NUMA nodes are attached to L3 caches,
hence one may compare the depth of Packages and L3 to find out
that NUMA nodes are contained in Packages.
This depth of parents may be retrieved with hwloc_get_memory_parents_depth().
However, this function may return ::HWLOC_TYPE_DEPTH_MULTIPLE
on future platforms if NUMA nodes are attached to different levels.
\subsection upgrade_to_api_2x_memory_find Finding Local NUMA nodes and looking at Children and Parents
Applications that walked up/down to find NUMANode parent/children must
now be updated.
Instead of looking directly for a NUMA node, one should now look for
an object that has some memory children.
NUMA node(s) will be attached there.
For instance, when looking for a NUMA node above a given core object core:
\verbatim
hwloc_obj_t parent = core->parent;
while (parent && !parent->memory_arity)
parent = parent->parent; /* no memory child, walk up */
if (parent)
/* use parent->memory_first_child (and its siblings if there are multiple local NUMA nodes) */
\endverbatim
The list of local NUMA nodes (usually a single one) is also described
by the nodeset attribute of each object
(which contains the physical indexes of these nodes).
Iterating over the NUMA level is also an easy way to find local NUMA nodes:
\verbatim
hwloc_obj_t tmp = NULL;
while ((tmp = hwloc_get_next_obj_by_type(topology, HWLOC_OBJ_NUMANODE, tmp)) != NULL) {
if (hwloc_bitmap_isset(obj->nodeset, tmp->os_index))
/* tmp is a NUMA node local to obj, use it */
}
\endverbatim
Similarly, code that finds objects close to a given NUMA node
should be updated too.
Instead of looking at the NUMA node parents/children, one should
now find a Normal parent above that NUMA node, and then look
at its parents/children as usual:
\verbatim
hwloc_obj_t tmp = obj->parent;
while (hwloc_obj_type_is_memory(tmp))
tmp = tmp->parent;
/* now use tmp instead of obj */
\endverbatim
To avoid such hwloc v2.x-specific and NUMA-specific cases in the code,
a generic lookup for any kind of object, including NUMA nodes,
may also be implemented by iterating over a level.
For instance, finding an object of a given type type which either
contains or is included in a given object obj can be
performed by traversing the level of that type and comparing CPU sets:
\verbatim
hwloc_obj_t tmp = NULL;
while ((tmp = hwloc_get_next_obj_by_type(topology, type, tmp)) != NULL) {
if (hwloc_bitmap_intersects(tmp->cpuset, obj->cpuset))
/* tmp matches, use it */
}
\endverbatim
This generic lookup works whenever type and obj
are Normal or Memory objects, since both kinds have CPU sets.
Moreover, it is compatible with the hwloc v1.x API.
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_children 4 Kinds of Objects and Children
\subsection upgrade_to_api_2x_io_misc_children I/O and Misc children
I/O children are not in the main object children list anymore either.
They are in the list starting at obj->io_first_child,
whose size is obj->io_arity.
Misc children are not in the main object children list anymore.
They are in the list starting at obj->misc_first_child,
whose size is obj->misc_arity.
See hwloc_obj for details about children lists.
hwloc_get_next_child() may still be used to iterate over all children of all lists.
\subsection upgrade_to_api_2x_kinds_subsec Kinds of objects
Given the above, objects may now be of 4 kinds:
- Normal (everything not listed below, including Machine, Package, Core, PU, CPU Caches, etc);
- Memory (currently NUMA nodes or Memory-side Caches), attached to parents as Memory children;
- I/O (Bridges, PCI and OS devices), attached to parents as I/O children;
- Misc objects, attached to parents as Misc children.
See hwloc_obj for details about children lists.
For a given object type, the kind may be found with hwloc_obj_type_is_normal(),
hwloc_obj_type_is_memory(), hwloc_obj_type_is_io(),
or comparing with ::HWLOC_OBJ_MISC.
Normal and Memory objects have (non-NULL) CPU sets and nodesets,
while I/O and Misc objects don't have any sets (they are NULL).
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_cache HWLOC_OBJ_CACHE replaced
Instead of a single HWLOC_OBJ_CACHE, there are now 8 types
::HWLOC_OBJ_L1CACHE, ..., ::HWLOC_OBJ_L5CACHE,
::HWLOC_OBJ_L1ICACHE, ..., ::HWLOC_OBJ_L3ICACHE.
Cache object attributes are unchanged.
hwloc_get_cache_type_depth() is no longer needed to disambiguate cache types,
since the new types can be passed to hwloc_get_type_depth()
without ever getting ::HWLOC_TYPE_DEPTH_MULTIPLE.
hwloc_obj_type_is_cache(), hwloc_obj_type_is_dcache() and hwloc_obj_type_is_icache()
may be used to check whether a given type is a cache, data/unified cache or instruction cache.
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_allowed allowed_cpuset and allowed_nodeset only in the main topology
Objects do not have allowed_cpuset and allowed_nodeset anymore.
They are only available for the entire topology using
hwloc_topology_get_allowed_cpuset() and hwloc_topology_get_allowed_nodeset().
As usual, those are only needed when the INCLUDE_DISALLOWED topology flag is given,
which means disallowed objects are kept in the topology.
If so, one may find out whether some PUs inside an object are allowed by checking
\verbatim
hwloc_bitmap_intersects(obj->cpuset, hwloc_topology_get_allowed_cpuset(topology))
\endverbatim
For NUMA nodes, replace cpusets with nodesets.
To find out which PUs are actually allowed, replace hwloc_bitmap_intersects()
with hwloc_bitmap_and() to compute the actual intersection.
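The difference between testing the intersection and computing it can be illustrated with plain 64-bit masks (a simplified stand-in for hwloc bitmaps, where bit i represents PU #i; these helpers are illustrative, not hwloc API):

```c
#include <stdint.h>

/* Illustrative stand-in for hwloc_bitmap_intersects():
 * nonzero if masks a and b share at least one bit. */
static int mask_intersects(uint64_t a, uint64_t b)
{
  return (a & b) != 0;
}

/* Illustrative stand-in for hwloc_bitmap_and():
 * the set of bits present in both masks. */
static uint64_t mask_and(uint64_t a, uint64_t b)
{
  return a & b;
}
```

For instance, if an object's cpuset is 0x0F and the allowed cpuset is 0x03, mask_intersects() reports that some PUs are allowed, while mask_and() returns 0x03, i.e. which ones.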
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_depth Object depths are now signed int
obj->depth, as well as depths given to functions
such as hwloc_get_obj_by_depth() or returned by hwloc_topology_get_depth(),
are now signed int.
Other depths, such as the cache-specific depth attribute, are still unsigned.
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_memory_attrs Memory attributes become NUMANode-specific
Memory attributes such as obj->memory.local_memory
are now only available in the NUMANode-specific attribute
obj->attr->numanode.local_memory.
obj->memory.total_memory is available
in all objects as obj->total_memory.
See hwloc_obj_attr_u::hwloc_numanode_attr_s and hwloc_obj for details.
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_config Topology configuration changes
The old ignoring API as well as several configuration flags
are replaced with the new filtering API,
see hwloc_topology_set_type_filter() and its variants,
and ::hwloc_type_filter_e for details.
-
hwloc_topology_ignore_type(), hwloc_topology_ignore_type_keep_structure()
and hwloc_topology_ignore_all_keep_structure() are respectively superseded by
\verbatim
hwloc_topology_set_type_filter(topology, type, HWLOC_TYPE_FILTER_KEEP_NONE);
hwloc_topology_set_type_filter(topology, type, HWLOC_TYPE_FILTER_KEEP_STRUCTURE);
hwloc_topology_set_all_types_filter(topology, HWLOC_TYPE_FILTER_KEEP_STRUCTURE);
\endverbatim
Also, the meaning of KEEP_STRUCTURE has changed (only entire levels may be ignored, instead of single objects); the old behavior is not available anymore.
-
HWLOC_TOPOLOGY_FLAG_ICACHES is superseded by
\verbatim
hwloc_topology_set_icache_types_filter(topology, HWLOC_TYPE_FILTER_KEEP_ALL);
\endverbatim
-
HWLOC_TOPOLOGY_FLAG_WHOLE_IO, HWLOC_TOPOLOGY_FLAG_IO_DEVICES and HWLOC_TOPOLOGY_FLAG_IO_BRIDGES are replaced.
To keep all I/O devices (PCI, Bridges, and OS devices), use:
\verbatim
hwloc_topology_set_io_types_filter(topology, HWLOC_TYPE_FILTER_KEEP_ALL);
\endverbatim
To only keep important devices (Bridges with children, common PCI devices and OS devices):
\verbatim
hwloc_topology_set_io_types_filter(topology, HWLOC_TYPE_FILTER_KEEP_IMPORTANT);
\endverbatim
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_xml XML changes
2.0 XML files are not compatible with 1.x.
2.0 can load 1.x files, but only NUMA distances are imported;
other distance matrices are ignored (they were never used by default anyway).
2.0 can export 1.x-compatible files, but only distances attached to the root object
are exported (i.e. distances that cover the entire machine);
other distance matrices are dropped.
Users are advised to negotiate hwloc versions between exporter and importer:
if the importer isn't 2.x, the exporter should export to 1.x;
otherwise, things should work by default.
Hence hwloc_topology_export_xml() and hwloc_topology_export_xmlbuffer() have a new flags argument
to force a hwloc-1.x-compatible XML export.
-
If both sides support 2.x, don't pass any flag.
-
When the importer uses hwloc 1.x, export with ::HWLOC_TOPOLOGY_EXPORT_XML_FLAG_V1.
Otherwise the importer will fail to import.
-
When the exporter uses hwloc 1.x, it cannot pass any flag,
and a 2.0 importer can import without problem.
\verbatim
#if HWLOC_API_VERSION >= 0x20000
if (need 1.x compatible XML export)
hwloc_topology_export_xml(...., HWLOC_TOPOLOGY_EXPORT_XML_FLAG_V1);
else /* need 2.x compatible XML export */
hwloc_topology_export_xml(...., 0);
#else
hwloc_topology_export_xml(....);
#endif
\endverbatim
Additionally, hwloc_topology_diff_load_xml(), hwloc_topology_diff_load_xmlbuffer(),
hwloc_topology_diff_export_xml(), hwloc_topology_diff_export_xmlbuffer()
and hwloc_topology_diff_destroy() lost the topology argument:
The first argument (topology) isn't needed anymore.
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_distances Distances API totally rewritten
The new distances API is in hwloc/distances.h.
Distances are not accessible directly from objects anymore.
One should first call hwloc_distances_get() (or a variant)
to retrieve distances (possibly with one call to get the
number of available distances structures, and another call
to actually get them).
One may then consult these structures, and finally release them.
The set of objects involved in a distances structure is specified
by an array of objects; it does not necessarily cover the entire machine.
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_return Return values of functions
Bitmap functions (and a couple other functions) can return errors (in theory).
Most bitmap functions may have to reallocate the internal bitmap storage.
In v1.x, they would silently crash if realloc failed.
In v2.0, they now return an int that can be negative on error.
However, the preallocated storage is 512 bits,
hence realloc is not even used unless hwloc runs
on machines with larger PU or NUMA node indexes.
hwloc_obj_add_info(), hwloc_cpuset_from_nodeset() and hwloc_cpuset_to_nodeset()
also return an int, which is -1 in case of allocation errors.
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_misc Misc API changes
-
hwloc_type_sscanf() extends hwloc_obj_type_sscanf()
by passing a union hwloc_obj_attr_u which may receive
Cache, Group, Bridge or OS device attributes.
-
hwloc_type_sscanf_as_depth() is also added to
directly return the corresponding level depth within a topology.
-
hwloc_topology_insert_misc_object_by_cpuset() is replaced
with hwloc_topology_alloc_group_object() and hwloc_topology_insert_group_object().
-
hwloc_topology_insert_misc_object_by_parent() is replaced
with hwloc_topology_insert_misc_object().
\htmlonly
\endhtmlonly
\section upgrade_to_api_2x_removals API removals and deprecations
-
HWLOC_OBJ_SYSTEM removed:
The root object is always ::HWLOC_OBJ_MACHINE.
-
*_membind_nodeset() memory binding interfaces deprecated:
One should use the variant without _nodeset suffix and pass the ::HWLOC_MEMBIND_BYNODESET flag.
-
HWLOC_MEMBIND_REPLICATE removed:
No supported operating system implements it anymore.
-
hwloc_obj_snprintf() removed because it was long-deprecated
by hwloc_obj_type_snprintf() and hwloc_obj_attr_snprintf().
-
hwloc_obj_type_sscanf() deprecated, hwloc_obj_type_of_string() removed.
-
hwloc_cpuset_from/to_nodeset_strict() deprecated:
Now useless since all topologies contain NUMA nodes. Use the variant without the _strict suffix.
-
hwloc_distribute() and hwloc_distributev() removed,
deprecated by hwloc_distrib().
-
The Custom interface (hwloc_topology_set_custom(), etc.)
was removed, as well as the corresponding command-line tools (hwloc-assembler, etc.).
Topologies always start with an object with valid cpusets and nodesets.
-
obj->online_cpuset removed:
Offline PUs are simply listed in the complete_cpuset as previously.
-
obj->os_level removed.
*/