Overview
The osd_memory_target parameter in Ceph sets a soft limit on the memory usage of Object Storage Daemons (OSDs). It helps manage how much memory each OSD uses, but it does not enforce a strict cap: under certain conditions, OSDs can exceed the configured target.
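To inspect the currently effective value (in bytes), you can query the configuration database or a running daemon. A minimal sketch, assuming an OSD with ID 0 exists on the local host:
# cluster-wide setting from the config database
ceph config get osd osd_memory_target
# value a specific running OSD is actually using
ceph daemon osd.0 config get osd_memory_target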
Recommended Values
- 8 GiB per OSD for spinning HDDs or SSDs.
- Up to 12 GiB per OSD for NVMe drives, which handle higher IOPS and can benefit from more memory.
- Make sure the host has enough memory for the operating system and other Ceph components like Monitor and Manager daemons.
- When dealing with mixed storage types, allocate memory targets in proportion to the performance of the drives; a host-level sizing example follows this list.
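As a rough illustration of host-level sizing under these recommendations, consider a hypothetical host with 10 HDD OSDs and 2 NVMe OSDs: 10 x 8 GiB + 2 x 12 GiB = 104 GiB for the OSDs alone. With additional headroom for the operating system, Monitor/Manager daemons, and page cache (say 16 to 32 GiB), a 128 GiB host would be a comfortable fit.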
Why OSDs May Exceed osd_memory_target
- Ceph adjusts memory usage dynamically. Situations can arise where Ceph has already freed memory but the Linux kernel has not yet reclaimed it, so the process can appear to sit above the target.
- There is also the bluestore_cache_autotune parameter, which is set to true by default. If it is disabled, Ceph will ignore osd_memory_target and will instead size the OSD caches according to bluestore_cache_size_hdd or bluestore_cache_size_ssd.
- If osd_memory_target_autotune is set to true (by default, it is false on croit), cephadm will dynamically adjust osd_memory_target depending on the system memory available; the commands after this list show how to check these settings.
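To see which of these mechanisms applies on a given cluster, you can query the relevant settings and ask an OSD to hand freed-but-unreclaimed memory back to the kernel. A minimal sketch, assuming an OSD with ID 0:
# check whether BlueStore cache autotuning is enabled (true by default)
ceph config get osd bluestore_cache_autotune
# check whether cephadm may adjust osd_memory_target on its own
ceph config get osd osd_memory_target_autotune
# inspect heap statistics for one OSD, then release freed memory to the kernel
ceph tell osd.0 heap stats
ceph tell osd.0 heap release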
Configuration
Monitoring
- Use tools like ceph daemon osd.<ID> perf dump to check memory usage for individual OSDs.
- Monitor overall system memory with commands like htop or free -h to ensure that memory usage stays within limits.
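As a complementary sketch (again assuming an OSD with ID 0 is running on the local host), the admin socket can also break an OSD's memory usage down by internal memory pool:
# per-pool breakdown of an OSD's memory usage
ceph daemon osd.0 dump_mempools
# host-wide view of used and available memory
free -h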
Adjusting osd_memory_target
- Set the memory target with:
ceph config set osd osd_memory_target <desired_value_in_bytes>
Example: To set 8 GiB:
ceph config set osd osd_memory_target 8589934592
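To confirm the change, or to override the target for a single daemon, the same config interface can be used; the OSD ID 5 below is only an example:
# confirm the new cluster-wide value
ceph config get osd osd_memory_target
# optionally override the target for one specific OSD (here: 12 GiB)
ceph config set osd.5 osd_memory_target 12884901888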