24 Matching Annotations
  1. Sep 2024
    1. static inline unsigned int calc_slab_order(unsigned int size, unsigned int min_order, unsigned int max_order, unsigned int fract_leftover) { unsigned int order; for (order = min_order; order <= max_order; order++) { unsigned int slab_size = (unsigned int)PAGE_SIZE << order; unsigned int rem; rem = slab_size % size; if (rem <= slab_size / fract_leftover) break; } return order; }

      Code to choose how many pages to allocate for a new slab to minimize wasted space from the remainder
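
      A minimal userspace sketch of the same check, assuming PAGE_SIZE is 4096 and using an illustrative object size (720) and waste fraction (1/16):

        #include <stdio.h>

        #define PAGE_SIZE 4096u

        /* Sketch of calc_slab_order(): pick the smallest order whose leftover
         * space (slab_size % size) is at most 1/fract_leftover of the slab. */
        static unsigned int calc_slab_order(unsigned int size, unsigned int min_order,
                                            unsigned int max_order, unsigned int fract_leftover)
        {
                unsigned int order;

                for (order = min_order; order <= max_order; order++) {
                        unsigned int slab_size = PAGE_SIZE << order;

                        if (slab_size % size <= slab_size / fract_leftover)
                                break;
                }
                return order;
        }

        int main(void)
        {
                /* 720-byte objects: one page wastes 496 bytes (~12%), two pages waste
                 * only 272 bytes (~3%), so a 1/16 waste limit picks order 1. */
                printf("order = %u\n", calc_slab_order(720, 0, 3, 16));
                return 0;
        }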

    1. if (folio) { pgoff_t start; rcu_read_lock(); start = page_cache_next_miss(ractl->mapping, index + 1, max_pages); rcu_read_unlock(); if (!start || start - index > max_pages) return; ra->start = start; ra->size = start - index; /* old async_size */ ra->size += req_size; ra->size = get_next_ra_size(ra, max_pages); ra->async_size = ra->size; goto readit;

      Readahead policy for folios marked for readahead: start the next window at the next cache miss and then ramp its size up.
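
      The final step hands the window to get_next_ra_size(); a rough userspace sketch of that kind of growth policy (the thresholds and growth factors here are illustrative, not necessarily the exact kernel values):

        #include <stdio.h>

        /* Sketch of a readahead ramp-up: grow the window aggressively while it
         * is small and cap it at the per-file maximum (values illustrative). */
        static unsigned long next_ra_size(unsigned long cur, unsigned long max)
        {
                if (cur < max / 16)
                        return 4 * cur;
                if (cur <= max / 2)
                        return 2 * cur;
                return max;
        }

        int main(void)
        {
                unsigned long ra = 4, max = 256;        /* window sizes in pages */

                /* Successive windows: 16, 32, 64, 128, 256 (capped at max). */
                for (int i = 0; i < 5; i++) {
                        ra = next_ra_size(ra, max);
                        printf("next window: %lu pages\n", ra);
                }
                return 0;
        }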

  2. Aug 2024
    1. if (hugetlb_cgroup_disabled())

      This is probably not an interesting policy decision for ldos; it is a feature flag for the running OS. But if cgroups were decided by policy, then this flag would be controlled by the cgroup decision.

    1. static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, bool can_swap)

      figure out how many pages to scan.

    2. static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc) {

      sets up the struct scan_control. Most of the values come from elsewhere, but this function seems to bring them all together.

    1. ksm_thread_pages_to_scan

      how many pages to scan. This is what many of the other config values are trying to get at.
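
      Together with the sleep interval, this knob sets the effective scan rate; a tiny illustration (the knob values below are assumptions, the real ones live under /sys/kernel/mm/ksm/):

        #include <stdio.h>

        int main(void)
        {
                unsigned long pages_to_scan = 100;      /* pages scanned per wakeup */
                unsigned long sleep_millisecs = 20;     /* sleep between wakeups */

                /* 100 pages every 20 ms is roughly 5000 pages per second. */
                double pages_per_sec = pages_to_scan * 1000.0 / sleep_millisecs;

                printf("scan rate ~= %.0f pages/s\n", pages_per_sec);
                return 0;
        }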

      function to compute an EWMA (exponentially weighted moving average) for the number of pages to scan. Uses many of the config parameters from sysfs.
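
      A minimal sketch of such an exponentially weighted moving average (the 30% weight is an assumption for illustration):

        #include <stdio.h>

        #define EWMA_WEIGHT 30  /* new sample contributes 30%, previous estimate 70% */

        static unsigned long ewma(unsigned long prev, unsigned long curr)
        {
                return ((100 - EWMA_WEIGHT) * prev + EWMA_WEIGHT * curr) / 100;
        }

        int main(void)
        {
                unsigned long pages_to_scan = 100;      /* previous estimate */

                /* Feeding in a new target of 200 moves the estimate gradually:
                 * 130, 151, 165 over three updates instead of jumping to 200. */
                for (int i = 0; i < 3; i++) {
                        pages_to_scan = ewma(pages_to_scan, 200);
                        printf("pages_to_scan = %lu\n", pages_to_scan);
                }
                return 0;
        }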

  3. Jul 2024
    1. static void scan_time_advisor(void)

      This function calculates the pages to scan based on other metrics such as CPU consumed. Seems like a good place for a learned algorithm.

  4. Jun 2024
    1. /* * The larger the object size is, the more slabs we want on the partial * list to avoid pounding the page allocator excessively. */ s->min_partial = min_t(unsigned long, MAX_PARTIAL, ilog2(s->size) / 2); s->min_partial = max_t(unsigned long, MIN_PARTIAL, s->min_partial);

      A policy decision about how often we may have to go to the page allocator.
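
      A small sketch of the same clamp, showing how min_partial grows with object size (the MIN_PARTIAL/MAX_PARTIAL values of 5 and 10 are taken here as assumptions):

        #include <stdio.h>

        #define MIN_PARTIAL 5   /* illustrative floor */
        #define MAX_PARTIAL 10  /* illustrative cap */

        static unsigned int ilog2u(unsigned long x)
        {
                unsigned int r = 0;

                while (x >>= 1)
                        r++;
                return r;
        }

        static unsigned long min_partial_for(unsigned long size)
        {
                unsigned long n = ilog2u(size) / 2;

                if (n > MAX_PARTIAL)
                        n = MAX_PARTIAL;
                if (n < MIN_PARTIAL)
                        n = MIN_PARTIAL;
                return n;
        }

        int main(void)
        {
                /* Small objects stay at the floor, huge objects hit the cap:
                 * 64 -> 5, 1024 -> 5, 65536 -> 8, 1 MiB -> 10. */
                unsigned long sizes[] = { 64, 1024, 65536, 1UL << 20 };

                for (int i = 0; i < 4; i++)
                        printf("size %lu -> min_partial %lu\n",
                               sizes[i], min_partial_for(sizes[i]));
                return 0;
        }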

    2. /* * calculate_sizes() determines the order and the distribution of data within * a slab object. */ static int calculate_sizes(struct kmem_cache *s) {

      computes several values for the allocator based on the size and flags of the cache being created.
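
      A very rough sketch of the kind of layout math this involves: word-align the requested size, then grow the per-object footprint when debug options need extra metadata (the added amounts below are illustrative, not the kernel's exact layout):

        #include <stdio.h>

        #define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

        int main(void)
        {
                unsigned int object_size = 100;                 /* caller-requested size */
                unsigned int word = sizeof(void *);
                unsigned int size = ALIGN(object_size, word);   /* aligned payload */
                int red_zone = 1, store_user = 0;

                if (red_zone)
                        size += 2 * word;       /* guard words around the object (illustrative) */
                if (store_user)
                        size += 8 * word;       /* alloc/free tracking records (illustrative) */

                printf("object_size %u -> per-object size %u\n", object_size, size);
                return 0;
        }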

    3. #ifndef CONFIG_SLUB_TINY static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)

      Depending on CONFIG_SLUB_TINY, should there be an active slab for each CPU?

    4. static inline int calculate_order(unsigned int size) { unsigned int order; unsigned int min_objects; unsigned int max_objects; unsigned int min_order; min_objects = slub_min_objects; if (!min_objects) {

      calculate the order (power-of-two number of pages) that each slab in this allocator should have.
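
      A userspace sketch of the underlying idea: find the smallest order whose slab holds at least a minimum number of objects (PAGE_SIZE and the example numbers are assumptions):

        #include <stdio.h>

        #define PAGE_SIZE 4096u

        /* Smallest order such that (PAGE_SIZE << order) holds at least
         * min_objects objects of the given size. */
        static unsigned int order_for(unsigned int size, unsigned int min_objects)
        {
                unsigned int order = 0;

                while ((PAGE_SIZE << order) / size < min_objects)
                        order++;
                return order;
        }

        int main(void)
        {
                /* 256-byte objects: a single page already holds 16, so order 0;
                 * 2048-byte objects need 8 pages (order 3) to hold 16. */
                printf("size 256  -> order %u\n", order_for(256, 16));
                printf("size 2048 -> order %u\n", order_for(2048, 16));
                return 0;
        }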

    5. max_objects = max(order_objects(slub_max_order, size), 1U); min_objects = min(min_objects, max_objects); min_order = max_t(unsigned int, slub_min_order, get_order(min_objects * size)); if (order_objects(min_order, size) > MAX_OBJS_PER_PAGE) return get_order(size * MAX_OBJS_PER_PAGE) - 1;

      Policy calculation: clamp the object count between the configured minimum and what fits in a slab of slub_max_order pages.

    6. s->flags & __OBJECT_POISON

      apply policy: object poisoning debug option

    7. s->flags & SLAB_RED_ZONE)

      debug policy (red zoning)

    8. if (s->flags & __CMPXCHG_DOUBLE) {

      Config for the lockless (cmpxchg-double) fast path?

    9. if (s->flags & __CMPXCHG_DOUBLE) {

      Fast path config

    10. set the number of slabs per cpu

    1. int calculate_normal_threshold(struct zone *zone) { int threshold; int mem; /* memory in 128 MB units */ /* * The threshold scales with the number of processors and the amount * of memory per zone. More memory means that we can defer updates for * longer, more processors could lead to more contention. * fls() is used to have a cheap way of logarithmic scaling. * * Some sample thresholds: * * Threshold Processors (fls) Zonesize fls(mem)+1 * ------------------------------------------------------------------ * 8 1 1 0.9-1 GB 4 * 16 2 2 0.9-1 GB 4 * 20 2 2 1-2 GB 5 * 24 2 2 2-4 GB 6 * 28 2 2 4-8 GB 7 * 32 2 2 8-16 GB 8 * 4 2 2 <128M 1 * 30 4 3 2-4 GB 5 * 48 4 3 8-16 GB 8 * 32 8 4 1-2 GB 4 * 32 8 4 0.9-1GB 4 * 10 16 5 <128M 1 * 40 16 5 900M 4 * 70 64 7 2-4 GB 5 * 84 64 7 4-8 GB 6 * 108 512 9 4-8 GB 6 * 125 1024 10 8-16 GB 8 * 125 1024 10 16-32 GB 9 */ mem = zone_managed_pages(zone) >> (27 - PAGE_SHIFT); threshold = 2 * fls(num_online_cpus()) * (1 + fls(mem)); /* * Maximum threshold is 125 */ threshold = min(125, threshold); return threshold; }

      a "magic" formula for computing the amount of memory per zone.

  5. May 2024
    1. mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;

      Policy application: computing the map size, in pages, for the mem_section usage data?
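
      The expression is just a bytes-to-pages round-up; a tiny sketch assuming 4 KiB pages and an illustrative size for the usage data:

        #include <stdio.h>

        #define PAGE_SHIFT 12
        #define PAGE_SIZE  (1UL << PAGE_SHIFT)
        #define PAGE_ALIGN(x) (((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))

        int main(void)
        {
                unsigned long usage_bytes = 5000;   /* illustrative size of the usage data */
                unsigned long mapsize = PAGE_ALIGN(usage_bytes) >> PAGE_SHIFT;

                /* 5000 bytes round up to 8192 bytes, i.e. 2 pages. */
                printf("mapsize = %lu pages\n", mapsize);
                return 0;
        }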