257 Matching Annotations
  1. Last 7 days
  2. Nov 2024
    1. We now realize the base pairs come to join each other up as the system unravels and forms a new pair of DNA molecules. Well, up to a point it does, and that point is known to be accurate to about one in 10,000 base pairs. Now, if you and I wrote an article and there was only one typo in a 10,000-word article, we'd be very pleased. But this is nowhere near enough for a DNA sequence of three billion base pairs: there would be at least half a million errors.

      for - DNA replication accuracy - 1 in 10,000 error rate - too high for successful replication - another, higher-level mechanism corrects these errors - need a whole body for that - Denis Noble
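
      The claimed error count is simple arithmetic, which a minimal sketch can check (the helper name is hypothetical): a 3-billion-base-pair haploid genome copied at one error per 10,000 base pairs yields about 300,000 errors, and a 6-billion-base-pair diploid copy yields about 600,000 — the "half a million at least" order of magnitude in the quote.

```c
#include <assert.h>

/* Hypothetical helper for the back-of-envelope estimate above:
 * expected copying errors for a genome of n_bases when one error
 * occurs per `accuracy` base pairs. */
static long long expected_errors(long long n_bases, long long accuracy)
{
    return n_bases / accuracy;
}
```
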

    1. for - AI - progress trap - interview Eric Schmidt - meme - AI progress trap - high intelligence + low compassion = existential threat

      Summary - After watching the interview, I would sum it up this way. Humanity faces an existential threat from AI because:
      - AI is an extreme concentration of power and intelligence (NOT wisdom!)
      - Humanity still has many traumatized people who want to harm others - low compassion
      - The deadly combination is:
        - proliferation of tools that give anyone an extreme concentration of power and intelligence, combined with
        - a sufficiently high percentage of traumatized people with
        - low levels of compassion and
        - high levels of unlimited aggression
      - All it takes is ONE bad actor with the right combination of circumstances and conditions to wreak harm on a global scale, and that will not be prevented by millions of good applications of the same technology

    1. #if defined(CONFIG_ARCH_HAS_PTE_DEVMAP) && defined(CONFIG_TRANSPARENT_HUGEPAGE)
       static int __gup_device_huge(unsigned long pfn, unsigned long addr,
                                    unsigned long end, unsigned int flags,
                                    struct page **pages, int *nr)
       {
           int nr_start = *nr;
           struct dev_pagemap *pgmap = NULL;

           do {
               struct page *page = pfn_to_page(pfn);

               pgmap = get_dev_pagemap(pfn, pgmap);
               if (unlikely(!pgmap)) {
                   undo_dev_pagemap(nr, nr_start, flags, pages);
                   break;
               }
               if (!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)) {
                   undo_dev_pagemap(nr, nr_start, flags, pages);
                   break;
               }
               SetPageReferenced(page);
               pages[*nr] = page;
               if (unlikely(try_grab_page(page, flags))) {
                   undo_dev_pagemap(nr, nr_start, flags, pages);
                   break;
               }
               (*nr)++;
               pfn++;
           } while (addr += PAGE_SIZE, addr != end);

           put_dev_pagemap(pgmap);
           return addr == end;
       }

       static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
                                        unsigned long end, unsigned int flags,
                                        struct page **pages, int *nr)
       {
           unsigned long fault_pfn;
           int nr_start = *nr;

           fault_pfn = pmd_pfn(orig) + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
           if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
               return 0;

           if (unlikely(pmd_val(orig) != pmd_val(*pmdp))) {
               undo_dev_pagemap(nr, nr_start, flags, pages);
               return 0;
           }
           return 1;
       }

       static int __gup_device_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
                                        unsigned long end, unsigned int flags,
                                        struct page **pages, int *nr)
       {
           unsigned long fault_pfn;
           int nr_start = *nr;

           fault_pfn = pud_pfn(orig) + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
           if (!__gup_device_huge(fault_pfn, addr, end, flags, pages, nr))
               return 0;

           if (unlikely(pud_val(orig) != pud_val(*pudp))) {
               undo_dev_pagemap(nr, nr_start, flags, pages);
               return 0;
           }
           return 1;
       }
       #else
       static int __gup_device_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
                                        unsigned long end, unsigned int flags,
                                        struct page **pages, int *nr)
       {
           BUILD_BUG();
           return 0;
       }

       static int __gup_device_huge_pud(pud_t pud, pud_t *pudp, unsigned long addr,
                                        unsigned long end, unsigned int flags,
                                        struct page **pages, int *nr)
       {
           BUILD_BUG();
           return 0;
       }
       #endif

      Walks device (devmap) huge pages, grabbing each page and recording it into pages[]; on any failure it undoes the partial batch. The PMD/PUD variants recheck the entry afterwards to detect concurrent changes.
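
      The record-then-roll-back pattern here can be sketched in userspace C (a simplified simulation, not kernel code; names are hypothetical): record items while each grab succeeds, and on the first failure roll the counter back to where the batch started, as undo_dev_pagemap() does.

```c
#include <assert.h>
#include <stdbool.h>

/* Simulate collecting a batch of "pages": grab_ok[i] says whether
 * grabbing item i succeeds. On failure, reset *nr to the batch start
 * so the caller sees none of the partial batch. */
static int collect_batch(const bool *grab_ok, int n, int *out, int *nr)
{
    int nr_start = *nr;

    for (int i = 0; i < n; i++) {
        if (!grab_ok[i]) {
            *nr = nr_start;      /* undo the whole partial batch */
            return 0;
        }
        out[(*nr)++] = i;        /* record the grabbed item */
    }
    return 1;
}
```
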

    2. #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
       /*
        * Fast-gup relies on pte change detection to avoid concurrent pgtable
        * operations.
        *
        * To pin the page, fast-gup needs to do below in order:
        * (1) pin the page (by prefetching pte), then (2) check pte not changed.
        *
        * For the rest of pgtable operations where pgtable updates can be racy
        * with fast-gup, we need to do (1) clear pte, then (2) check whether page
        * is pinned.
        *
        * Above will work for all pte-level operations, including THP split.
        *
        * For THP collapse, it's a bit more complicated because fast-gup may be
        * walking a pgtable page that is being freed (pte is still valid but pmd
        * can be cleared already). To avoid race in such condition, we need to
        * also check pmd here to make sure pmd doesn't change (corresponds to
        * pmdp_collapse_flush() in the THP collapse code path).
        */
       static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
                                unsigned long end, unsigned int flags,
                                struct page **pages, int *nr)
       {
           struct dev_pagemap *pgmap = NULL;
           int nr_start = *nr, ret = 0;
           pte_t *ptep, *ptem;

           ptem = ptep = pte_offset_map(&pmd, addr);
           if (!ptep)
               return 0;
           do {
               pte_t pte = ptep_get_lockless(ptep);
               struct page *page;
               struct folio *folio;

               /*
                * Always fallback to ordinary GUP on PROT_NONE-mapped pages:
                * pte_access_permitted() better should reject these pages
                * either way: otherwise, GUP-fast might succeed in
                * cases where ordinary GUP would fail due to VMA access
                * permissions.
                */
               if (pte_protnone(pte))
                   goto pte_unmap;

               if (!pte_access_permitted(pte, flags & FOLL_WRITE))
                   goto pte_unmap;

               if (pte_devmap(pte)) {
                   if (unlikely(flags & FOLL_LONGTERM))
                       goto pte_unmap;

                   pgmap = get_dev_pagemap(pte_pfn(pte), pgmap);
                   if (unlikely(!pgmap)) {
                       undo_dev_pagemap(nr, nr_start, flags, pages);
                       goto pte_unmap;
                   }
               } else if (pte_special(pte))
                   goto pte_unmap;

               VM_BUG_ON(!pfn_valid(pte_pfn(pte)));
               page = pte_page(pte);

               folio = try_grab_folio(page, 1, flags);
               if (!folio)
                   goto pte_unmap;

               if (unlikely(folio_is_secretmem(folio))) {
                   gup_put_folio(folio, 1, flags);
                   goto pte_unmap;
               }

               if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
                   unlikely(pte_val(pte) != pte_val(ptep_get(ptep)))) {
                   gup_put_folio(folio, 1, flags);
                   goto pte_unmap;
               }

               if (!folio_fast_pin_allowed(folio, flags)) {
                   gup_put_folio(folio, 1, flags);
                   goto pte_unmap;
               }

               if (!pte_write(pte) && gup_must_unshare(NULL, flags, page)) {
                   gup_put_folio(folio, 1, flags);
                   goto pte_unmap;
               }

               /*
                * We need to make the page accessible if and only if we are
                * going to access its content (the FOLL_PIN case). Please
                * see Documentation/core-api/pin_user_pages.rst for
                * details.
                */
               if (flags & FOLL_PIN) {
                   ret = arch_make_page_accessible(page);
                   if (ret) {
                       gup_put_folio(folio, 1, flags);
                       goto pte_unmap;
                   }
               }
               folio_set_referenced(folio);
               pages[*nr] = page;
               (*nr)++;
           } while (ptep++, addr += PAGE_SIZE, addr != end);

           ret = 1;

       pte_unmap:
           if (pgmap)
               put_dev_pagemap(pgmap);
           pte_unmap(ptem);
           return ret;
       }
       #else
       /*
        * If we can't determine whether or not a pte is special, then fail immediately
        * for ptes. Note, we can still pin HugeTLB and THP as these are guaranteed not
        * to be special.
        *
        * For a futex to be placed on a THP tail page, get_futex_key requires a
        * get_user_pages_fast_only implementation that can pin pages. Thus it's still
        * useful to have gup_huge_pmd even if we can't operate on ptes.
        */
       static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
                                unsigned long end, unsigned int flags,
                                struct page **pages, int *nr)
       {
           return 0;
       }
       #endif /* CONFIG_ARCH_HAS_PTE_SPECIAL */

      Lockless fast-GUP path that stays safe against concurrent page-table operations via change detection: it grabs the folio, then rechecks that neither the PMD nor the PTE changed underneath; on any doubt it backs out and falls back to ordinary GUP.

    3. #ifdef CONFIG_HAVE_FAST_GUP
       /*
        * Used in the GUP-fast path to determine whether a pin is permitted for a
        * specific folio.
        *
        * This call assumes the caller has pinned the folio, that the lowest page table
        * level still points to this folio, and that interrupts have been disabled.
        *
        * Writing to pinned file-backed dirty tracked folios is inherently problematic
        * (see comment describing the writable_file_mapping_allowed() function). We
        * therefore try to avoid the most egregious case of a long-term mapping doing
        * so.
        *
        * This function cannot be as thorough as that one as the VMA is not available
        * in the fast path, so instead we whitelist known good cases and if in doubt,
        * fall back to the slow path.
        */
       static bool folio_fast_pin_allowed(struct folio *folio, unsigned int flags)
       {
           struct address_space *mapping;
           unsigned long mapping_flags;

           /*
            * If we aren't pinning then no problematic write can occur. A long term
            * pin is the most egregious case so this is the one we disallow.
            */
           if ((flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) !=
               (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE))
               return true;

           /* The folio is pinned, so we can safely access folio fields. */

           if (WARN_ON_ONCE(folio_test_slab(folio)))
               return false;

           /* hugetlb mappings do not require dirty-tracking. */
           if (folio_test_hugetlb(folio))
               return true;

           /*
            * GUP-fast disables IRQs. When IRQS are disabled, RCU grace periods
            * cannot proceed, which means no actions performed under RCU can
            * proceed either.
            *
            * inodes and thus their mappings are freed under RCU, which means the
            * mapping cannot be freed beneath us and thus we can safely dereference
            * it.
            */
           lockdep_assert_irqs_disabled();

           /*
            * However, there may be operations which _alter_ the mapping, so ensure
            * we read it once and only once.
            */
           mapping = READ_ONCE(folio->mapping);

           /*
            * The mapping may have been truncated, in any case we cannot determine
            * if this mapping is safe - fall back to slow path to determine how to
            * proceed.
            */
           if (!mapping)
               return false;

           /* Anonymous folios pose no problem. */
           mapping_flags = (unsigned long)mapping & PAGE_MAPPING_FLAGS;
           if (mapping_flags)
               return mapping_flags & PAGE_MAPPING_ANON;

           /*
            * At this point, we know the mapping is non-null and points to an
            * address_space object. The only remaining whitelisted file system is
            * shmem.
            */
           return shmem_mapping(mapping);
       }

      Policy logic. Runs in the fast path with IRQs disabled and without the VMA available, so unlike get_user_pages locked/unlocked it cannot do thorough checks: it whitelists known-safe cases (anon, hugetlb, shmem) and otherwise falls back to the slow path.
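
      The first gate of this whitelist is a pure flag-mask test, sketched below in userspace C. The FOLL_* values here are illustrative placeholders, not the kernel's real definitions: only a long-term, writable pin needs the further checks; every other combination passes immediately.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative placeholder values; the real FOLL_* flags live in the kernel. */
#define FOLL_WRITE    0x01u
#define FOLL_PIN      0x02u
#define FOLL_LONGTERM 0x04u

/* Sketch of the first gate in folio_fast_pin_allowed(): returns true only
 * when ALL THREE problematic flags are set together. */
static bool needs_further_checks(unsigned int flags)
{
    return (flags & (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE)) ==
           (FOLL_PIN | FOLL_LONGTERM | FOLL_WRITE);
}
```

      Note the `(flags & mask) == mask` idiom: it tests that every bit in the mask is set, not merely that some bit is.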

    4. long get_user_pages(unsigned long start, unsigned long nr_pages,
                           unsigned int gup_flags, struct page **pages)
       {
           int locked = 1;

           if (!is_valid_gup_args(pages, NULL, &gup_flags, FOLL_TOUCH))
               return -EINVAL;

           return __get_user_pages_locked(current->mm, start, nr_pages, pages,
                                          &locked, gup_flags);
       }

      policy logic.

    5. long get_user_pages_remote(struct mm_struct *mm,
                                  unsigned long start, unsigned long nr_pages,
                                  unsigned int gup_flags, struct page **pages,
                                  int *locked)
       {
           int local_locked = 1;

           if (!is_valid_gup_args(pages, locked, &gup_flags,
                                  FOLL_TOUCH | FOLL_REMOTE))
               return -EINVAL;

           return __get_user_pages_locked(mm, start, nr_pages, pages,
                                          locked ? locked : &local_locked,
                                          gup_flags);
       }

      policy logic

    6. static __always_inline long __get_user_pages_locked(struct mm_struct *mm,
                                                           unsigned long start,
                                                           unsigned long nr_pages,
                                                           struct page **pages,
                                                           int *locked,
                                                           unsigned int flags)
       {
           long ret, pages_done;
           bool must_unlock = false;

           /*
            * The internal caller expects GUP to manage the lock internally and the
            * lock must be released when this returns.
            */
           if (!*locked) {
               if (mmap_read_lock_killable(mm))
                   return -EAGAIN;
               must_unlock = true;
               *locked = 1;
           } else
               mmap_assert_locked(mm);

           if (flags & FOLL_PIN)
               mm_set_has_pinned_flag(&mm->flags);

           /*
            * FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
            * is to set FOLL_GET if the caller wants pages[] filled in (but has
            * carelessly failed to specify FOLL_GET), so keep doing that, but only
            * for FOLL_GET, not for the newer FOLL_PIN.
            *
            * FOLL_PIN always expects pages to be non-null, but no need to assert
            * that here, as any failures will be obvious enough.
            */
           if (pages && !(flags & FOLL_PIN))
               flags |= FOLL_GET;

           pages_done = 0;
           for (;;) {
               ret = __get_user_pages(mm, start, nr_pages, flags, pages, locked);
               if (!(flags & FOLL_UNLOCKABLE)) {
                   /* VM_FAULT_RETRY couldn't trigger, bypass */
                   pages_done = ret;
                   break;
               }

               /* VM_FAULT_RETRY or VM_FAULT_COMPLETED cannot return errors */
               if (!*locked) {
                   BUG_ON(ret < 0);
                   BUG_ON(ret >= nr_pages);
               }

               if (ret > 0) {
                   nr_pages -= ret;
                   pages_done += ret;
                   if (!nr_pages)
                       break;
               }
               if (*locked) {
                   /*
                    * VM_FAULT_RETRY didn't trigger or it was a
                    * FOLL_NOWAIT.
                    */
                   if (!pages_done)
                       pages_done = ret;
                   break;
               }
               /*
                * VM_FAULT_RETRY triggered, so seek to the faulting offset.
                * For the prefault case (!pages) we only update counts.
                */
               if (likely(pages))
                   pages += ret;
               start += ret << PAGE_SHIFT;

               /* The lock was temporarily dropped, so we must unlock later */
               must_unlock = true;

       retry:
               /*
                * Repeat on the address that fired VM_FAULT_RETRY
                * with both FAULT_FLAG_ALLOW_RETRY and
                * FAULT_FLAG_TRIED. Note that GUP can be interrupted
                * by fatal signals of even common signals, depending on
                * the caller's request. So we need to check it before we
                * start trying again otherwise it can loop forever.
                */
               if (gup_signal_pending(flags)) {
                   if (!pages_done)
                       pages_done = -EINTR;
                   break;
               }

               ret = mmap_read_lock_killable(mm);
               if (ret) {
                   BUG_ON(ret > 0);
                   if (!pages_done)
                       pages_done = ret;
                   break;
               }

               *locked = 1;
               ret = __get_user_pages(mm, start, 1, flags | FOLL_TRIED,
                                      pages, locked);
               if (!*locked) {
                   /* Continue to retry until we succeeded */
                   BUG_ON(ret != 0);
                   goto retry;
               }
               if (ret != 1) {
                   BUG_ON(ret > 1);
                   if (!pages_done)
                       pages_done = ret;
                   break;
               }
               nr_pages--;
               pages_done++;
               if (!nr_pages)
                   break;
               if (likely(pages))
                   pages++;
               start += PAGE_SIZE;
           }
           if (must_unlock && *locked) {
               /*
                * We either temporarily dropped the lock, or the caller
                * requested that we both acquire and drop the lock. Either way,
                * we must now unlock, and notify the caller of that state.
                */
               mmap_read_unlock(mm);
               *locked = 0;
           }
           return pages_done;
       }

      Same as __get_user_pages, but also takes/drops mmap_lock as needed and retries the faulting address after VM_FAULT_RETRY.
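
      The lock-ownership bookkeeping in this wrapper can be sketched as a userspace simulation (all names hypothetical): if the caller did not hold the lock, the function takes it and remembers, via must_unlock, that it must drop it before returning and report that state back through *locked.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in for the mm and its mmap_lock. */
struct mm_sim { bool locked; };

/* Simulates the lock handling around the GUP work: returns the value of
 * *locked that the caller would observe afterwards. */
static bool locked_section(struct mm_sim *mm, bool caller_held_lock)
{
    bool must_unlock = false;
    bool locked = caller_held_lock;

    if (!locked) {             /* internal caller: take the lock ourselves */
        mm->locked = true;
        must_unlock = true;
        locked = true;
    }

    /* ... the __get_user_pages() work would happen here ... */

    if (must_unlock && locked) {
        mm->locked = false;    /* drop it and notify the caller */
        locked = false;
    }
    return locked;
}
```
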

    7. int fixup_user_fault(struct mm_struct *mm, unsigned long address,
                            unsigned int fault_flags, bool *unlocked)
       {
           struct vm_area_struct *vma;
           vm_fault_t ret;

           address = untagged_addr_remote(mm, address);
           if (unlocked)
               fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;

       retry:
           vma = gup_vma_lookup(mm, address);
           if (!vma)
               return -EFAULT;

           if (!vma_permits_fault(vma, fault_flags))
               return -EFAULT;

           if ((fault_flags & FAULT_FLAG_KILLABLE) &&
               fatal_signal_pending(current))
               return -EINTR;

           ret = handle_mm_fault(vma, address, fault_flags, NULL);

           if (ret & VM_FAULT_COMPLETED) {
               /*
                * NOTE: it's a pity that we need to retake the lock here
                * to pair with the unlock() in the callers. Ideally we
                * could tell the callers so they do not need to unlock.
                */
               mmap_read_lock(mm);
               *unlocked = true;
               return 0;
           }

           if (ret & VM_FAULT_ERROR) {
               int err = vm_fault_to_errno(ret, 0);

               if (err)
                   return err;
               BUG();
           }

           if (ret & VM_FAULT_RETRY) {
               mmap_read_lock(mm);
               *unlocked = true;
               fault_flags |= FAULT_FLAG_TRIED;
               goto retry;
           }

           return 0;
       }

      resolves user page fault. policy logic

    8. static long __get_user_pages(struct mm_struct *mm, unsigned long start,
                                    unsigned long nr_pages, unsigned int gup_flags,
                                    struct page **pages, int *locked)
       {
           long ret = 0, i = 0;
           struct vm_area_struct *vma = NULL;
           struct follow_page_context ctx = { NULL };

           if (!nr_pages)
               return 0;

           start = untagged_addr_remote(mm, start);

           VM_BUG_ON(!!pages != !!(gup_flags & (FOLL_GET | FOLL_PIN)));

           do {
               struct page *page;
               unsigned int foll_flags = gup_flags;
               unsigned int page_increm;

               /* first iteration or cross vma bound */
               if (!vma || start >= vma->vm_end) {
                   /*
                    * MADV_POPULATE_(READ|WRITE) wants to handle VMA
                    * lookups+error reporting differently.
                    */
                   if (gup_flags & FOLL_MADV_POPULATE) {
                       vma = vma_lookup(mm, start);
                       if (!vma) {
                           ret = -ENOMEM;
                           goto out;
                       }
                       if (check_vma_flags(vma, gup_flags)) {
                           ret = -EINVAL;
                           goto out;
                       }
                       goto retry;
                   }
                   vma = gup_vma_lookup(mm, start);
                   if (!vma && in_gate_area(mm, start)) {
                       ret = get_gate_page(mm, start & PAGE_MASK,
                                           gup_flags, &vma,
                                           pages ? &page : NULL);
                       if (ret)
                           goto out;
                       ctx.page_mask = 0;
                       goto next_page;
                   }

                   if (!vma) {
                       ret = -EFAULT;
                       goto out;
                   }
                   ret = check_vma_flags(vma, gup_flags);
                   if (ret)
                       goto out;
               }
       retry:
               /*
                * If we have a pending SIGKILL, don't keep faulting pages and
                * potentially allocating memory.
                */
               if (fatal_signal_pending(current)) {
                   ret = -EINTR;
                   goto out;
               }
               cond_resched();

               page = follow_page_mask(vma, start, foll_flags, &ctx);
               if (!page || PTR_ERR(page) == -EMLINK) {
                   ret = faultin_page(vma, start, &foll_flags,
                                      PTR_ERR(page) == -EMLINK, locked);
                   switch (ret) {
                   case 0:
                       goto retry;
                   case -EBUSY:
                   case -EAGAIN:
                       ret = 0;
                       fallthrough;
                   case -EFAULT:
                   case -ENOMEM:
                   case -EHWPOISON:
                       goto out;
                   }
                   BUG();
               } else if (PTR_ERR(page) == -EEXIST) {
                   /*
                    * Proper page table entry exists, but no corresponding
                    * struct page. If the caller expects **pages to be
                    * filled in, bail out now, because that can't be done
                    * for this page.
                    */
                   if (pages) {
                       ret = PTR_ERR(page);
                       goto out;
                   }
               } else if (IS_ERR(page)) {
                   ret = PTR_ERR(page);
                   goto out;
               }
       next_page:
               page_increm = 1 + (~(start >> PAGE_SHIFT) & ctx.page_mask);
               if (page_increm > nr_pages)
                   page_increm = nr_pages;

               if (pages) {
                   struct page *subpage;
                   unsigned int j;

                   /*
                    * This must be a large folio (and doesn't need to
                    * be the whole folio; it can be part of it), do
                    * the refcount work for all the subpages too.
                    *
                    * NOTE: here the page may not be the head page
                    * e.g. when start addr is not thp-size aligned.
                    * try_grab_folio() should have taken care of tail
                    * pages.
                    */
                   if (page_increm > 1) {
                       struct folio *folio;

                       /*
                        * Since we already hold refcount on the
                        * large folio, this should never fail.
                        */
                       folio = try_grab_folio(page, page_increm - 1,
                                              foll_flags);
                       if (WARN_ON_ONCE(!folio)) {
                           /*
                            * Release the 1st page ref if the
                            * folio is problematic, fail hard.
                            */
                           gup_put_folio(page_folio(page), 1,
                                         foll_flags);
                           ret = -EFAULT;
                           goto out;
                       }
                   }

                   for (j = 0; j < page_increm; j++) {
                       subpage = nth_page(page, j);
                       pages[i + j] = subpage;
                       flush_anon_page(vma, subpage, start + j * PAGE_SIZE);
                       flush_dcache_page(subpage);
                   }
               }

               i += page_increm;
               start += page_increm * PAGE_SIZE;
               nr_pages -= page_increm;
           } while (nr_pages);
       out:
           if (ctx.pgmap)
               put_dev_pagemap(ctx.pgmap);
           return i ? i : ret;
       }

      Literally the actual policy logic of gup. Most important piece of code right here for gup

    9. static bool writable_file_mapping_allowed(struct vm_area_struct *vma,
                                                 unsigned long gup_flags)
       {
           /*
            * If we aren't pinning then no problematic write can occur. A long term
            * pin is the most egregious case so this is the case we disallow.
            */
           if ((gup_flags & (FOLL_PIN | FOLL_LONGTERM)) !=
               (FOLL_PIN | FOLL_LONGTERM))
               return true;

           /*
            * If the VMA does not require dirty tracking then no problematic write
            * can occur either.
            */
           return !vma_needs_dirty_tracking(vma);
       }

      Definitely policy code: checks whether a writable long-term pin of a file mapping is allowed.

    10. /* user gate pages are read-only */
        if (gup_flags & FOLL_WRITE)
            return -EFAULT;
        if (address > TASK_SIZE)
            pgd = pgd_offset_k(address);
        else
            pgd = pgd_offset_gate(mm, address);
        if (pgd_none(*pgd))
            return -EFAULT;
        p4d = p4d_offset(pgd, address);
        if (p4d_none(*p4d))
            return -EFAULT;
        pud = pud_offset(p4d, address);
        if (pud_none(*pud))
            return -EFAULT;
        pmd = pmd_offset(pud, address);
        if (!pmd_present(*pmd))
            return -EFAULT;
        pte = pte_offset_map(pmd, address);
        if (!pte)
            return -EFAULT;
        entry = ptep_get(pte);
        if (pte_none(entry))
            goto unmap;
        *vma = get_gate_vma(mm);
        if (!page)
            goto out;
        *page = vm_normal_page(*vma, address, entry);
        if (!*page) {
            if ((gup_flags & FOLL_DUMP) ||
                !is_zero_pfn(pte_pfn(entry)))
                goto unmap;
            *page = pte_page(entry);
        }
        ret = try_grab_page(*page, gup_flags);
        if (unlikely(ret))
            goto unmap;

      Most of these are sanity checks on the page-table walk, right up to the 'if (!*page)' test (line 897 in the kernel source). The unmap label is the failure/cleanup path, not a normal unmapping of the page.

    11. static struct page *follow_page_mask(struct vm_area_struct *vma,
                                             unsigned long address,
                                             unsigned int flags,
                                             struct follow_page_context *ctx)
        {
            pgd_t *pgd;
            struct mm_struct *mm = vma->vm_mm;

            ctx->page_mask = 0;

            /*
             * Call hugetlb_follow_page_mask for hugetlb vmas as it will use
             * special hugetlb page table walking code. This eliminates the
             * need to check for hugetlb entries in the general walking code.
             */
            if (is_vm_hugetlb_page(vma))
                return hugetlb_follow_page_mask(vma, address, flags,
                                                &ctx->page_mask);

            pgd = pgd_offset(mm, address);

            if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
                return no_page_table(vma, flags);

            return follow_p4d_mask(vma, address, pgd, flags, ctx);
        }

      Starts the page-table walk at the top level (dispatching hugetlb VMAs to their own walker) and records the page mask in ctx as it follows the page.

    12. struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
                                 unsigned int foll_flags)
        {
            struct follow_page_context ctx = { NULL };
            struct page *page;

            if (vma_is_secretmem(vma))
                return NULL;

            if (WARN_ON_ONCE(foll_flags & FOLL_PIN))
                return NULL;

            /*
             * We never set FOLL_HONOR_NUMA_FAULT because callers don't expect
             * to fail on PROT_NONE-mapped pages.
             */
            page = follow_page_mask(vma, address, foll_flags, &ctx);
            if (ctx.pgmap)
                put_dev_pagemap(ctx.pgmap);
            return page;
        }

      finds page

    13. if (flags & FOLL_SPLIT_PMD) {
            spin_unlock(ptl);
            split_huge_pmd(vma, pmd, address);
            /* If pmd was left empty, stuff a page table in there quickly */
            return pte_alloc(mm, pmd) ? ERR_PTR(-ENOMEM) :
                follow_page_pte(vma, address, pmd, flags, &ctx->pgmap);
        }
        page = follow_trans_huge_pmd(vma, address, pmd, flags);
        spin_unlock(ptl);
        ctx->page_mask = HPAGE_PMD_NR - 1;
        return page;

      Handles the huge-PMD case: either splits the PMD and retries at the PTE level, or follows the trans-huge PMD directly, storing the huge-page mask in ctx.

    14. /* FOLL_GET and FOLL_PIN are mutually exclusive. */
        if (WARN_ON_ONCE((flags & (FOLL_PIN | FOLL_GET)) ==
                         (FOLL_PIN | FOLL_GET)))
            return ERR_PTR(-EINVAL);

        ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
        if (!ptep)
            return no_page_table(vma, flags);
        pte = ptep_get(ptep);
        if (!pte_present(pte))
            goto no_page;
        if (pte_protnone(pte) && !gup_can_follow_protnone(vma, flags))
            goto no_page;

        page = vm_normal_page(vma, address, pte);

        /*
         * We only care about anon pages in can_follow_write_pte() and don't
         * have to worry about pte_devmap() because they are never anon.
         */
        if ((flags & FOLL_WRITE) &&
            !can_follow_write_pte(pte, page, vma, flags)) {
            page = NULL;
            goto out;
        }

        if (!page && pte_devmap(pte) && (flags & (FOLL_GET | FOLL_PIN))) {
            /*
             * Only return device mapping pages in the FOLL_GET or FOLL_PIN
             * case since they are only valid while holding the pgmap
             * reference.
             */
            *pgmap = get_dev_pagemap(pte_pfn(pte), *pgmap);
            if (*pgmap)
                page = pte_page(pte);
            else
                goto no_page;
        } else if (unlikely(!page)) {
            if (flags & FOLL_DUMP) {
                /* Avoid special (like zero) pages in core dumps */
                page = ERR_PTR(-EFAULT);
                goto out;
            }

            if (is_zero_pfn(pte_pfn(pte))) {
                page = pte_page(pte);
            } else {
                ret = follow_pfn_pte(vma, address, ptep, flags);
                page = ERR_PTR(ret);
                goto out;
            }
        }

        if (!pte_write(pte) && gup_must_unshare(vma, flags, page)) {
            page = ERR_PTR(-EMLINK);
            goto out;
        }

        VM_BUG_ON_PAGE((flags & FOLL_PIN) && PageAnon(page) &&
                       !PageAnonExclusive(page), page);

        /* try_grab_page() does nothing unless FOLL_GET or FOLL_PIN is set. */
        ret = try_grab_page(page, flags);
        if (unlikely(ret)) {
            page = ERR_PTR(ret);
            goto out;
        }

        /*
         * We need to make the page accessible if and only if we are going
         * to access its content (the FOLL_PIN case). Please see
         * Documentation/core-api/pin_user_pages.rst for details.
         */
        if (flags & FOLL_PIN) {
            ret = arch_make_page_accessible(page);
            if (ret) {
                unpin_user_page(page);
                page = ERR_PTR(ret);
                goto out;
            }
        }
        if (flags & FOLL_TOUCH) {
            if ((flags & FOLL_WRITE) &&
                !pte_dirty(pte) && !PageDirty(page))
                set_page_dirty(page);
            /*
             * pte_mkyoung() would be more correct here, but atomic care
             * is needed to avoid losing the dirty bit: it is easier to use
             * mark_page_accessed().
             */
            mark_page_accessed(page);
        }

      Finds the page via its PTE. Judging by the complexity of the logic, this is most likely policy code, since this is where we literally get the user page.

    15. if (flags & FOLL_TOUCH) {
            pte_t orig_entry = ptep_get(pte);
            pte_t entry = orig_entry;

            if (flags & FOLL_WRITE)
                entry = pte_mkdirty(entry);
            entry = pte_mkyoung(entry);

            if (!pte_same(orig_entry, entry)) {
                set_pte_at(vma->vm_mm, address, pte, entry);
                update_mmu_cache(vma, address, pte);
            }

      Under FOLL_TOUCH, updates the PTE itself: marks it dirty (for writes) and young, then writes it back and updates the MMU cache.

    16. if (unlikely(page_folio(page) != folio)) {
            if (!put_devmap_managed_page_refs(&folio->page, refs))
                folio_put_refs(folio, refs);
            goto retry;

      Rechecks that the page still belongs to the folio after the reference was taken; if the folio changed underneath, the refs are dropped and the lookup is retried. This is part of the function that retrieves the folio and confirms it is still associated with the page.

    17. folio = page_folio(page);
        if (WARN_ON_ONCE(folio_ref_count(folio) < 0))
            return NULL;
        if (unlikely(!folio_ref_try_add(folio, refs)))
            return NULL;

      These increment the folio's reference count, since a reference to the folio is being returned. Important function, so the internal logic that follows matters.
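
      The speculative-reference rule here can be sketched in userspace C (a simplified simulation with a plain int instead of the kernel's atomic refcount; names are hypothetical): never take references on an object whose count has already reached zero, since it may be mid-teardown.

```c
#include <assert.h>

/* Simulated folio_ref_try_add(): refuse to resurrect a dying folio
 * (refcount already at or below zero); otherwise add the references. */
static int folio_ref_try_add_sim(int *refcount, int refs)
{
    if (*refcount <= 0)
        return 0;           /* too late: object may be being freed */
    *refcount += refs;
    return 1;
}
```

      The real kernel version does this atomically; the point is only the policy: a zero count is a point of no return.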

    18. if (flags & FOLL_GET)
            return try_get_folio(page, refs);

      Policy logic that determines and tries to retrieve folios based on given flags.

    19. if (flags & FOLL_PIN) {
            ret = arch_make_page_accessible(page);
            if (ret) {
                gup_put_folio(folio, 1, flags);
                goto pte_unmap;
            }
        }

      part of policy code

    20. if (!(gup_flags & FOLL_LONGTERM))
            return __get_user_pages_locked(mm, start, nr_pages, pages,
                                           locked, gup_flags);

      policy decision to get locked page!

    21. if (page_increm > nr_pages)
            page_increm = nr_pages;

      next page logic
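
      The next-page step this clamp belongs to is a neat bit of mask arithmetic, sketched below in userspace C (PAGE_SHIFT fixed at 12 for illustration): page_increm counts how many base pages remain from start to the end of the current huge page, where page_mask is e.g. HPAGE_PMD_NR - 1 (511 for 2 MiB huge pages over 4 KiB base pages).

```c
#include <assert.h>

#define PAGE_SHIFT 12  /* 4 KiB base pages, for illustration */

/* Mirrors `1 + (~(start >> PAGE_SHIFT) & page_mask)` from the next_page:
 * label: the number of base pages left until the current huge page ends. */
static unsigned int pages_to_huge_page_end(unsigned long start,
                                           unsigned long page_mask)
{
    return 1 + (unsigned int)(~(start >> PAGE_SHIFT) & page_mask);
}
```

      With page_mask == 0 (a normal page) this is always 1, which is why the same expression serves both the small-page and huge-page cases.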

    22. if (gup_flags & FOLL_MADV_POPULATE) {
            vma = vma_lookup(mm, start);
            if (!vma) {
                ret = -ENOMEM;
                goto out;
            }
            if (check_vma_flags(vma, gup_flags)) {
                ret = -EINVAL;
                goto out;
            }
            goto retry;
        }

      page populate flag for sure

    1. A new study confirms that the main cause of the ever-faster rise of atmospheric methane levels is the activity of microorganisms, which increases with global heating. This is thus a feedback mechanism through which global heating amplifies itself. https://taz.de/Zu-viel-Methan-in-der-Atmosphaere/!6045201/

      Study: https://www.pnas.org/doi/10.1073/pnas.2411212121

      Preceding studies: https://www.nature.com/articles/s41558-023-01629-0, https://www.nature.com/articles/s41558-022-01296-7.epdf?sharing_token=CDMa5-ti34UNBqv3kfuCB9RgN0jAjWel9jnR3ZoTv0NZRKXEI-7kyXEEvNI7duu65JLcZpmhGxWTeSfYcMCqxqYk5nUrdR60izmjToMNw56RgBqIcn3JXKxSjx13vmB9ZYndGTUMt-52Vs7HT_T6K9Oth4QFRyP51eOpz8pV8l65HFDo2VSfQ6xDXklMtmvt-HGwltAINb_2xgmtAR-V4g%3D%3D&tracking_referrer=taz.de

    1. Each one of those stages (there are four such stages, and we can say there is an equal number of stages above) has a sacred version and a version in which the sacred is lost.

      for - wisdom stages - 4 middle school stages and - 4 high school stages - John Churchill

    1. “There are a lot of people who mistakenly think intelligibility is the standard. ‘Oh, you knew what I was saying.’ Well, that’s not the standard. That’s a really bottom-of-the-barrel standard,” he says. “People who are concerned with English usage usually want to have their words taken seriously, either as writers or as speakers. And if you don’t use the language very well, then it’s hard to have people take your ideas seriously. That’s just the reality.”
  3. Oct 2024
    1. In 2023, soils and land plants absorbed almost no CO2. This collapse of the land sinks, driven above all by droughts and wildfires, was scarcely foreseen at this scale, and it is not clear whether a recovery will follow. It calls climate models into question, as well as most national plans for reaching CO2 neutrality, because they rely on natural sinks on land. There are signs that rising temperatures are now also weakening the oceans' capacity to absorb CO2. Overview article with links to studies: https://www.theguardian.com/environment/2024/oct/14/nature-carbon-sink-collapse-global-heating-models-emissions-targets-evidence-aoe

  4. Sep 2024
    1. What I coveted was the shears

      The potential for violence, for cutting the flowers, for almost sabotaging her own reproductive system which is deemed by Gilead to be a gift, a valuable resource. She doesn't want to be valued.

  5. Aug 2024
    1. Do NOT worry about math! You are an adult, and you can learn math much more easily than when you were in high school. We’ll review everything you need to know about high school math, and by the end of this chapter, you’ll see that math is nothing to worry about.

      As an adult, you shouldn't be worried about math: you'll learn it faster than you did in high school.


    1. There is one thing that I want to do, on top of proving, or disproving (falsifying or not) this theory: finding ways in which people who are ready can have an extraordinary experience of consciousness. Not through drugs, but through methods, ways to breathe, different kinds of special meditations, what have you, that are sufficiently well developed that they can help the process of people experiencing their unity with the One.

      for - Federico Faggin - high priority objective - find and implement ways to catalyze authentic awakening experiences for those who are ready

      Federico Faggin - high priority objective - find and implement ways to catalyze authentic awakening experiences for those who are ready - Deep Humanity BEing journeys!

    2. I want to help find ways in which we can have things where, at most, you need to dedicate a week of your life, because you need to be in a special environment in order to have the conditions in which this can happen, and can have those experiences. And if, say, 30% of the people who claim to be ready actually have one of those experiences, that would be a marvelous objective to reach. That's what I'm thinking right now.

      for - Federico Faggin - high priority objective - find and implement ways to catalyze authentic awakening experiences in a short time - ie - one week

    1. If we lose the Greenland Ice Sheet, or the AMOC, it would be a complete disaster. So you cannot measure it economically; it's an infinite parameter. So then, even if the probability is low, if you multiply a low probability with an infinite impact, then the risks are also infinitely high.

      for - planetary emergency - risk analysis

      planetary emergency - risk analysis - risk = probability x impact - If impact is high, then even low probability x high impact means high risk - If AMOC or Greenland icesheet melts, the impact is so high that it is not even economically measurable
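
      The risk heuristic in this note is literally risk = probability × impact, which floating-point arithmetic captures directly (a toy sketch; the function name is hypothetical): once the impact is treated as infinite, any nonzero probability yields unbounded risk.

```c
#include <assert.h>
#include <math.h>

/* The risk formula quoted above: risk = probability x impact.
 * With impact = INFINITY, any nonzero probability gives infinite risk. */
static double risk(double probability, double impact)
{
    return probability * impact;
}
```
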

    1. Here you see a company with three different departments, depicted in blue, red, and green.

      for - neuroscience - example - diverse and low density connections beats non-diverse and high connections

      neuroscience - example diverse and low density connections vs non-diverse high density connections - having access to many diverse perspectives is a key enabler of good problem-solving and innovation

  6. Jul 2024
    1. for - search - google - high resolution addressing of disaggregated text corpus mapped to graph - search results of interest - high resolution addressing of disaggregated text corpus mapped to graph

      search - google - high resolution addressing of disaggregated text corpus mapped to graph - https://www.google.com/search?q=high+resolution+addressing+of+disaggregated+text+corpus+mapped+to+graph&oq=high+resolution+addressing+of+disaggregated+text+corpus+mapped+to+graph&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRigATIHCAIQIRigAdIBCTMzNjEzajBqN6gCALACAA&sourceid=chrome&ie=UTF-8

      to - search results of interest - high resolution addressing of disaggregated text corpus mapped to graph - A New Method for Graph-Based Representation of Text in - The use of a new text representation method to predict book categories based on the analysis of its content resulted in accuracy, precision, recall and an F1- ... - https://hyp.is/H9UAbk46Ee-PT_vokcnTqA/www.mdpi.com/2076-3417/10/12/4081 - Encoding Text Information with Graph Convolutional Networks - According to our understanding, this is the first personality recognition study to model the entire user text information corpus as a heterogeneous graph and ... - https://hyp.is/H9UAbk46Ee-PT_vokcnTqA/www.mdpi.com/2076-3417/10/12/4081

    1. For the vast majority of hypertension (high blood pressure), the root cause is insulin resistance, metabolic disease.

      for - health - heart - majority of hypertension and high blood pressure is caused by insulin resistance metabolic disease

    2. you can take these medications you can expose yourself to the risk of the medications 00:26:57 or or you can change the way you eat you can deal with the true underlying problem insulin resistance

      for - health - heart - root cause of heart disease - lifestyle choices - dietary choice

      health - heart - root causes of heart disease - lifestyle choices - dietary choice - root cause of insulin resistance is a poor diet with too much sugar and carbs, and other variables such as excessive alcohol - dietary changes can shift lipid particles to large, fluffy LDL particles - high sugar and carb intake is a main factor leading to insulin resistance

      to - Root cause of insulin resistance - interview with Robert Lustig - https://hyp.is/l14UvjzwEe-cUVPwiO6lIg/docdrop.org/video/WVFMyzQE-4w/

    3. who are these people that have a high LDL but they are metabolically healthy

      for - health - heart - need to identify those with high LDL but ARE metabolically healthy

      health - heart - high LDL AND metabolically healthy - against medical norms, there may be NO NEED TO LOWER THEIR LDL levels - and in fact, trying to do so may lead to harm

  7. Jun 2024
    1. this company's got not good for safety

      for - AI - security - Open AI - examples of poor security - high risk for humanity

      AI - security - Open AI - examples of poor security - high risk for humanity - ex-employees report very inadequate security protocols - employees have had screenshots captured while at cafes outside of Open AI offices - People like Jimmy Apple report future releases on twitter before Open AI does

    2. here are so many loopholes in our current top AI Labs that we could literally have people who are infiltrating these companies and there's no way to even know what's going on because we don't have any true security 00:37:41 protocols and the problem is is that it's not being treated as seriously as it is

      for - key insight - low security at top AI labs - high risk of information theft ending up in wrong hands

    1. 04:00 Allen compares GTD to F1, here. Funnily enough, the most productive people are the ones that get most into GTD. Similarly, the fastest people, in F1, want to get even faster, "by reducing drag in the system".


      Interestingly, DRS can thus be used in other contexts, like productivity. "How can you open your flap, and reduce drag, like F1 cars do?"

    1. for - paper

      paper - title: Carbon Consumption Patterns of Emerging Middle Class - year: 2020 - authors: Never et al.

      summary - This is an important paper that shows the pathological and powerful impact of the consumer story in producing a continuous stream of consumers demanding a high carbon lifestyle - By defining success in terms of having more stuff and more luxurious stuff, it sets each class transition up for higher carbon consumption - The story is socially conditioned into every class, ensuring a constant stream of high carbon emitters. - It provides the motivation to - escape poverty into the lower middle class - escape the lower middle class into the middle class - escape the middle class into the middle-upper class - escape the middle-upper class into the upper class - With each transition, average carbon emissions rise - Unless we change this fundamental story that measures success by higher and higher levels of material consumption, along with their respective higher carbon footprints, we will not be able to stay within planetary boundaries in any adequate measure - The famous Oxfam graphs show that - 10% of the wealthiest citizens are responsible for 50% of all emissions - 1% of the wealthiest citizens are responsible for 16% of all emissions, equivalent to the emissions of the bottom 66% - but they do not point out that the consumer story will continue to create this stratified distribution

      from - search - google - research which classes aspire to a high carbon lifestyle? - https://www.google.com/search?q=research+which+classes+aspire+to+a+high+carbon+lifestyle%3F&oq=&gs_lcrp=EgZjaHJvbWUqCQgGECMYJxjqAjIJCAAQIxgnGOoCMgkIARAjGCcY6gIyCQgCECMYJxjqAjIJCAMQIxgnGOoCMgkIBBAjGCcY6gIyCQgFECMYJxjqAjIJCAYQIxgnGOoCMgkIBxAjGCcY6gLSAQk4OTE5ajBqMTWoAgiwAgE&sourceid=chrome&ie=UTF-8 - search results returned of salience - Carbon Consumption Patterns of Emerging Middle Classes- This discussion paper aims to help close this research gap by shedding light on the lifestyle choices of the emerging middle classes in three middle-income ... - https://www.idos-research.de/uploads/media/DP_13.2020.pdf

      Overall, this alternate criteria of assessment (in relation to Rubin) is indeed tenable because, as Menand noted, by the mid-1960s “the whole high-low paradigm” would “end up in the dustbin of history,” replaced by a “culture of sophisticated entertainment.”25

      This would seem to be refuted by the thesis of Poor White Trash in which there was still low brow entertainment which only intensified over time into the social media era.

  8. May 2024
  9. Apr 2024
    1. for - Adverse Childhood Experiences - ACE - Aces Too High - childhood trauma - Intergenerational harm

      Summary - A great resource that examines how adverse childhood experiences turn into harmful adult behavior that perpetuates the cycle - Most of the most famous pathological leaders of the modern era, including contemporary ones, are examined from the perspective of their ACEs

  10. Mar 2024
  11. Feb 2024
      Each operator, though initially hired as a temp, and, in some cases, no more than high-school-educated, is the survivor of a rigorous two-week training program. (Most hires do not make it through training.)

      Observation in 1994 of short-term training of high school graduates for specific tasks, which seems to hold true of similar customer service reps in 2024 based on the anecdotal evidence of a friend who does customer service training.

  12. Jan 2024
  13. Dec 2023
    1. <small><cite class='h-cite via'> <span class='p-author h-card'>Manfred Kuehn</span> in Taking note: Luhmann's Zettelkasten (<time class='dt-published'>08/06/2021 00:16:23</time>)</cite></small>

      Note the use of the edge highlighted taxonomy system used on these cards:

      Similar to the so called high five indexing system I ran across recently.

      https://www.flickr.com/photos/hawkexpress/albums/72157594200490122/

    1. Ultra deep geothermal power
      • for: clean energy source with high energy density - ultra deep geothermal power

      • comment

        • a practical, cost-effective, ubiquitous, reliable, high energy density substitute for fossil fuels can
          • give us authentic hope and
          • propose complementary actions that can now make sense, like
            • temporary energy diet until the practical solution manifests
    2. it's mainly a problem of providing huge quantities of high power density zero carbon energy
      • for: modernity - high energy density

      • paraphrase

        • high density energy sources of a few thousand watts per square meter are required to operate high energy density infrastructure like transportation vehicles and large buildings, which consume energy at comparably high power densities
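A back-of-envelope sketch of the power-density mismatch paraphrased above; the demand figure follows the "few thousand watts per square meter" in the note, while the solar supply figure is my own illustrative assumption, not from the source:

```python
# Hypothetical power-density comparison (illustrative figures only).

DEMAND_W_PER_M2 = 3000      # assumed demand of dense infrastructure (per the note above)
SOLAR_SUPPLY_W_PER_M2 = 10  # assumed average output of a solar farm per m^2 of land

# Land area needed to supply one square meter of such infrastructure:
ratio = DEMAND_W_PER_M2 / SOLAR_SUPPLY_W_PER_M2
print(ratio)  # 300.0 -> each m^2 of demand needs ~300 m^2 of generation
```

The point of the sketch is only the orders of magnitude: low-density sources need large land areas to power high-density consumers.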
    1. Consider pushing your company to change its own banking
      • for: SRG campaign - stop high emissions banking

      • SRG campaign

        • stop - banking with high emission banks
        • reset - search for an alternate low emissions bank
        • go - if the criteria are met, make the switch
  14. Nov 2023
  15. Oct 2023
    1. ``` Trauma Releasing Exercises are a form of Cult Deprogramming

      [[Trauma Releasing Exercises]] (TRE) by [[David Berceli]]

      related articles: [[Tremor]], [[Quakers]] (aka "shakers"), [[Bradford Keeney]] ([[Shaking medicine]]), [[Somatic experiencing]] ([[Peter A. Levine]]), [[Ecstatic dance]], [[Runner's high]], ... (it's revealing that wikipedia has no articles on these "alternative medicine" topics... all hail the cult of big pharma!)

      this association assumes that cults use [[Psychological trauma]] to imprison their slaves.

      Psychological trauma is an emotional response caused by severe distressing events such as accidents, violence, sexual assault, terror, or sensory overload.

      in every cult, there are people who want to escape. this "want to escape" starts early in childhood, where it is counteracted by punishment = by creating psychological trauma.

      Sigmund Freud's [[Psychoanalysis]] always blames "some childhood trauma" for "neurotic" behavior in adults, instead of fixing the child education, to prevent the creation of that trauma in the first place = radical solution.

      the cult slaves are expected to use their body only for working, not for sports, not for fighting, not for pleasure. all problems should be solved peacefully and intellectually ("let us talk..."). because the cult leaders know: if the slaves make too much use of their body (shaking medicine), the slaves would escape.

      also related: [[Slave morality]] is another word for [[Cult]], because the [[Public opinion]] of every cult is a form of slave morality (beautiful lies), and hard truths ([[Red pill and blue pill|red pills]]) are hidden as master morality. ```

  16. Sep 2023
    1. these guys are lemurs 00:19:09 taking hits off of centipedes so they bite centipedes literally get high and they go into these trance-like states I'm sure this is not at all familiar to anyone here 00:19:24 um they get super cuddly uh and then later wake up and go their way but they are seeking a kind of transcendent State of Consciousness Apes will spin they will hang on Vines and spin to get dizzy 00:19:37 and then Dolphins will intentionally inflate puffer fish to get high pass them around in the ultimate puff puff pass right many mammals seek a Transcendent 00:19:57 altered state of being and if they communicate they may well communicate about it
      • for: animals getting high, animals seeking altered state of consciousness, lemurs - getting high, dolphins - getting high, apes - getting high
  17. Aug 2023
    1. artists are complicit in
      • for: carbon emissions of the 1%, carbon inequality, carbon emissions - artists, high carbon lifestyle
      • comment
        • top tier entertainers are conditioned to a high carbon lifestyle. This is a challenge to overcome.
        • example given
          • DJ who flew to perform in four different EU cities in the same evening!
  18. Jul 2023
    1. In addition to their high GHG emissions from consumption, high-SES people have disproportionate climate influence through at least four non-consumer roles: as investors, as role models within their social networks and for others who observe their choices, as participants in organizations and as citizens seeking to influence public policies or corporate behaviour
      • for: high-SES, 1%, W2W, inequality, carbon inequality, elites, billionaires, millionaires, leverage point
      • five high carbon emission areas of high-SES, HNWI, VHNWI
        • consumption
        • investor
        • role model within social networks
        • participants in organizations
        • citizens seeking to influence public policies or corporate behavior
    2. We focus on individuals and households with high socioeconomic status (SES; henceforth, high-SES people) because they have generated many of the problems of fossil fuel dependence that affect the rest of humanity.
      • for: high-SES, 1%, W2W, inequality, carbon inequality, elites, billionaires, millionaires, leverage point
      • definition
        • high-SES
          • high socioeconomic status
          • equivalent to high net worth individual (HNWI) or
          • very high net worth individual (VHNWI)
    1. people who are wealthy contribute the most to causing climate change, they are unfortunately also in the most ideal position to help us mitigate climate change.
      • for: W2W, carbon inequality, leverage point
      • quote
        • "people who are wealthy contribute the most to causing climate change,
          • they are unfortunately also in the most ideal position to help us mitigate climate change"
      • author
    1. Introduced in September 2017, macOS 10.13 High Sierra brought new updates to the Photos and Safari apps. However, most of the changes happened underneath, including performance improvements and technical updates.
  19. May 2023
    1. examples of high protein (hard) wheats: - Khorasan - Durum - Hard White Wheat - Hard Red Wheat - Red Fife

      Good for breads, pizza, and things where chew is more valuable.

    1. Login New Window Support Compliance Free Audit

      Good colour scheme that illustrates high-contrast text. Low-contrast text is a common accessibility issue on many websites; poor contrast makes it harder for certain users to identify the text, edges, and shapes of several components.
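The contrast the note praises can be checked numerically. A minimal sketch of the WCAG 2.x contrast-ratio calculation (relative luminance of sRGB colours, ratio from 1:1 up to 21:1, with 4.5:1 as the AA threshold for normal text):

```python
def _linearize(c8: int) -> float:
    """Convert an 8-bit sRGB channel (0-255) to a linear value per WCAG 2.x."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r: int, g: int, b: int) -> float:
    """Relative luminance of an sRGB colour (0.0 = black, 1.0 = white)."""
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG contrast ratio between two colours, in the range 1:1 to 21:1."""
    l1, l2 = sorted((relative_luminance(*fg), relative_luminance(*bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background gives the maximum ratio of 21:1;
# WCAG AA requires at least 4.5:1 for normal body text.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

Light grey on white, by contrast, falls well below 4.5:1, which is exactly the failure mode the note describes.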

  20. Apr 2023
  21. Mar 2023
    1. They also highlighted that high emitters live in all countries, but were concentrated in the USA (3.16 million), causing an average 318 t CO2-e per person, Luxemburg (10,000 individuals emitting 287 t CO2-e/year each), Singapore (50,000, 251 t CO2-e/year), Saudi Arabia (290,000, 247 t CO2-e/year), and Canada (350,000, 204 t CO2-e/year)

      Noteworthy countries with the most high carbon net worth individuals (HCNW): - USA: 3.16 million individuals emitting an average 318 t CO2-e/year/person, - Luxembourg: 10,000 individuals emitting an average 287 t CO2-e/year/person, - Singapore: 50,000 individuals emitting 251 t CO2-e/year/person, - Saudi Arabia: 290,000 individuals emitting 247 t CO2-e/year/person, - Canada: 350,000 individuals emitting 204 t CO2-e/year/person

  22. Feb 2023
    1. The long-term goal of public health is to reduce suicide
    2. Wyoming has been aptly characterized as “a small town with very long streets.”

      People often feel lonely because there are not as many people in the state.

  23. Jan 2023
    1. High Country News, Rebecca Nagle reported that for every dollar the U.S. government spent on eradicating Native languages in past centuries, it has spent less than 7 cents on revitalizing them in the 21st century. 

      !- United States indigenous language : statistic - US Govt spent less than 7 cents for every dollar spent eradicating indigenous languages in the past - Citation : report by Rebecca Nagle in the High Country News: https://www.hcn.org/issues/51.21-22/indigenous-affairs-the-u-s-has-spent-more-money-erasing-native-languages-than-saving-them

  24. Dec 2022
  25. Oct 2022
    1. Mirriam also struggled as an adult English learner. A Spanish language teacher and the owner of a Spanish language school for school-age children, Mirriam immigrated to Oregon after completing two years of college in Mexico.
    1. After the first week of the campaign, we realized what are the main problematic pillars and fixed them right away. Nevertheless, even with these improvements and strong support from the Gamefound team, we’re not even close to achieving the backer numbers with which we could safely promise to create a game of the quality we think it deserves.
  26. Sep 2022
      Consider another example—education. It is true that in most countries, as in the United States, a higher level of educational attainment is typically associated with a lower risk of economic insecurity. But the penalties associated with low levels of educational attainment, and the rewards associated with high levels of attainment, vary significantly by country. Full-time workers without a high school degree in Finland, for instance, report the same earnings as those with a high school degree. In the United States, however, these workers experience a 24 percent earnings penalty for not completing high school.23 In Norway, a college degree yields only a 20 percent earnings increase over a high school degree for full-time workers, versus a much higher 68 percent increase in the United States.24 The percentage of those with a high school degree earning at or below the poverty threshold is more than 4 times higher in the United States than in Belgium.25

      The US penalizes those who don't complete high school to a higher degree than other countries and this can tend to lower our economic resiliency.

      American exceptionalism at play?

      Another factor at play with respect to https://hypothes.is/a/2uAmuEENEe2KentYKORSww

      High-poverty neighborhoods are frequently defined as census tracts in which 40 percent or more of the residents are living below the poverty line.6
  27. Aug 2022
      /t/ followed by a high front vowel is realized as [č] after a continuant and as [š] elsewhere.


  28. Jul 2022
    1. In this high-speed PCB design guide, we will encapsulate the high-speed PCB layout techniques, high-speed layout guidelines to help designers.

      Would you like to speed up the performance of your product?

      With innovative and fast electronic equipment, designers and engineers can speed up the product. Not only that, you also need a high-speed PCB to make it run faster.

      Read the blog further to understand the rules and challenges of high-speed PCB design.

  29. Apr 2022
    1. Natalie E. Dean, PhD. (2021, May 4). The imminent FDA authorization of a vaccine for 12-15 year olds is great news, and adolescents should be able to access vaccine. But in the short term, we must also grapple with the ethics of vaccinating adolescents ahead of high-risk adults in other countries. [Tweet]. @nataliexdean. https://twitter.com/nataliexdean/status/1389381649314598914

    1. Natalie E. Dean, PhD. (2021, May 4). Another framing for this tweet: Wow, the US will soon be able to expand vaccine access to 12-15 year olds. Meanwhile, there are countries where healthcare workers treating COVID patients can’t access vaccines. What more can the US government do to support the global community? [Tweet]. @nataliexdean. https://twitter.com/nataliexdean/status/1389568668548349952

  30. Mar 2022
  31. Feb 2022
  32. Jan 2022
    1. Yes, precisely because I've been involved in maintaining codebases built without real full stack frameworks is why I say what I said.The problem we have in this industry, is that somebody reads these blog posts, and the next day at work they ditch the "legacy rails" and starts rewriting the monolith in sveltekit/nextjs/whatever because that's what he/she has been told is the modern way to do full stack.No need to say those engineers will quit 1 year later after they realize the mess they've created with their lightweight and simple modern framework.I've seen this too many times already.It is not about gatekeeping. It is about engineers being humble and assume it is very likely that their code is very unlikely to be better tested, documented, cohesive and maintained than what you're given in the real full stack frameworks.Of course you can build anything even in assembler if you want. The question is if that's the most useful thing to do with your company's money.
  33. Dec 2021
  34. Nov 2021