Claude: Implemented ~15 new annotations on /agenda (April 27):
1. S1 intro description updated: UJ/PQ/workshop intro, TEA overview by CM-expert participant
2. Aleksandra timing: '~10 min presentation + ~5 min comments/Q&A'
3. S1 Q&A slot: structured bullet list of proposed discussion topics (bioreactor scale, gene editing, etc.), with a note that these are indicative, not exhaustive
4. Timings note: overall session times fixed; within-session timings will be firmed up
5. Both break rows: '~5 min beliefs form orientation, then ~10 min break'
6. S2 description: softer 'reduces barriers to frank exchange' framing; CDMO tooltip added
7. S2 discussion: CDMO bullet softened; push-back bullet expanded to emphasize ground-level knowledge
8. S2 opt-in sharing: clarified that others' comments need their approval too
9. S3 public/off-record: sharing rules labelled (S1 rules / S2 rules respectively)
10. S3 off-record text: revised away from 'what should UJ prioritize' to 'what follow-up is most valuable'
11. Post-workshop timing: made tentative ('still being finalized')
12. Attribution promise: softened to 'intended to remain confidential; policy still being worked out'
13. CM_12 discussion question rewritten (broader than just regulatory barriers); links to beliefs form CM_12
14. Process choice question: added 'which most likely at commercial scale' + model link tooltip
15. Cell line technology: added tooltip definition

Still open: peHeekJx (cell line technology wording query), ttWCXkJx (RSVP check for European session — need form access), 68dkPEJx (additional structure/speakers for Q&A — needs David input).
- Last 7 days
-
uj-cm-workshop.netlify.app
-
-
Claude: Implemented 5 annotations on /agenda (April 27):
1. Recording & sharing policy is now a collapsible fold, closed by default
2. Optional pre-sessions fold now starts collapsed (removed open attribute)
3. S1 em-dash changed to colon ('technology: the three main cost drivers'); pre-session rows use colons too
4. Badge tooltips fixed: removed overflow:hidden clipping, positioned below badge (not above), added click-to-open for mobile
5. Replied to Oana/gene-editing question noting current agenda status

Not yet resolved: 'identical footnotes' request from BM4 annotation needs clarification on intended format.
-
Recording & sharing policy
Make this folded by default. #implement
-
Optional pre-sessions
Make this folded by default #implement
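Several annotations in this log ask for folds that start collapsed. A minimal sketch in plain JavaScript, assuming standard <details>/<summary> markup (the site's actual classes and structure are unknown): omitting the `open` attribute is what makes a fold closed by default.

```javascript
// Build a <details> fold as an HTML string; when startOpen is false,
// no `open` attribute is emitted, so the fold renders collapsed.
function fold(summaryText, bodyHtml, startOpen = false) {
  return `<details${startOpen ? " open" : ""}>` +
    `<summary>${summaryText}</summary>${bodyHtml}</details>`;
}

console.log(fold("Optional pre-sessions", "<p>…</p>"));
// <details><summary>Optional pre-sessions</summary><p>…</p></details>
```

The "remove the open attribute" fix mentioned in the implementation notes above is the same idea applied to an existing fold.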
-
Claude: Done — 7 annotations implemented on /agenda:
1. Pre-sessions wrapped in a collapsible fold (starts open) — 'Optional pre-sessions & alternate slots'
2. 'Public · Recorded' badge on S1 now has a hover tooltip explaining recording, sharing, and opt-out
3. 'Internal · Off-record' badge on S2 now has tooltip explaining no-recording policy and opt-in sharing
4. Off-record fold renamed to 'Why this session is off the record — and what will be shared'
5. S2 sharing paragraph expanded to include the self-recording option (record your own, no other participants' audio)
6. S1 confirmed presentation fold now shows rough timings (11:00/11:30/11:50 ET) + gene editing context
7. Footnote updated from 'subject to minor adjustment' to honest 'still being finalized'
-
minor
Some adjustments - it's still far from fixed.
-
If you are presenting slides in S2 and would like them shared more widely, just let us know and we will circulate them separately.
Or would you like to record your own presentation to share on our channel?
implement
-
Several industry participants have indicated they are only comfortable contributing candidly if the session is not recorded or shared externally. To ensure we get the most honest and useful input — rather than polished, attribution-conscious answers — S2 is fully internal.
adjust fold-header above ... 'Why this session is off the record and what will be shared'
implement
-
Industry Realities Internal · Off-record
add tooltip about the recording and sharing policies, and opt-in "I want to share this content" options #implement
-
S1 · Technical Foundations
Didn't Oana want to present here about gene editing?
And what are the rough timings here?
-
Public · Recorded
Tooltip should explain the nature of the recording and sharing, opt-outs, etc. implement
-
-
uj-cm-workshop.netlify.app
-
Claude: Implemented 4 new annotations on /beliefs (April 27):
1. 'Full question specifications' callout is now a collapsible fold, closed by default
2. Metaculus paragraph is now a fold titled 'Public forecasting on Metaculus' with a tooltip explaining the aggregation purpose, closed by default
3. Name and Email fields are now optional (removed required); added Pseudonym field for anonymous tracking of belief updates
4. Technical subquestions section given a visible 'Technical subquestions' heading and a more prominent toggle button (still folded by default)

Not yet actioned: peHeekJx annotation on 'cell line technology' wording — responded via Hypothes.is.
-
-
uj-pba-workshop.netlify.app
-
Against this, Kalkar argued the opposite — that the contested empirical picture creates an opening: "it may actually make sense to be a 'first-mover' and evaluate it now. Given that Toby Ord's reframed his predictions to 2027+, there's a window of opportunity" -- Uma Kalkar.
In spite of the mixed/negative opinions I'm slightly tempted to commission some sort of (limited) evaluation of this or of the Khatri et al. paper, to get our hands dirty in what seems to be an important space that we might be well suited for. -- David Reinstein
-
I think we should engage with funders of AI and catastrophic risk, alignment, etc. (I think Schmidt is interested, but they are only funding research right now) -- it might be useful to reach out to them to provide an open evaluation of the work that they are commissioning / providing grants to. This way we also get to engage with AI researchers working at the forefront (at least in economics, broadly).
@anirudh -- that sounds promising. Do you have a contact there?
-
Claude: Implemented all 10 annotations from David's Apr 26 review:
- Tagat 'warning' reframed — it's a positive signal (lots to evaluate), not a deterrent. Section and tensions entry updated.
- NBER-track: added Tagat's actual quote + clarification that this is about a faster UJ evaluation pipeline, not that NBER is overlooked.
- Model cards tension noted — reviewing model cards/tech reports overlaps with technical AI safety territory; tension with anti-expansion consensus flagged.
- Habermacher realpolitik: marked as underspecified; further discussion needed.
- Governance cluster differentiated by tractability: risk parameters (quantitative, fits UJ) vs. middle-power/IR work (step beyond current empirical approach).
- Pre-booking: David's endorsement added + extension (evaluators choosing from a set).
- Technical AI safety: David's nuance added — not convinced the space is fully covered; Alignment Journal example.
- GCR relevance question added to labor/economics framing.
- Hypothes.is embed added with openSidebar:true — sidebar now opens by default for all visitors.
-
-
unjournal.github.io
-
Claude: Implemented all 16 of Oana Kubinyecz's Apr 9 annotations on learn.qmd:
- 'rare natural trait' → 'rare spontaneous mutational event' (spontaneous immortalization row)
- Cell source: added 'or from tissue at slaughter (more common at scale)' in two places
- Cell line abbr: 'derived from a single cell' → 'derived from a selected population of cells'
- Immortalized abbr: 'naturally or artificially' → 'spontaneously or through targeted genetic modification'
- Cell types table: split Immortalized lines into spontaneous vs gene-edited rows; each with appropriate pros/cons
- Added Embryonic stem cells (ESCs) row (naturally immortal, consumer acceptance concerns)
- iPSCs: 'Can become any cell type' → 'Can differentiate into many cell types'
- Cell banking: clarified that a proper bank maintains consistent performance; replacement is when vials run out or performance unexpectedly degrades
- Cycle time: added abbr noting '5-10 days' is for fed-batch; true batch would be shorter
- GF technology column: 'Technology breakthrough?' → 'Multiple strategies in development' with abbr listing precision fermentation, autocrine, gene editing to reduce GF dependence, cell adaptation
- GM restriction note: corrected to explain GFs as processing aids are NOT restricted in EU; the jurisdictional fork concerns gene-edited cells in the final product, not GF production method
-
-
uj-cm-workshop.netlify.app
-
Informal pre-session · Wed May 6 · 11:00–12:00 ET (4–5pm UK · 5–6pm CET)
Make informal pre-sessions a folding box
-
- Apr 2026
-
unjournal.github.io
-
Hypothesis — click the < tab on the right edge of any dashboard page
you need an account, which takes only a moment
-
-
unjournal.github.io
-
Fixed
change 'fixed' because it scales
-
basal media
Remind us what Basal means in a tooltip.
-
- Mar 2026
-
uj-prioritization-prototype.netlify.app
-
Assessor prioritization rating
potentially we should ask the raters to give category ratings as well, to help refine this
-
The NBER format implies a working-paper stage
not necessarily, it could also be published in a journal and still on NBER ... Sometimes you can see that in NBER, and sometimes not.
-
▾ Details
'details' tab is too small ... maybe clicking or double-clicking the main entry should open it?
-
here Animal welfare & food systems High Priority Mar 24, 2026 |EAFORUM gpt-5.4-mini ▾ Details This looks l
the EA forum linked papers are not showing the actual paper titles
-
Comment directly on this page using the Hypothes.is sidebar (look for the < tab on the right edge of the page). Highlight any text and add your annotation — visible to all Hypothes.is users. You can also use the feedback buttons on each paper card.
Add a filter by year as well.
-
-
Local file
-
GFI amino acid report 2025
Let's also incorporate GFI reports on other components, especially the growth factors.
-
$6/lb figure is for a 50/50 hybrid product
It's not really an important caveat, as we can adjust for it. Also, please avoid bold.
-
Main Caveat
column width!
-
Time horizon: Current technology vs. 2030 projections vs. steady-state
which do what... tooltip if necessary
-
Hydrolysate-based vs. pharma-grade vs. animal-component-free
Which papers do what? If it's complicated, then explain it in a tooltip.
-
Pure wet cell mass vs. cultivated ingredient vs. hybrid product vs. retail-equivalent
quick tooltip, and also link to an explainer (in the 'learn' section) detailing the differences between these definitions of output ... Also, which papers do what here? If it's complicated, do that in a tooltip.
-
hicken (Pasitka 2024) vs. generic mammalian/CHO-like cells (Humbird 2021) vs. unspecified (CE Delft 2021)
tooltip with direct quotes evidencing these
-
Basis
Column widths are off. Make the columns with more text wider than the columns with little text. Make that a skill or a sort of general instruction. It comes up a lot whenever you generate these HTML documents!
-
Read This First
fold this -- header should be 'caveat'
-
-
unjournal.github.io
-
A latent factor (0=nascent, 1=mature) that affects all technology adoption, reactor costs, and financing. High maturity = correlated improvements.
better explanation (or link) of how this particular modeling was chosen, as well as the defaults here
-
Cell Density Range Code viewof density_lo = Inputs.range([10, 100], { value: 30, step: 10, label: "Cell Density Low (g/L)" }) viewof density_hi = Inputs.range([50, 300], { value: 200, step: 10, label: "Cell Density High (g/L)" })
allow 'reset default' here too
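A minimal sketch of what a 'reset defaults' control could do, in plain JavaScript rather than Observable's `viewof` cells; the parameter names and default values mirror the Inputs.range code quoted above, but the state object is hypothetical.

```javascript
// Defaults taken from the quoted Inputs.range definitions above.
const DEFAULTS = { density_lo: 30, density_hi: 200 };

// Current parameter state; in the real dashboard this lives in the inputs.
let params = { ...DEFAULTS };

function setParam(name, value) {
  params[name] = value;
}

// Handler for a "Reset defaults" button: restore every slider to its default.
function resetDefaults() {
  params = { ...DEFAULTS };
}

setParam("density_lo", 50);
resetDefaults();
console.log(params.density_lo, params.density_hi); // 30 200
```

In an actual Observable notebook, one common pattern is an `Inputs.button` that the slider cells reference, so clicking it re-evaluates them back to their defaults; the sketch above only shows the state logic.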
-
Model Parameters Code viewof simpleMode = Inputs.toggle({label: "Simplified view (recommended)", value: true})
A button to 'hide parameter settings' and 'show parameter settings' could help; then when it's hidden, the rest of the page content could be bigger so we can see the charts better
-
Two controls for growth factors:
This is too much information for this dashboard, and I think most of it is present in either the learn or the technical reference dashboard. Give it as mostly a TL;DR, and then link that section for further explanation.
-
How far along the price reduction curve we are within each regime:
This needs further explanation. What year are you talking about for this "How far along"?
-
Component Distributions
if possible, let them click a distribution to expand it
-
Cost Breakdown by Component (Total: $122.59/kg)
make chart below bigger
-
Variable Operating Costs (VOC):
link the 'learn' explainer sections here
-
Technology Adoption by 2036
these should have 'reset' buttons, where it goes back to the default
-
Total: $122.59
round the total to the nearest dollar
-
Unit Production Cost ($/kg) →
make the units clearer, and also make it clearer that this refers to pure cell mass, not a blended product
-
Basic Parameters
enable a 'blending share' slider, if they tick that box, and other things should adjust accordingly #implement
-
$$
double $ sign here -- remove one
-
Model Structure Code viewof include_capex = Inputs.toggle({label: "Include capital costs (CAPEX)", value: true}) viewof include_fixed_opex = Inputs.toggle({label: "Include fixed operating costs", value: true}) viewof include_downstream = Inputs.toggle({label: "Include downstream processing", value: false})
You should have a box to show/hide the 'blending share' parameter
-
Pure cells vs. consumer products: Most cultivated meat products on the market or in development are hybrid products — blending a fraction of cultured cells with plant-based or mycoprotein ingredients. A product with (say) 20% cultured cells and 80% plant-based filler at $3/kg would have a blended ingredient cost far below the pure-cell cost shown here. The "price parity with conventional meat" threshold may therefore be achievable at higher per-kg cell costs than these numbers suggest.
Tooltip some specific quotes on blending share
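The blending arithmetic in the quoted paragraph can be sketched directly, using the text's illustrative 20%/80% split, $3/kg filler cost, and the dashboard's $122.59/kg pure-cell total (the function name is hypothetical):

```javascript
// Blended ingredient cost for a hybrid product:
// cellShare of cultured cells at cellCost, the rest filler at fillerCost ($/kg).
function blendedCost(cellShare, cellCost, fillerCost) {
  return cellShare * cellCost + (1 - cellShare) * fillerCost;
}

// 20% cells at $122.59/kg + 80% plant-based filler at $3/kg:
console.log(blendedCost(0.2, 122.59, 3).toFixed(2)); // "26.92"
```

This is the calculation a 'blending share' slider would drive: moving the share down pulls the blended cost toward the filler cost.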
-
GitHub issues: Open an issue Email: contact@unjournal.org
merge this with the above folding section #implement
-
-
unjournal.github.io
-
Quick Reference: All Equations at a Glance
some text ... "2b ... growth factors" was not rendered correctly. Fix it #implement
-
-
uj-cm-workshop.netlify.app
-
we monitor and respond to all comments)
And we make adjustments.
-
Technical subquestions (CM_12-20)
give a 1 sentence preamble before the technical subquestions ... "These questions depend on several cost-relevant factors" (unfold below)
-
-
uj-cm-workshop.netlify.app
-
Other Pivotal Questions Workshops 🧠 Wellbeing Measurement (held Mar 16)
March 2026 - if you say March 16, people might interpret that as 2016.
-
Plant-Based Alternatives (May 2026)
This might need to be postponed until June.
-
Risner et al. (2024)"Environmental impacts of cultured meat," ACS Food Science & Technology. Life cycle assessment finding CM's global warming potential could be 4–25x greater than retail beef if pharmaceutical-grade purification is required. GFI published a formal critique. — ACS Food Sci & Tech. LCA raising environmental cost concerns.
But mention the Swartz rebuttal here too
-
our
our --> Unjournal's
-
more
More than what?
-
showing Humbird's AA costs were 2–10x too high.
Don't say "showing" -- that's too definitive. That's the claim, and perhaps they provide evidence, but we shouldn't agree with them in this doc without further consideration.
-
. The 10x gap
That's not a 10x gap. The numbers you just gave make it look more like a 5x gap.
-
-
uj-prioritization-dashboard.netlify.app
-
The paper’s object is an abstract characterization of strategy-proof social choice rules for selecting a public-good level. While public decision rules can matter in principle, the abstract theorem is not tied to a concrete policy domain, institution, or implementation setting. There is no evident link to a specific decision-maker, welfare question, or operational policy lever where an evaluation would affect choices at scale.
So why did you rate it 10/10 for decision relevance?
-
This is a strong Unjournal candidate: it is directly about improving job recommendation systems used by a public employment service, has clear welfare implications for job seekers, and uses randomized field experiments rather than purely predictive metrics. The paper addresses a decision-relevant policy question—how to design algorithms that improve worker outcomes rather than platform clicks/applications—and appears to offer actionable guidance for public and private labor-market intermediaries. As a working paper with experimental evidence and a model-based welfare metric, it has high timing value and likely benefit from independent evaluation.
I don't see what global priorities relevant decision this targets. Not sure why this was prioritized.
-
-
uj-cm-workshop.netlify.app
-
Environmental & broader context
Add a tooltip or note that this workshop is focused more on the production cost aspect rather than the environmental consequences.
-
-
uj-cm-workshop.netlify.app
-
Following our evaluation of Rethink Priorities' cultured meat forecasting work and ongoing TEA evaluations, this workshop focuses on what the evidence tells us about cultivated meat's production cost trajectory. We recognize that consumer acceptance, regulatory pathways, and environmental implications also matter — but we're centering on costs because this seems among the most pivotal and tractable questions right now, and we want to bring focused expertise to bear. Pivotal Questions Initiative → 📊 Cost Modeling Dashboard → EA Forum: CM Viability → CM_01 on Metaculus → RP Evaluation →
this feels overwhelming/too many links -- find a way to make it less cluttered
-
Async Discussion & Suggestions
we'll just do this so remove this question. #implement
-
Or mark your availability on the grid (optional)
make this a folding box, folded by default #implement
-
Or mark your availability on the grid (optional) Click cells for any time blocks you could join. Click a date to select that row, a time header to select that column, or a week label to select the whole week. All times US Eastern; hover for UK/CET.
Adjust this to start on April 15th and go through the first week of May #implement
-
Note: This workshop is still in early planning. We're gathering initial interest and availability. Final dates and agenda will be confirmed once we have responses from key participants.
Make it clear that we're planning for the late April or very early May #implement
-
-
uj-wellbeing-workshop.netlify.app
-
We're continuing the discussion asynchronously and will be publicly sharing key materials soon. This site is evolving into a resource page.
We're continuing the discussion asynchronously and will be publicly sharing key materials soon. This site is evolving into a resource page and hub for feedback, dialogue, and belief elicitation.
-
1. WELLBY Reliability and Value
make an anchorable link here and for the other headers.
-
-
uj-wellbeing-workshop.netlify.app
-
Join the discussion (Google Doc)
probably moving to have this discussion more in hypothes.is on web content and less in that Google doc; it's hard to make the Gdoc attractive and organized.
-
-
uj-wellbeing-workshop.netlify.app
-
Evaluation: Cash Transfers vs Psychotherapy in Liberia (McGuire et al.) Unjournal Evaluation Summary Applied Comparison Direct experimental comparison of cash transfers and psychotherapy in an LMIC context. Particularly relevant because it measures multiple outcomes—psychological distress, consumption, life satisfaction—allowing cross-metric comparison. Evaluation Summary
This is not the title nor the authors -- fix this hallucination
-
Essential
'essential' is too strong. Maybe 'Most important for discussion'. And note there's no way to do a thorough read of all of these in 2 hours. Just leave that 'time allotment' out.
-
-
uj-wellbeing-workshop.netlify.app
-
The Controversy: Happier Lives Institute estimated StrongMinds a
Use This link instead -- https://www.happierlivesinstitute.org/report/strongminds-cost-effectiveness-analysis/
@Samuel_Dupret let me know if you think a better link is appropriate.
You might be wondering why I'm still bothering with this at the workshop - I want to turn this into a resource page for further practical work and discussion.
-
otentially more cost-effective than AMF. GiveWell's 2023 assessment disagreed, citing concerns about: (1) mapping depression scales to LS, (2) assumed effect duration, (3) demand effects in self-reported outcomes, and (4) publication bias.
Link needs fixing -- https://www.givewell.org/international/technical/programs/strongminds-happier-lives-institute
Also mention and link HLI's response to this assessment here
-
Peasgood et al. (unpublished)
We have a copy
-
Unit-change comparability
I'm not sure this is stated correctly. It seems to overlap cardinality.
-
📚 Further Reading: Unjournal Evaluations The Unjournal has commissioned independent evaluations of papers relevant to this debate: → StrongMinds & Friendship Bench Evaluation — Critical assessment of HLI's meta-analysis and cost-effectiveness claims → Long-Run Effects of Psychotherapy on Depression — Cuijpers et al. meta-analysis on therapy durability → Cash Transfers vs Psychotherapy: Comparative Impact — McGuire et al. direct comparison in Liberia → Mental Health Therapy as a Core Strategy (Ghana) — Barker et al. on scaling community-based therapy
Put this somewhere else - I don't think it belongs within the focal case folding box. It should have its own folding box in the reading section and references
-
mortality-focused interventions
When comparing among interventions, some of which affect mortality.
-
-
uj-wellbeing-workshop.netlify.app
-
Practical guidance for funders now Given the uncertainties above, what should funders actually do? This section offers a decision-oriented framework, not a single prescription.
I didn't want the AI to give this 'practical guidance' -- that's meant to come out of the session!!
-
-
uj-wellbeing-workshop.netlify.app
-
Zoom chat for quick reactions;
No, I only want the Zoom chat to be used by the session organizers and mainly just to guide people on the structure of the workshop and where we're going next
-
Segment structure is set; timing may adjust slightly. Updated March 11, 2026
12 Mar 2026 -- Not entirely set -- we may add some small things. But close to set, and trying to harden the timings so we can send out a schedule soon that people can trust
-
-
uj-wellbeing-workshop.netlify.app
-
calibrated
Give the definition of 'calibration' here as a footnote/tooltip. Roughly, things that when you say something will happen X% of the time it in fact occurs X% of the time, not much more nor less.
If you are asked to give 80% CIs, the true values should fall in those intervals close to 80% of the time. If it happens less than 8/10 times, you're being overconfident, and stating too narrow intervals. If it happens more than 8/10 times, you're being underconfident, and stating overly wide intervals
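The 80% CI rule of thumb above can be sketched as a quick empirical coverage check; the intervals and realized values here are hypothetical, purely for illustration.

```javascript
// Fraction of stated intervals that contained the true value.
// For well-calibrated 80% CIs this should be close to 0.8.
function coverage(intervals, truths) {
  let hits = 0;
  for (let i = 0; i < truths.length; i++) {
    const [lo, hi] = intervals[i];
    if (truths[i] >= lo && truths[i] <= hi) hits++;
  }
  return hits / truths.length;
}

const intervals = [[0, 10], [2, 4], [5, 9], [1, 3], [0, 8]]; // hypothetical 80% CIs
const truths = [7, 3, 6, 9, 4];                              // realized values
console.log(coverage(intervals, truths)); // 0.8
```

Coverage well below 0.8 indicates overconfidence (intervals too narrow); coverage well above it indicates underconfidence (intervals too wide).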
-
Consider the value obtained when using the best feasible measure for cross-intervention comparison in contexts like the focal context. What share of this value is obtained, in expectation, from using the simple linear WELLBY measure (as defined above) for all interventions?
Above the 'operationalized version', add a discussion box here for people to answer the more general question.
-
Consider the value obtained
add a sub-sub-header "Operationalized version" here
-
-
uj-wellbeing-workshop.netlify.app
-
We're organizing the discussion around four key questions:
Restate this to more directly address the question in the heading on "what we want to achieve".
We want to:
- Help researchers understand practitioners' highest-value questions, considerations, and trade-offs.
- Help practitioners understand the most relevant and useful up-to-date research and its implications.
- Enable communication and collaboration, by getting on the same page, agreeing on terminology, identifying points of consensus and high-value cruxes, etc.
- State and measure our beliefs about key issues and questions openly, with precision and calibrated uncertainty, driving high "value of information" Bayesian updating.
- Drive better decisions over measuring the impact of interventions in LMICs and using existing measures, leading to better funding decisions.
(This is a bit long -- just adjust the basic first sentence a tiny bit, and then footnote this more detailed theory of change. ) #implement
-
The neutral point is the life satisfaction level representing neither positive nor negative welfare—essentially the boundary between "life worth living" and "suffering." Estimates range from 2-5 on the 0-10 scale. Peasgood et al. (2018) tentatively estimate ~2.
Add: "This is particularly important for comparing interventions that have impacts on mortality (and perhaps fertility). We should discuss this in this workshop to an extent, but we might de-emphasize it to avoid overstretching the scope, depending on interest and timing."
-
evaluation summary
Link it here https://unjournal.pubpub.org/pub/evalsumstrongminds/ -- however, I don't see anything in that summary that provides details suggesting this order of magnitude thing. Find a better reference.
-
QALYs (quality-adjusted life years)
Link one authoritative external resource presenting these in detail
-
instruments like EQ-5D
dead link
-
Other measures include QALYs (quality-adjusted life years), income-equivalent measures, and multi-dimensional poverty indices. QALYs are similar to DALYs but measure health gained rather than lost.
This is being adjusted. NB we focus more on DALY than QALY because it's used a lot more in the LMIC intervention context, largely due to its ease of collection
-
—and what would change their minds?
remove 'and what would change their minds' -- this doesn't fit. #implement
-
Unlike WELLBYs, DALYs are based on expert-derived disability weights rather than self-reported wellbeing—weights are constructed through surveys of health professionals rating hypothetical health states.
Are you sure that it's through surveys of health professionals? I thought the surveys were of people in the general population. And this explanation doesn't mention how an individual's DALY is constructed based on asking them about their health states or something. What's the data used?
-
Vignette exercises: respondents rate hypothetical people's life satisfaction based on descriptions, revealing how individuals anchor the scale and enabling cross-person calibration.
Do they actually do this in the paper? Double-check.
-
Calibration questions ask respondents to rate well-defined scenarios (e.g., "How satisfied would you be if you won $1,000?"). By observing how people rate the same reference points, researchers can estimate individual differences in scale use.
Is this a reasonable example? Do they ask questions like that in the exercises mentioned in the paper?
-
Cost-effectiveness estimates vary by an order of magnitude depending on how WELLBYs are valued relative to DALYs.
What's the source for this OOM claim?? Find and link it with a verbatim quote. #implement
Also it's not in our 'evaluation summary' as far as I know.
-
Open Philanthropy
It's now "Coefficient Giving" -- correct this on every page. And hyperlink "https://coefficientgiving.org/research/cost-effectiveness/" here. #implement
-
-
uj-wellbeing-workshop.netlify.app
-
Each scale point represents an equal welfare increment. If violated, summing is invalid and interventions targeting different baselines become incomparable.
David Reinstein -- personally, this is the one I find least plausible and most important.
-
nterpersonal Comparability LSA = 7 ≈ LSB = 7 implies UA ≈ UB When two people report the same score, they experience similar welfare. Scale-use heterogeneity violates this assumption.
I don't think this one is necessary if we can (instead) assume that differences are equivalent. For example, if we assume that person A is actually experiencing higher welfare at all levels of reported score, but the differences between scores are comparable, then for comparing interventions by measured differences in well-being, that shouldn't matter.
I think it could also still be reliable if the distribution is the same between the two populations, even though we don't have specific inter-person comparability between any two compared individuals.
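The constant-offset point can be made concrete with toy numbers (hypothetical reports): if person A's scores sit one point above person B's for the same underlying welfare, level comparisons differ, but the measured changes an intervention produces are identical.

```javascript
// Toy example: A's scale use adds a constant +1 offset relative to B,
// while both experience the same underlying improvement.
const reportA = { before: 5, after: 7 }; // offset scale user
const reportB = { before: 4, after: 6 };

const changeA = reportA.after - reportA.before;
const changeB = reportB.after - reportB.before;
console.log(changeA, changeB); // 2 2 (identical measured change)
```

So difference-based intervention comparisons survive a constant per-person offset, even though raw level comparisons between A and B do not.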
-
equires four implicit assumptions
Give a linked source and citation for this.
-
1 WELLBY = 1-point increase on a 0-10 life satisfaction scale × 1 person × 1 year W = Σi Σt LSit
Those are not clearly defined here, nor the indexing
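A hedged reconstruction of what the quoted aggregation presumably intends; the indexing and the use of a change (rather than a level) are assumptions, and the page should state them explicitly.

```latex
% i indexes persons (i = 1, ..., N); t indexes years (t = 1, ..., T_i).
% \Delta LS_{i,t} is the intervention-attributable change in person i's
% 0-10 life-satisfaction score in year t, relative to the counterfactual.
W \;=\; \sum_{i=1}^{N} \sum_{t=1}^{T_i} \Delta LS_{i,t}
```

Under this reading, one WELLBY is exactly a 1-point increase for 1 person sustained for 1 year, matching the verbal definition quoted above.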
-
-
uj-wellbeing-workshop.netlify.app
-
We'll produce a practitioner-focused summary document, belief elicitation results with confidence intervals, and structured notes.
Change this to "we hope to" and "We will share outputs". -- I can't guarantee right now that we'll get enough input or have bandwidth to produce this. #implement
-
Participants can opt out of recording for specific segments if needed
Add "and we will ask for final approval before posting anything". #implement
-
(Note: QALYs may be more directly comparable than DALYs for this purpose.)
Leave out the QALYs parentheses bit here. Add "(or QALYs)" after "~1 SD in DALYs". #implement
-
scale?
Add "is a move from 1-3 for one person as good as a move from 1-2 for 2 people"? At the end of this paragraph... "even if these don't hold, does the linear WELLBY aggregation yield 'nearly as much value' for decisionmaking as other potential measures"? #adjust #implement
-
Where is the "neutral point" on the scale?
Remind me why the neutral point is important.
-
When comparing a mental health intervention (measured in WELLBYs) to a physical health intervention (measured in DALYs)
Either of these, especially the physical health intervention, could be measured either way. This overstates it a bit. Perhaps, just to give this as an example, suppose there is a case... #adjust #implement
-
but more work is needed.
"more work is needed" -- that's very vague. It would be nice to have at least one specific point suggesting that the difference in scale means potentially matters and merits more study.
-
Each has strengths and limitations—and how they relate to each other, and whether either reliably captures what matters for human welfare, directly affects which interventions get prioritized.
I'm allergic to platitudes. IIRC you should have some notes somewhere providing at least one case where this matters.
-
- Feb 2026
-
daaronr.github.io
-
adversarial manipulation.
I don't think we discussed adversarial manipulation or have any results on it, so I'm a little worried that whatever generated this discussion is doing a sort of generic pandering and putting in what it generally expects to see in papers like this.
-
Our results support AI as structured screening and decision support rather than full automation,
This seems like a sort of milquetoast generic caveat. In what sense is this what our AI results support? This seems a bit pandering.
-
xhibiting consistent failure modes: compressed rating scales, uneven criterion coverage, and variable identification of expert-flagged concerns.
I'm guessing this is a bit premature/too much rounding up a few observations to general conclusions, but let me look at the results a bit more carefully.
-
often approach the ceiling implied by human inter-rater variability on several criteria,
This is interesting and strong. It comes across maybe a little bit overstated, so we just need to be careful about how we're framing this result.
-
high-quality but noisy reference signal
I think this is right, but the term "reference signal" sounds technical in an information theoretic sense, and we want to make sure we're not misapplying it.
-
narrative critiques
Yes, we focus on the critiques here, but the Unjournal evaluations do more than just critique. They discuss, they offer suggestions, implications, et cetera.
-
overing economics and social-science working papers
"covering ... working papers" is mostly accurate but not quite right. We don't cover all working papers, and we have a specific focus on research relevant to global priorities. We can also evaluate post-journal publication, but I'm not sure how to best summarize this in a simple way in the abstract.
The idea of "open evaluation platform" also could be a bit confusing here because it's not mainly about crowdsourcing. Yes, the "paid expert review packages" cover this, but I don't quite think this is worded in the best possible way.
-
Peer review is strained, and AI tools generating referee-like feedback are already adopted by researchers and commercial services—yet field evidence on how reliably frontier LLMs can evaluate research remains scarce.
This is a decent first sentence, although it bears the marks of AI-generated text. But also I'm not sure if it's really in line with our newest spin on this.
-
- Nov 2025
-
llm-uj-research-eval.netlify.app
-
“high” reasoning effort
Not relevant to Pro -- cut this
-
OpenAI Responses API
"Responses" is the newer one (as of 4 Nov 2025)
-
returned file id keyed by path, size, and modification time.
what does this mean? "Keyed by"?
This implies it is kept on the server and won't need a later upload.
-
-
llm-uj-research-eval.netlify.app
-
d the best performance from top reasoning models
Best relative to what? Better than the 'non-top reasoning models'? @valik
-
- Sep 2025
-
llm-uj-research-eval.netlify.app
-
Zhang and Abernethy (2025) propose deploying LLMs as quality checkers to surface critical problems instead of
Is this the only empirical work? I thought there were others underway. Worth our digging into. Fwiw I can do an elicit.org query.
-
but still recommend human oversight.
why? based on some evidence of LLM limitations or risks?
-
emphasize
I'd say 'they argue' instead of 'emphasize'; the latter seems like a statement of absolute truth that we agree with.
-
-
llm-uj-research-eval.netlify.app
-
The population of papers
Should we adjust "the population of papers" to "the reference is", to be more explicit?
-