- Jul 2023
-
forum.effectivealtruism.org
-
For the first time in human history
Very small comment ... 'for the first time in human history' tends to come across as overblown whenever people use it. At least it has that connotation
-
The idea that the quality of a society should be judged by the happiness of its people is an old idea, stretching back at least to the Enlightenment, if not Aristotle.
Your previous "Research Agenda and Context" sent a lot of time defining and arguing for this. I don't see that here (for better or for worse).
-
[This post contains the Happier Lives Institute's research agenda for next 18 month. After a foreword, we give a brief summary of our plans, then go into more depth]
Typo -- '18 month'
-
- May 2023
-
evalresearch.weebly.com
-
Making referee payments or charity donations: Three-quarters of our respondents said that referees would do a better job if they were better rewarded for their effort. Among them, about 75% indicated that referees should be paid for timely completion of the report. This payment could take many forms e.g., a donation to a charity or research fund.
unjournal
-
-
daaronr.github.io
-
OftW pre-giving-tuesday-email upselling split test (considering ‘impact vs emotion’) c
PUT THE TAKEAWAYS HERE!!!
-
-
daaronr.github.io
-
Phase 2: EAMTT – Bringing together and engaging Academics, EA orgs, and marketers
Jack - single biggest barrier (stated) is "you should give where you live" ...
Move people who are somewhat aligned?
Effektiv Spenden ... some people are drawn
Which segment to appeal to?
-
-
willemsleegers.com
-
The output shows we need to set a prior on sigma, the Intercept, and on the male coefficient.
I'm trying to interpret the output. So it's suggesting a 'flat' (uniform?) prior on b male (or also on another b? -- but what is b?) and a Student's t distribution for the intercept and for sigma, maybe with the latter being truncated? Why does it make these particular choices of distributions?
And it doesn't seem to be saying anything about the distribution of the outcome around its mean, correct?
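For reference, a minimal sketch of the kind of call that produces this output (the data and variable names here are hypothetical stand-ins, not the post's):
library(brms)
# Hypothetical data: an outcome regressed on a binary 'male' indicator
dat <- data.frame(outcome = rnorm(100), male = rbinom(100, 1, 0.5))
# Ask brms which priors the model needs and what its defaults are.
# 'class = b' rows are regression slopes (here, the male coefficient); brms leaves
# these flat by default, while the Intercept and sigma get Student-t defaults
# (sigma's is truncated at zero).
get_prior(outcome ~ male, data = dat, family = gaussian())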
-
- Apr 2023
-
charity-elections.netlify.app
-
Post-event (complete) response rates, ratings (1-6), by school
something clearly went wrong here, but I think it's fixed now. Will push the results again
-
-
willemsleegers.com
-
Add
library(readr)
url <- "https://raw.githubusercontent.com/rmcelreath/rethinking/master/data/Howell1.csv"
read_delim(url, delim = ";")
to help people plug and play
-
- Mar 2023
-
www.metacausal.com
-
we can change the question to “What is the probability that this intervention is better than 1x (i.e. cash transfers)?” We can set a critical value for that threshold (e.g. we accept programs that we are 90% sure are better than cash transfers). As above, that value comes straight out of the distribution from our PSA: it’s simply the proportion of outcome results from our PSA-generated distribution which are ≥1.
But this would require some assumptions over the underlying distribution of effectiveness
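As a rough illustration of the decision rule in the quoted passage (a sketch using made-up simulation draws, not the post's actual PSA output):
set.seed(1)
# Stand-in for PSA output: simulated cost-effectiveness multiples relative to cash transfers
psa_draws <- rlnorm(10000, meanlog = log(2), sdlog = 0.8)
# P(intervention beats 1x cash transfers) = share of draws at or above 1
p_better_than_cash <- mean(psa_draws >= 1)
# Accept only if we are at least 90% sure it beats cash transfers
p_better_than_cash >= 0.90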
-
as a preview of what might happen with a full accounting of uncertainty. Code, data, and modified workbook are available.
This seems to be done in the file
sensitivity analysis.R
, pulling parameters from the linked Gsheet:
- Looked for a handful of key parameters pertaining to the overall effectiveness of the program and prevalence of the issue being addressed
- Traced those parameter values back to the original data, and located the statistical sampling uncertainty provided
- Focused on the statistical uncertainty only
-
we see that there is a substantial bias in the programs that we select. Programs that we select have a large positive bias on average.
the standard 'winner's curse'
-
In the first tab (“True vs false rejections”), we see the distribution of programs we accept and reject compared with whether or not they were truly better or worse than cash transfers. As expected, we generate many false rejections, due largely to the decision threshold of 3 being a “hedge” of sorts. More importantly, we observe false positives (i.e. programs that got “lucky”).
- "Selected" contains only where estimated CE > 3
- "Rejected" are all other programs (estimated CE<3)
-
-
-
Once this set of matched communities has been generated, generalised linear mixed models (e.g., multilevel models) will be used to assess changes in outcomes before and after the intervention, at different time periods, while controlling for other variables including whether the area is a control or intervention area.
somewhat non-specific
-
-
forum.effectivealtruism.org
-
All we need to do is change the units of the calculation and see if the result changes because of it. If it does, the calculation violates scale invariance, and for some reason the result depends on the units of measure that are used to calculate it.
that is awesome!
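A toy version of that check (hypothetical numbers; the point is only that a dimensionless comparison survives a change of units while a unit-dependent one does not):
# Two programs, costs in dollars, effects in the same outcome units
cost_a <- 5000; effect_a <- 10
cost_b <- 8000; effect_b <- 20
# Dimensionless comparison: ratio of the two cost-effectiveness ratios
(effect_a / cost_a) / (effect_b / cost_b)                    # 0.8
(effect_a / (cost_a * 100)) / (effect_b / (cost_b * 100))    # still 0.8 with costs in cents
# A calculation like effect minus cost is not scale invariant
effect_a - cost_a
effect_a - cost_a * 100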
-
-
squiggle-language.com
-
Major Future Additions
What about multivariate/correlated distributions? Or is there an easy compositional way to do this that I'm overlooking? Like maybe a 'shared random variable' that feeds into two distributions? But I'm not sure if that can be done in the current system, because ... can the 'draws from one distribution' be carried over as inputs into the 'draws from another distribution'?
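The 'shared random variable' idea from the comment, sketched in R rather than Squiggle (whether Squiggle supports it is exactly the open question):
set.seed(1)
n <- 10000
shared <- rnorm(n)                            # common latent draw
x <- 2 + 1.0 * shared + rnorm(n, sd = 0.5)    # two uncertain quantities that both load on it
y <- 5 + 0.8 * shared + rnorm(n, sd = 0.5)
cor(x, y)       # substantially correlated
hist(x + y)     # downstream calculations then inherit the correlation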
-
Static / sensitivity analysis Guesstimate has Sensitivity analysis that's pretty useful. This could be quite feasible to add, though it will likely require some thinking.
Yes!
-
Right now Squiggle mostly works with probability distributions only, but it should also work smoothly with probabilities.
not sure what this means
-
-
squiggle-language.com
-
Gallery
do any of these allow correlations between the elements we are uncertain about? (I guess in principle, correlated variables could be combined into a distribution of some function of those variables, but that seems like part of the work Squiggle is meant to do)
-
-
squiggle-language.com
-
Some distribution operations (like horizontal shift) return an unnormalized distriibution.
explain what this means
-
distriibution
typo
-
Second argument to SampleSet.fromDist must be a number.
??
-
Recall the three formats of distributions. We can force any distribution into SampleSet format
Don't say 'recall' because this only comes up later!
-
For every point on the x-axis, operate the corresponding points in the y axis of the pdf.
Explain better how this differs from adding the distributions.
A comparison like
uniform(3,4) - uniform(0,1)
vs
uniform(3,4) .- uniform(0,1)
Could be helpful.
Also note the false intuition 'the distribution of the difference between draws from uniform distributions should be uniformly distributed' can be checked by thinking about and plotting
uniform(0,1) - uniform(0,1)
However, that distribution should be triangular, and the simulated distribution in your plot looks somewhat far from this. Why not make that an analytical computation?
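A quick R check of the triangular-shape claim (simulation only; this is not Squiggle code):
set.seed(1)
d <- runif(1e5) - runif(1e5)   # difference of two independent uniform(0,1) draws
hist(d, breaks = 100, freq = FALSE, main = "uniform(0,1) - uniform(0,1)")
# Analytical density: triangular on [-1, 1], peaking at 0
curve(1 - abs(x), from = -1, to = 1, add = TRUE, col = "red")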
-
Pointwise operations are done with PointSetDist internals rather than SampleSetDist internals.
I have no idea what this means
-
TODO: this isn't in the new interpreter/parser yet.
It seems to work in the playground though
-
A projection over a stretched x-axis.
for consistency with the above, you should characterize this mathematically
-
-
squiggle-language.com
-
Samples are converted into PDF shapes automatically using kernel density estimation and an approximated bandwidth. Eventually Squiggle will allow for more specificity.
I thought Kernels can smooth things. Above it seems like a linear interpolation
-
mixture(1,2,normal(5,2)), the first two arguments will get converted into point mass distributions with values at 1 and 2.
and it gives 1/3 mass to each of 1, 2, and the distribution
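A small R sketch of what the comment describes, assuming equal default weights (1/3 each on the point masses at 1 and 2 and on the normal component):
set.seed(1)
n <- 1e5
component <- sample(1:3, n, replace = TRUE)   # equal 1/3 weights
draws <- ifelse(component == 1, 1,
         ifelse(component == 2, 2, rnorm(n, 5, 2)))
mean(draws == 1); mean(draws == 2)            # each roughly 1/3
hist(draws, breaks = 100)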
-
mixture(pointMass(1),pointMass(2),pointMass(5,2)).
this throws an error in the Playground
-
Array of Distributions Input
Not sure what this is doing
-
-
squiggle-language.com
-
Most functions are namespaced under their respective types to keep functionality distinct. Certain popular functions are usable without their namespaces.
not sure what point you are trying to make here. Note the first one crashes the playground
-
For example,
In the playground, the first entry
a = List.upTo(0, 5000) |> SampleSet.fromList
Throws "This page crashed. Minimum discrete weight must be an integer
Try again"
-
Squiggle dictionaries work similarly to Python dictionaries. API.
OK these just store collections of things?
-
-
squiggle-language.com
-
Example
I don't get what these are supposed to do. This snippet throws an error when I try it in the playground:
Error merge is not defined Stack trace: <top> at line 19, column 12
-
-
squiggle-language.com
-
mixture
this needs more documentation perhaps?
-
If both values are above zero, a lognormal distribution is used. If not, a normal distribution is used.
This should be highlighted elsewhere!
-
-
forum.effectivealtruism.org
-
[5].
Not sure these footnotes line up
-
Say that $2B to $20B, or 10x to 100x the amount that Open Philanthropy has already spent, would have a 1 to 10% chance of succeeding at that goal [5].
What is this benchmarked against? If I had said a 1-3% chance or a 10-50% chance, would that have seemed equally plausible?
-
[0]. This number is $138.8 different than the $138.8M given in Open Philanthropy's website, which is probably not up to date with their grants database.
What does this mean? the two 138.8's here suggest a typo
-
For completeness, I do estimate the impacts of a standout intervention as well.
This means for some 'great intervention' ... best in class or something
-
cost = 2B to 20B
Another huge wild guess? But should the cost really vary? Shouldn't this just be done for a particular level of cost?
Also, I guess the prob. of success is likely to be related to the amount spent
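One way to encode that concern in a Monte Carlo BOTEC, sketched in R with made-up numbers (the variable names mirror the post's, but the functional form is purely illustrative):
set.seed(1)
n <- 1e4
cost <- runif(n, 2e9, 20e9)   # $2B to $20B
# Let success probability rise with spending instead of being drawn independently:
# 1% at $2B, up to 10% at $20B
probabilityOfSuccess <- 0.01 + 0.09 * (cost - 2e9) / (20e9 - 2e9)
success <- rbinom(n, 1, probabilityOfSuccess)
cor(cost, probabilityOfSuccess)   # tied to cost by construction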
-
probabilityOfSuccess = 0.01 to 0.1 # 1% to 10%.
Huge wild guess, and probably should be correlated to the acceleration and reduction in prison pop terms?
-
counterfactualAccelerationInYears = 5 to 50
huge wild guess
-
-
www.givewell.org
-
We guess that implementation challenges would limit effectiveness and funding opportunities. As a result, we do not anticipate doing further research on this program in the near future.
What is the model (a VOI model?) for what to focus GW attention on?
-
-
www.givewell.org
-
we estimate that this net distribution will reduce the number of deaths each year within this population from 12 to about 11.4
how does this number 'depend' on the 12.0 used above?
-
Step Four: 12 of those people are expected to die every year of any cause
In order to estimate how many lives these nets might save, we first need to know how many people in this population would have died without the protection of the nets. The mortality rates and population demographics in Guinea suggest that about twelve out of 1,431 people would have died per year of any cause (including malaria).8
It's not clear how this would be included in the equation. Show the equation
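A hedged sketch of the kind of equation being asked for here (a guess at the structure, not GiveWell's actual formula):
$$\text{deaths averted per year} \approx \underbrace{12}_{\text{all-cause deaths}} \times \underbrace{p_{\text{malaria}}}_{\text{share of deaths attributable to malaria}} \times \underbrace{r_{\text{nets}}}_{\text{proportional reduction in malaria deaths from nets}} \approx 12 - 11.4 = 0.6$$
Writing it this way would make explicit how the 0.6 deaths averted depends on the 12 all-cause deaths from Step Four.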
-
-
forum.effectivealtruism.org
-
skeptical that a 4- to 8-week program like StrongMinds would have benefits that persist far beyond a year.
is this a reasonable justification for a skeptical prior?
-
t conclusions.
the table below should be better formatted
-
they seem unintuitive to us and further influence our belief that StrongMinds is less cost-effective than HLI’s estimates.
But this seems a bit overly driven by priors/double-counting
-
-
forum.effectivealtruism.org
-
This post provides an overview and analysis of the Doing Good Better book giveaway through Effective Altruism New Zealand (EANZ). The analysis covers data collected from survey responses between 05-Jan-17 and 17-Dec-19, for which there were a total of 298 responses, with appreciable variance in the amount of the survey which was completed. This analysis was initially completed around Jan 2020 so any reference to "to date" refers to then.
Hypothes.is comments are different -- that's the functionality I was looking for, more or less
-
-
jg-sponsorship.netlify.app
-
We found that Fundraising Pages which received the £5 donation raised £118 more (on average) than the pages in the control group.
We should try to replicate for effective pages
-
- Feb 2023
-
replicats.research.unimelb.edu.au
-
single claim published in a paper to evaluating the credibility of published papers more holistically. In phase 2, which began in 2021, 200 "bushel" papers were evaluated holistically. Participants working in IDEA groups evaluated the seven credibility signals:
I'm a little unclear on what this is. Is there a concise explanation of a 'bushel' or of how this is 'holistic'?
-
-
bookdown.org
-
drop_na(ends_with("_s"))
ends_with("perc")
?
-
-
rethinkpriorities.github.io
-
‘BOTEC’: Back of the envelope calculations are central to RP’s work
relevance: see e.g., this private thread: https://rethinkpriorities.slack.com/archives/C04N8T10XC0/p1675265493647089
-
- Jan 2023
-
www.nber.org
-
A NATIONWIDE TWITTER EXPERIMENT PROMOTING VACCINATION IN INDONESIA
test comment
-
-
www.nber.org
-
SECTOR
test a note
-
-
osf.io
-
some must use IDEA protocol, but most can use a single round of elicitation. What they all have in common is that they mathematically aggregate judgments about the probability of some event, or subjective degrees of belief, into a single value.
Will this work for continuous outcomes like the Unjournal is currently asking for?
-
-
willemsleegers.com
-
Interestingly, this also means that the prior for σ is now dependent on the prior for the slope, because
come back to this, we might be able to put this explicitly into the model
-
This means that the estimate for sigma is the square root of 1 minus the variance of the slope estimate (0.75²). I
Could/should we make this explicitly part of the model, i.e., constrain this?
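A sketch of the algebra behind this, assuming both variables are standardized to variance 1:
$$\operatorname{Var}(y) = \beta^2 \operatorname{Var}(x) + \sigma^2 \;\Rightarrow\; 1 = \beta^2 + \sigma^2 \;\Rightarrow\; \sigma = \sqrt{1 - \beta^2} = \sqrt{1 - 0.75^2} \approx 0.66$$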
-
prior(normal(0, 0.5), class = "b", lb = -1, ub = 1)
seems, with brms, you can set lb and ub on classes but not on individual parameters
-
add_predicted_draws(model_height_weight) %>%
here we draw simulated observations (posterior predictive draws, i.e., including the residual noise)
-
add_epred_draws(model_height_weight) %>%
draws of the expected value (the posterior of the fitted mean, with no residual noise) -- not draws of the slope parameter itself
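A minimal sketch of the distinction, assuming a fitted brms model named model_height_weight as in the post (new_data is a hypothetical data frame to predict for):
library(tidybayes)
library(dplyr)
new_data %>% add_epred_draws(model_height_weight)      # draws of the fitted mean only
new_data %>% add_predicted_draws(model_height_weight)  # draws of new observations (mean + sigma noise)
The predicted draws should therefore be systematically more spread out than the epred draws.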
-
- Dec 2022
-
exploratory-altruism.org
-
CEARCH discovering a Cause X every three years and significantly increasing support for it.
This seems like 'assuming the result' ... why every 3 years?
-
-
willemsleegers.com
-
sample_prior = TRUE,
Does the 'prior predictive simulation' stuff here too
-
The output shows us that we need to set two priors, one for the Intercept and one for sigma. brms also already determined a default prior for each, but we’ll ignore that for now.
It's not clear to me what
get_prior
is doing here, or what its logic is. It would seem to be using the data to suggest priors, which McElreath seems to be against (but the 'empirical Bayes' people seem to like). Of course, it does at least remind you what objects you need to set priors over
-
The prior for the slope is a lot easier now. We can simply specify a normal distribution with a mean of 0 and a standard deviation equal to the size of the effect we deem likely, together with a lower bound of 0 and upper bound of 1.
Update: I was wrong on the below, the SD is not 1 here, because it's the SD for the residual term in the linear model, not the SD for the raw outcome variable.
Previous comment:...
I’m ‘worried’ that if you give it data you know has sigma=1, but you allow it to choose any combination of beta and sigma, you may be getting it to give a weird posterior for both of the parameters, in a way you know can’t make sense, in order to find the most likely parameters for the weird geocentric model you imposed.
On the other hand, I would have thought that it would tend to converge to sigma=1 anyway as the most likely, as that is ‘allowed’ by your model.
My take is that the cauchy prior you impose in that part is heliocentric; well, let me expand on this. I think you know that the true std deviation of the ‘standardized heights from this population’ is 1. What you don’t know is whether it is indeed normal (i.e., whether family = gaussian is right here). Thus it might be finding ‘a sigma far from 1 is likely’ under this model, because that makes your ‘skewed’ or ‘fat-tailed’ data seem more likely under the normal prior. A better approach might be to allow a different distribution with some sort of ‘skew’ parameter, while imposing that the sd must be 1.
-
Apparently our prior was still very uninformed because the posterior shows we can be confident in a much narrower range of slopes!
so here the priors mattered!
-
I increased the adapt_delta, as suggested in the documentation, from .8 to .9.
what does this mean?
-
The Rhat values did not show this was problematic but
what are rHat values and where do we see them?
-
regression model into a simple correlation analysis. That way we can specify a prior on what we think the correlation should be
to me, in this case, with physically interpretable data, it sounds more difficult to consider correlations. The 'small medium large' thing is from psychometrics I believe
-
-
-
Please note, not all rigor criteria are appropriate for all manuscripts.
Sciscore seems to have failed to be meaningful here
-
ScreenIT Sep 27, 2021 SciScore for 10.1101/2021.09.22.461342: (What is this?)Please note, not all rigor criteria are appropriate for all manuscripts.
Can we use any tools like this? E.g., Statcheck.io (for APA/Psych papers)
somewhat important
-
Is the study design appropriate and are the methods used valid? Yes
as noted before, this yes/no tickboxing is generally not optimal for our case. These things are on a spectrum.
-
Some details of the methods are lacking. For example, the MUpro provides two methods, it is necessary to specify which method was used in the analysis. The confidence score of each prediction should also be provided. Besides, some results from I-Mutant and MUpro were conflicting, the authors may want to discuss the discrepancy.
again, the markdown numbering is failing here
-
Discussion, revision and decision Discussion and Revision Author response We would like to thank the reviewers for their valuable comments. Below we provide pointwise response and the changes made in the revised manuscript. To Dr. Jyotsnamayee Sabat
Nice, but
- I'd like to be able to see this full screen
- A heading/table of contents would be very helpful here
fairly important
-
PeerRef Dec 15, 2021 Discussion, revision and decision
I would hope we could replace 'decision' with 'ratings and predictions' or something ... and make those ratings prominent
important
-
Author response
The 'order by recency' is good but sometimes limiting. I think readers would probably prefer to see the 'major comments and discussion' first, before the specific detailed small comments and clarification questions.
important
-
Nov 26, 2021 Peer review report
Reviewer: Hurng-Yi Wang
Institution: Institute of Ecology and Evolutionary Biology, National Taiwan University
email: hurngyi@gmail.com
Section 1 – Serious concerns
Do you have any serious concerns about the manuscript such as fraud, plagiarism, unethical or unsafe practices? No
Have authors’ provided the necessary ethics approval (from authors’ institution or an ethics committee)? not applicable
Section 2 – Language quality
How would you rate the English language quality? Medium quality
Section 3 – validity and reproducibility
Does the work cite relevant and sufficient literature? No
Is the study design appropriate and are the methods used valid? No
Are the methods documented and analysis provided so that the study can be replicated? Yes
Is the source data that underlies the result available so that the study can be …
Nice. Is there a way we could put this at the top, or make a quick link to it?
Ideally, this would have the ratings/rankings/predictions show up first on the page, as some sort of table (and also metadata if we dare to dream),
important
-
Read the original source
This is a bit misleading here. The 'original source' is basically the same stream of text
-
I agree to change to Verified manuscript.
what does this mean?
-
and are shown below.
these are not shown below. Are graphics possible here? Obviously a direct hyperlink to the revised section of the paper would be convenient here
-
We would like to thank the reviewers for their valuable comments. Below we provide pointwise response and the changes made in the revised manuscript.
@gavin @ annabell -- this might read better if each comment quickly linked to the section of the hosted paper and/or the comments were inserted in that part of the hosted paper with hypothes.is
-
Pt-12:
what do the prefixes like
PT-12
mean here? I guess it's the reviewer number?
-
The “Analysis of the Mutational Profile of Indian Isolates” should be moved to Materials and Methods.
The markdown numbering failed here!
-
Read the full article
I clicked this link, and it is not coming up, or it's very slow
-
Article activity feed Version 2 published on bioRxiv
having trouble interpreting this. The linked version was published on Bioarxiv after the PeerRef? So which version was evaluated?
OK, I guess the post-PeerRef version is published above ... so this is going from 'newest to oldest'. Maybe there's a way to make that clearer to someone visiting the page for the first time
-
AgarwalNita Parekh
why a 'full stop' (period) here after authors' names?
-
Abstract
abstract of which version?
-
In this study we carried out the early distribution of clades and subclades state-wise based on shared mutations in Indian SARS-CoV-2 isolates collected (27 th Jan – 27 th May 2020). Phylogenetic analysis of these isolates indicates multiple independent sources of introduction of the virus in the country, while principal component analysis revealed some state-specific clusters. It is observed that clade 20A defining mutations C241T (ORF1ab: 5’ UTR), C3037T (ORF1ab: F924F), C14408T (ORF1ab: P4715L), and A23403G (S: D614G) are predominant in Indian isolates during this period. Higher number of coronavirus cases were observed in certain states, viz ., Delhi, Tamil Nadu, and Telangana. Genetic analysis of isolates from these states revealed a cluster with shared mutations, C6312A (ORF1ab: T2016K), C13730T (ORF1ab: A4489V), C23929T, and C28311T (N: P13L). Analysis of region-specific shared mutations carried out to understand the large number of deaths in Gujarat and Maharashtra identified shared mutations defining subclade, I/GJ-20A (C18877T, C22444T, G25563T (ORF3a: H57Q), C26735T, C28854T (N: S194L), C2836T) in Gujarat and two sets of co-occurring mutations C313T, C5700A (ORF1ab: A1812D) and A29827T, G29830T in Maharashtra. From the genetic analysis of mutation spectra of Indian isolates, the insights gained in its transmission, geographic distribution, containment, and impact are discussed.
I really don't like this font, finding it very hard to read, but that's probably a taste thing. Still, I'd like if we could use a font that 'looks more like a journal'.
-
Pt-13: I want to know how the representative sequences were selected for different states. Is it based on no. of sequences submitted or positivity rate of a particular region? All the Indian isolates available in GISAID for the period 27th Jan – 27th May 2020 were download and considered for analysis. NO state-wise selection was done.
these authors seem to have used quotation the opposite way I would have done. I would have done:
reviewer's comment here
My response here (unquoted)
-
Demographic Analysis of Mutations in Indian SARS-CoV-2 Isolates
would be nice to have keywords up top
-
Demographic Analysis of Mutations in Indian SARS-CoV-2 Isolates
Commenting on the format here
-
-
docs.google.com
-
What cause area(s) is/are you interested in working in if there was a role or project that was a good fit? (select all that apply)
In the view I'm seeing here, the list is very vertically long. Maybe a way to have fewer spaces or 2 columns for less scrolling?
If you are trying in general to learn from this rather than about specific people, you might have the survey tool randomise the list order
-
What type of role(s) would you be interested in working in? (select all that apply)
where does this list come from? 'Research' is rather vague
-
What obstacles are holding you back from changing roles or co-founding a new project? (select all that apply)
What is the purpose of this question? It seems like you are suggesting things they might not have thought of here.
-
-
-
Add the SurveyMonkey account’s OAuth token to your .Rprofile file. To open and edit that file, run usethis::edit_r_profile(),
For me this opened up some other profile. Maybe because I'm working in RStudio with a Quarto project?
When I just opened the .Rprofile file listed at the root of my repo and where the .Rproj is stored, it worked
-
-
psyarxiv.com
-
desire to “pay it forward” for other donors by supporting the matching fund after receiving matching funds. This possibility may be explored in future research. About a third of donors were willing to support the matching fund with some or all of their donation. This provided enough matching funds to cover the matching funds received by donors, making the micro-matching system self-sustaining. Despite a long history of altruism, including centuries of organized philanthropy, humans have only recently attempted to systematically measure the cost-effectiveness of altruistic endeavors with the goal of doing as much good as possible10,11. The effective altruism movement is growing and has been notably successful in securing large commitments from relatively few people32,33. Effective altruism’s potential for more widespread adoption is unknown. The seven studies and proof-of-concept demonstration presented here are cause for optimism, grounded in a more detailed understanding of altruistic motivation. Today, relatively few donors prioritize effectiveness. But our results suggest that effective giving can be a satisfying complement to giving based on personal feelings, adding a “competence glow”27 to the proverbial “warm glow” of giving. Some donors are willing to incentivize bundle donations in others, promoting a chain of giving that is both personally meaningful and effective. The stakes are high, as ordinary people have the power to do enormous good. The limited proof-of-concept demonstration reported on here raised funds sufficient to provide 100,700 deworming treatments and 17,500 malaria nets, among other benefits. (See Supplementary Materials.) A better understanding of moral motivation, and how to channel it, could dramatically increase the impact of human altruism.
Methods
All reported studies, including the final proof of concept, were pre-registered, except for Study 7 (which was a pre-test for the proof of concept). For more detailed descriptions of the methods and results, please refer to our Supplementary Materials available at https://osf.io/zu6j8/?view_only=28050663bd6b4b5cae0a73ad8068bc34. Across Studies 1-7 w
I see the code and data here, but I can't find the study materials
-
- Nov 2022
-
willemsleegers.com
-
add_predicted_draws
not sure I get the syntax here. Why is this called
add_predicted_draws
?
-
showed us the posterior distributions of the two parameters
I think you plotted the 'marginal posteriors' for each (for each case, averaging over the posterior for the other). Technically, there is a joint posterior distribution, which you could plot as in those heatmap plots in Kurz.
-
Apparently our posterior estimate for the Intercept is 154.63
They call it an 'estimate' in the code but that seems like terminology McElreath would disagree with. That's the maximum a-posteriori value ... but the estimate is the distribution (actually, the joint distribution of the parameters)
-
Here we see that the posterior distribution
would be interesting to plot this for 2 different posteriors
-
Notice that we sample from the prior so we can not only visualize our posterior later, but also the priors we have just defined.
not sure what this means. Also what does 'run the model' mean? Calculate a posterior? With which approach?
-
whether the chains look good.
what 'chains'? And what does 'look good' mean?
-
So, our priors result in a normal distribution of heights
how do you see it's normal?
-
model_height_prior <- brm(
repeated code after normalization. Maybe save as 2 separate versions to compare?
-
- brms’ default and my own
these both seem to allow negative values for sigma. These don't seem right -- aren't you supposed to do something that implies a strictly positive distribution, like letting the log of sigma be normally distributed?
(I think they are only positive in the plot because you cut off the x axis)
Maybe the brms procedure below fixes this in a mechanical way because it sees class = "sigma" ... but I'm not sure how
-
sample_prior = "only",
what is this doing?
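For context, a minimal sketch of what this option does in brms: with sample_prior = "only" the likelihood is ignored and the draws come from the priors alone, i.e., a prior predictive simulation. The data frame name and the priors below are illustrative stand-ins, not the post's exact code:
library(brms)
model_height_prior <- brm(
  height ~ 1,
  data = height_data,            # hypothetical data frame; it supplies structure, not inference
  family = gaussian,
  prior = c(
    prior(normal(170, 10), class = "Intercept"),
    prior(cauchy(0, 10), class = "sigma")
  ),
  sample_prior = "only"          # sample parameters from the priors, ignoring the data
)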
-
file = "models/model_height_prior.rds"
what is this saving?
-
family = gaussian,
what does
family = gaussian
do here, over and above the specified priors?
-
we can simulate what we believe the data to be
I wouldn't say 'believe the data to be'. We have the data (at least the sample). We are simulating what we believe the population this is drawn from looks like
-
The sigma prior
'prior over sigma' -- 'sigma prior' makes it sound like it's a sigma distribution (if that's a thing) rather than a distribution over sigma
-
We’re not super uncertain about people’s heights, though, so let’s use a normal distribution.
uncertainty could be expressed in terms of a greater sigma (std deviation) also. So this isn't exactly uncertainty, but something about the shape of the distribution, the amount of tail-uncertainty for a given level of uncertainty closer to the mean
-
But this is the default prior. brms determines this automatic prior by peeking at the data, which is not what we want to do. Instead, we should create our own.
but what is its justification for doing so? The people who designed it must have had something in mind.
-
parameter refers to, well, sigma
the standard deviation of heights around the mean
-
we should start with defining our beliefs, rather than immediately jumping into running an analysis.
Slight quibble: It doesn't have to be 'our own beliefs' but just 'what you think a reasonable starting belief would be', or 'what you think others would accept'.
This relates to the discussion of 'epistemological assumption', I believe.
It could also be a belief (distribution) based on prior data and evidence
-
a null effect is
I would say 'a small or zero effect'
-
-
-
predicts costly intergroup behavior
has this paper been peer-reviewed?
-
-
-
The time we can expect to wait between events is a decaying exponential.
$$P(T>t) = e^{-\lambda t}, \quad \lambda = \text{events per unit time}$$
-
This is what “5 expected events” means! The most likely number of meteors is 5, the rate parameter of the distribution.
I think that's the mode -- does it coincide with the mean here?
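For reference, a standard fact about the Poisson (not from the post): with rate $\lambda$,
$$P(N = k) = \frac{\lambda^k e^{-\lambda}}{k!}, \qquad E[N] = \lambda, \qquad \text{mode}(N) = \lfloor \lambda \rfloor \;(\text{tied with } \lambda - 1 \text{ when } \lambda \text{ is an integer})$$
so with $\lambda = 5$ the most likely count is 5 (tied with 4) and the mean is also 5.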
-
-
outreach-handbook.notion.site
-
Include a field to add data from the “How did you hear about EA UNI NAME” question
what does this mean?
-
Attrition rate = % of fellows who do not complete the fellowship, assuming that completion of a fellowship is defined as attending at least 6 of 8 meetings
I know they track this for the virtual fellowships
-
The form will also include some formal definition for each category. What should this be?
maybe consult EA survey on this?
-
Completed Fellowship, highly engaged
may be challenging for them to classify this?
-
This form will consist of a list of all Fellows who filled out the “How you heard about us” question. Each organizer will be prompted to label each fellow as one of the following:
super important!
-
Post-Fellowship Engagement Form:
make the pre/post distinction clearer in the intro
-
using email automation
are we automating emails to faculty? That seems possibly problematic
-
It may not be worth tracking at all.
Why not?
-
maybe) Other variables which we might be interested in (Group age, # of organizers, existing group size, etc.).
This seems important -- identifying 'outcomes' and tracking them
-
Fellowship application, and regularly track fellowship attendance for every Fellow.
Can we clearly define or link which 'fellowships' we mean here? Can people be in these groups without doing the fellowship?
-
to be sent 8/30
Starting in 2023?
-
our base
The database will be an Airtable, I guess?
-
Participating groups will fill
Who in the group will have this responsibility?
-
outreach data
Define 'outreach'/'outreach data'?
-
-
adv-r.hadley.nz
-
integrate(function(x) sin(x) ^ 2, 0, pi)
Numerical integration?
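Yes -- base R's integrate() does one-dimensional numerical integration. A quick check against a known integral:
# The integral of sin(x)^2 from 0 to pi is pi/2, about 1.5708
integrate(function(x) sin(x)^2, lower = 0, upper = pi)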
-
-
www.givingwhatwecan.org
-
We Can community
Just testing annotation
-
- Oct 2022
-
bookdown.org
-
Apply the quadratic approximation to the globe tossing data with rethinking::map().
Here they actually use the Rethinking package instead of brms. Why?
-
- Sep 2022
-
forum.effectivealtruism.org
-
Why hasn’t such a movement for something like Guided Consumption happened already? Because Guiding Companies, by definition, generate profit for charities instead of traditional investors, a major issue they face is that Guiding Companies cannot access the same investment pool of private equity and angel investors for seed money. One solution to this would be to seek financing from philanthropists, particularly those who are looking to spend their money to advance the same cause area as the Guiding Company. However, the question remains: if Guided Consumption is a more effective means of funding charities than direct donation, why has this not been more fully explored already? I suspect that the reason stems from a deep-seated psychological separation between the way that people think about the business world, essentially a rather competitive, dog-eat-dog mindset and the kinder, more magnanimous mindset involved in charity work. The notion also seems to violate intuitions about sacrifice involved in charitable contributions, although these intuitions do not hold with the deliberate substitution of traditional stakeholders for charities. I would also note that some further red-teaming can be found in the comments of the longer paper.
These are good points.
-
But even if Guiding Companies engage in activities that consumers take issue with regarding traditional firms, such as competitive (i.e., princely) compensation for CEOs, it is not clear why this would cause a consumer to choose a company that enriches shareholder over a company that helps fight global poverty.
But the pressure not to do this might make the GC's less efficient and thus more expensive
-
What if selfish motivations make for the best founders/investors/etc.? The efforts of philanthropic investors are cap
I think this is a big issue and you are not taking it seriously enough. Without profit motives, it may be hard for these companies to stay efficient and well, profitable. Who is 'holding the CEOs' feet to the fire?' At least the conventional wisdom is that altruistically motivated leaders are less hard-headed, less efficient, etc.
-
the public
I feel like this already exists enough with Newman's Own etc. I think we should try to focus on GH&D here and maybe some Global Catastrophic Risk prevention public goods. Animal causes: maybe, but only to some demos/products (like vegetarian stuff).
-
Which Market Sectors?
I also suggest market sectors where there is some reluctance/repugnance to buying the product or service. The charity aspect will allow some moral licensing. E.g., I forget which charity allowed people to donate in exchange for cutting-in-line at some festival.
-
low-differentiation sectors, it may be easier to construct a “no-brainer” where a consumer is genuinely ambivalent as to two product
But are there substantial profits to be had by newcomers in such sectors? The profit margins may be low for such commonplace undifferentiated sectors.
-
Another approach is to capitalize on virtue-signaling, perhaps through products that could enable a consumer to conspicuously show that they bought through a Guiding Company.
I strongly agree with this. More conspicuous consumption.
-
A movement that enables everyday people to help charities without sacrificing anything personally should be much easier than one that demands people give significant things up or even mildly inconveniences people.
But can we really quantify the benefit?
-
Charities already hold shares of companies
-
People already do consider the owners of companies (usually through a political lens ... e.g., "Home Depot owner supports right wing causes so people boycott" or some such)
-
How much more will shopping at a "Guided Consumption owned company" actually lead to more going to the charities?
-
Will people (over)compensate for this by reducing donations elsewhere?
-
If the big companies are differentiated in some way (as 'monopolistic competition' suggests), there could be a substantial cost to consumers (and to efficiency) in choosing the 'charity supporting brand'
-
-
I am optimistic about the prospects for a movement developing because of what it allows for consumers: they get the same product, at the same price, but profits benefit charities rather than shareholders.
I think you said this already
-
to be the most powerful, would likely require a social movement
why does it 'need a social movement'? That doesn't seem clear to me. It seems like it would benefit from one... but.
-
although a Guiding Company would likely enjoy a degree of advantage correspondent with a Guiding Company being able to communicate this feature with its customer base.
not sure why this is an 'although'?
-
the identity of the entities that benefit from your purchase, often, owners in some form.
Not entirely true. A lot of companies (e.g., Big Y) advertise themselves as 'American owned'
-
. This is because charities are more popular than normal investors and
that's not exactly what the study says, but it's close
-
would have a competitive advantage
"Would have" seems too strong. There are reasons to imagine an advantage and other reasons to imagine a disadvantage. I think EA forum prefers 'humble' epistemic statements
-
- Aug 2022
-
rethinkpriorities.github.io
-
+ (1|reader)
Richard: 2 reasons
1. I get this pooling/regularization effect
2. "I don't really care about reader" so ???
If reader were orthogonal to everything else I might still put it in because of the unbiasedness 'in a low dimensional setting' (DR: sort of thought it goes the opposite way)
If I do an idealized RCT with things I changed in exactly the same way I would not get overfitting. I might get error, but not overfitting.
-
Thinking by analogy to a Bayesian approach, what does it mean that we assume the intercept is a “random deviations drawn from a distribution”? Isn’t that what we always assume, for each parameter in a Bayesian model … so then, what would it mean for a Bayesian model to have a fixed (vs random) coefficient?
With Bayesian mixed models you are putting priors on every coefficient, true.
But also you have an (?additional) random effect ... somewhat more structure.
Also in LMER stuff we never update to a posterior
-
Why wouldn’t we want all our parameters to be random effects? Why include any fixed effects … considering general ideas of overfitting and effects as draws from larger distributions?
- analogy to existing examples of fields of wheat
- or build a nested model and look for sensitivity
-
How distinct is this from the ‘regularization with cross-validation’ that we see in Machine learning approaches? E.g., I could do a ridge model where I allow only the coefficient on reader to be regularized; this also leads to the same sort of ‘shrinkage’ … so what’s the difference?
Richard: The L1/L2 E-net approach does something mechanical ... also it can handle a lot of high-dimensional stuff, quick and dirty
RE requires more thinking and more structure
How to think about this: "Does this line up with the canonical problems involving fields etc.?"
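A sketch of the comparison being discussed (hypothetical data; lme4 partially pools the reader intercepts, while glmnet's penalty.factor lets you ridge-penalize only the reader dummies):
library(lme4)
library(glmnet)
set.seed(1)
d <- data.frame(
  y = rnorm(200),
  x = rnorm(200),
  reader = factor(sample(letters[1:10], 200, replace = TRUE))
)
# Mixed model: reader intercepts are shrunk toward the grand mean (partial pooling)
m_re <- lmer(y ~ x + (1 | reader), data = d)
# Ridge analogue: penalize only the reader dummies, leave x unpenalized
X <- model.matrix(~ x + reader, data = d)[, -1]
pf <- ifelse(grepl("^reader", colnames(X)), 1, 0)
m_ridge <- cv.glmnet(X, d$y, alpha = 0, penalty.factor = pf)
Both shrink the reader effects; the mixed model estimates the amount of shrinkage from the between-reader variance, while ridge picks it by cross-validation.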
-
-
rstudio-pubs-static.s3.amazonaws.com
-
## Correlation of Fixed Effects:
not sure how to interpret this
-
Groups Name Variance Std.Dev. Corr
## Chick (Intercept) 103.61 10.179
## Time 10.01 3.165 -0.99
## Residual 163.36 12.781
## Number of obs: 578, groups: Chick, 50
note the coefficients are not reported, just the dispersion
-
-
tjmahr.github.io
-
prior_covariance
what is the prior for the slopes?
-
higher confidence region.
higher confidence in what sense?
-
“randomly varying” or “random effects”.
but isn't this what Bayesians assume of every parameter?
-
This model assumes that each participant’s individual intercept and slope parameters are deviations from this average, and these random deviations drawn from a distribution of possible intercept and slope parameters.
presumably normally distributed ... or at least with more mass in the center
-
It’s the fixed effects estimate, the center of gravity in the last plot.
this term is confusing for econometricians
-
-
stats.stackexchange.com
-
and can give rise to subtle biases that require considerable sophistication to avoid.)
I'm not sure the link refers to the same sort of 'random effects' technique, so the bias discussed there may not apply
-
-
lindeloev.github.io
-
I’ll introduce ranks in a minute. For now, notice that the correlation coefficient of the linear model is identical to a “real” Pearson correlation, but p-values are an approximation which is appropriate for samples greater than N=10 and almost perfect when N > 20.
this paragraph needs clarification. the coefficient on which linear model?
-
correlation coefficient of the linear model
what is the 'correlation coefficient of the linear model'? It's a transformation of the slope
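For reference, the standard relationship for a simple regression of y on x:
$$r_{xy} = \hat{\beta}_1 \, \frac{s_x}{s_y}$$
so with both variables standardized, the slope and the Pearson correlation coincide.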
-
t-tests, lm, etc., is simply to find the numbers that best predict y.
I don't think t-tests estimate slopes or predict anything
-
- Jul 2022
-
link.springer.com
-
under the assumption that $\mathcal{H}_1$ is true, the associated credible interval for the test-relevant parameter provides a range that contains 95% of the posterior mass.
I don't get the 'under the assumption that H1 is true' in this sentence. Isn't this true of the credible interval in any case?
-
The Bayes factor (e.g., Etz and Wagenmakers 2017; Haldane 1932; Jeffreys 1939; Kass et al. 1995; Wrinch and Jeffreys 1921) reflects the relative predictive adequacy of two competing models or hypotheses, say $\mathcal{H}_0$ (which postulates the absence of the test-relevant parameter) and $\mathcal{H}_1$ (which postulates the presence of the test-relevant parameter).
Bayes Factor is critiqued (datacolada?) because the 'presence of the parameter' involves an arbitrary choice of distribution of what values the parameter would have 'if it were present'.
And sometimes the H0 is deemed more likely even when the observed parameter 'estimate' falls in this range.
-
a $100 \times (1-\alpha)$% confidence interval contains only those parameter values that would not be rejected if they were subjected to a null-hypothesis test with level $\alpha$.
With the null hypothesis equal to the point estimate, I think, not a 0 null hypothesis
-
-
willemsleegers.github.io
-
They found that only 70% of their large (20:1) samples produced correct solutions, leading them to conclude that a 20:1 participant to item ratio produces error rates well above the field standard alpha = .05 level.
really confused here ... what is the 'gold standard' ... how do they know what is a 'correct solution'? Also, how does this fit into a NHST framework?
-
f course, the participant to item ratio is not a good benchmark for the appropriate sample size, so this is not enough to demonstrate that the sample size is insufficient. They did find support that this is not enough by sampling data of various sample sizes from a large data s
rephrase?
-
thumb only involve
'typically only' ... but they could be better
-
participants to item ratios of 10:1
you haven't yet defined this concept
-
Costello & Osborne (2005).
Link goes to a different reference. Also (small point), the names should be within the parentheses
-
Costello & Osborne (2005).
Link goes to a different reference. Also (small point), the names should be within the parentheses
-
the number of underlying factors and which items load on which factor
Would be good to link or define what terms like 'factors' and 'load' mean here
-
- Jun 2022
-
willemsleegers.github.io
-
In other words, the goal is to explore the data and reduce the number of variables.
That's not 'in other words', it's different. "Reduce the number of variables" can be done in many ways and for different reasons. Latent factors are (I think) something with a specific meaning in psychology and this sort of structural analysis in general.
-
is to study latent factors that underlie responses to a larger number of items
But what are 'factors'?
-
-
www.openscholar.org.uk
-
The module also comes with a reviewer reputation system based on the assessment of reviews themselves, both by the community of users and by other peer reviewers. This allows a sophisticated scaling of the importance of each review on the overall assessment of a research work, based on the reputation of the reviewer.
This seems promising!
-
By transparent we mean that the identity of the reviewers is disclosed to the authors and to the public
Not sure this is good. I worry about flattery and avoiding public criticism.
-
Digital research works hosted in these repositories can thus be evaluated by an unlimited number of peers that offer not only a qualitative assessment in the form of text, but also quantitative measures that are used to build the work’s reputation.
but who will finance and coordinate this?
-
One important element still missing from open access repositories, however, is a quantitative assessment of the hosted research items that will facilitate the process of selecting the most relevant and distinguished content.
What we've been saying
-
- May 2022
-
wyclif.substack.com
-
Einstein scooped Hilbert by a few days at most in producing general relativity. In that sense
interesting, I didn't know about this
-
EA focuses on two kinds of moral issue. The first is effective action in the here and now — maximising the bang for your charitable buck. The second is the very long run: controlling artificial general intelligence (AGI), or colonizing other planets so that humanity doesn’t keep all its eggs in one basket.
good summary
-
this issue in acute form
wait, which issue? He realized that achieving his moral objectives wouldn't make him feel happy. Did that change what he felt he should do or his sense of moral obligation?
-
duty
not sure it's always framed as a 'duty'
-
(You barge past me, about my lawful business, on your mission of mercy. “Out of the way! Your utility has already been included in my decision calculus!” Oh yeah, pal? Can I see your working?)
good analogy
-
Another reason is just that other people’s concerns, right or wrong, deserve listening to.
Is this related to the 'moral uncertainty' and 'moral hedging' ... or is this a fairness/justice argument?
-
Utilitarianism is an outgrowth of Christianity.1
This is a really big claim to make here ... needs more support. It kind of goes against religion in that it sets no 'thou shalt not's ... at least the act utilitarianism
-
Faced with questions of the infinite future, it swiftly devolves into fun with maths.
good point
-
What will motivate you if you don’t change the world? Can you be satisfied doing a little? Cultivating the mental independence to work without appreciation, and the willpower to keep pedalling your bike, might be a valuable investment. Ambition is like an oxidizer. It gets things going, and can create loud explosions, but without fuel to consume, it burns out. It also helps to know what you actually want. “To maximize the welfare of all future generations” may not be the true answer.
I'm not 100% sure what you are saying/suggesting here. Maybe this ends less strongly than it began? What is the 'fuel to consume' you are getting at here? What should it be?
-
Just by arithmetic, only few will succeed.
but if each has an independent probability of succeeding, each may still have a large impact in expected value.
-
Here’s a more general claim: the more local the issue, the less substitutable people are. Many people are working on the great needs of the world.
This is possibly true, in some cases, for research work but probably not true for donations. If you donate $4000, lots more children get malaria nets or pills, fewer get severely ill, and on average one fewer child dies ... relative to your not having made that donation.
-
net contribution
what do you mean by 'net contribution'? There's a lot of discussion in the donations side of EA about making a counterfactual impact. They focus on the marginal difference you make in the world relative to not having done/donated this. If, absent your donation to Malaria Consortium just as many people would have gotten ill from malaria (because someone else would have stepped in) this would be counted as a 0. So this is already baked in.
-
Department’s
capital letters?
-
My marginal contribution would be small.
I think you (DHJ) could possibly make a big contribution. BUT what does this have to do with this essay? What is the point you are making here?
-
But enough other people are thinking about it already. I trust them to work it out. My marginal contribution would be small.
Relative to other things and relative to the magnitude of the problem people claim this is ... few people are working on it, it's seen to be neglected.
-
After a visit to Lesswrong, someone will probably associate EA more with preventing bad AIs than with expanding access to clean drinking water.
But LW is not EA ... see https://www.lesswrong.com/posts/bJ2haLkcGeLtTWaD5/welcome-to-lesswrong ... doesn't mention EA.
Also, I think most people who have heard of it still associate EA with the latter, and with the Giving What We Can 10% pledge (we actually have data on this) . Even though the most active EAs are in fact prioritizing longtermism these days.
-
which makes them a clear badge of identity
True. This might be why LT-ism is so ascendant in EA especially at university groups
-