951 Matching Annotations
  1. May 2022
    1. Contributing to the first topic requires discipline.

      Contributing research-wise and through your career, that is. You can always donate to the non-longtermist stuff, and that was at the core of the first-gen EA. And the whole GiveWell thing is about making it easy to know your gift makes the biggest difference per $

    2. More might come out of it than from all your earnest strivings.

      the implication here is that Gauguin's work contributed a lot. I guess?

    3. It’s questionable whether you even have the right.

      interesting ... maybe elaborate? "Right" in what sense?

    4. and admirable practices like steelmanning.

      yes, but are you doing this in the current essay?

    5. They contain contradictions. That makes them rich enough to cope with human life, which is contradictory.

      I think some examples in footnotes or links or a vignette would help here. Because I sort of feel like "no, the old religions really struggle to cope with modern life"

    6. g and pushed it to be more than this

      the link is long ... which bit should I read?

    7. the Axial religions

      I've never heard of this term

    8. moral demands

      also not sure it "imposes demands" ... it just suggests "this would be the best way to behave" I guess

    9. Either someone is maximizing utility, or they’re in the way.

      hmm, who said 'they're in the way'?

      Also, 'max util' is confusing here ... because in economics we think of it as maximizing our "personal" utility function. Maybe a distinction needs to be made at some point to make it clear that this is some weighted adding up.

    10. it imposes extreme moral demands on people. Sell all your possessions and give them to the poor.

      Not sure who this is. EA doesn't really ask this. The push now is ~try to find more effective/impactful careers. And even the well-known GWWC was 'only' advocating a 10% donation rate, and even that 'not for everyone'

    11. Utilitarians reduce all concerns to maximising utility. They can’t be swayed by argument, except about how to maximise utility. This makes them slightly like paperclip maximisers themselves.

      My guess is that the way you make your claim here might be seen as not Scout mindset, not fully reasoning transparent, not arguing in a generous spirit. It's not how people want to discourse on the EA forum anyways; not sure if effectiveideas would be ok with it or not.

      Would utilitarians say "we reduce all concerns to maximising utility"? If so, give a link/evidence to this statement.

    12. hey recycle humans into paperclips

      you definitely need a link here (again, not for EA's but if this is outreach people will be like WTF)

    13. AIs which

      Do all your readers or the ones you are reaching out to know what "AI's" means? All the EA readers will, but still...

    14. The Effective Altruism

      wrong link, perhaps? Lesswrong is not EA per se if I understand, it's Rationalist ... certainly adjacent to EA though. I might link https://www.centreforeffectivealtruism.org/ or https://forum.effectivealtruism.org/ as the canonical EA link

    15. and turns them into numbers.

      links/references here could help to avoid straw man accusations

    16. as preferences,

      I'm not 100% sure that all schools of utilitarianism treat it as choice or preference based. That is familiar from economics, but I think other schools consider things like 'intensity of pleasure and pain'?

    1. Demonstrate the potential for impact via thought experiments like The Drowning Child (although use this sparingly and cautiously, as people can be turned off by obligation framing).

      I think people are also sometimes turned off or disturbed by having to make these difficult Sophie's choices

  2. Apr 2022
    1. 13.1.1 Other lists and categorizations of tools

      No third-level numbering.

      Could also use some updating ... and leveraging what they have done

    2. Approaches to overcoming the barriers and biases discussed in previous chapters.

      give bolded titles to these categories and link to sections below

    1. The first intervention, surgical treatment, can’t even be seen on this scale, because it has such a small impact relative to other interventions. And the best strategy, educating high-risk groups, is estimated to be 1,400 times better than that. (It’s possible that these estimates might be inaccurate, or might not capture all of the relevant effects. But it seems likely that there are still big differences between interventions.)

      Some rigor might be helpful here

    1. I created R functions for TOST for independent t-tests, paired samples t-tests, and correlation

      what about binary outcomes?
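      One way TOST could extend to a binary outcome (a sketch of my own, not one of the functions described here) is two one-sided z-tests on the difference in proportions:

        # Hedged sketch: TOST for two independent proportions via normal approximation.
        # H0: |p1 - p2| >= delta  vs  H1: |p1 - p2| < delta (equivalence bound delta)
        tost_two_prop <- function(x1, n1, x2, n2, delta) {
          p1 <- x1 / n1; p2 <- x2 / n2
          se <- sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # unpooled SE
          z_low  <- (p1 - p2 + delta) / se   # tests H0: p1 - p2 <= -delta
          z_high <- (p1 - p2 - delta) / se   # tests H0: p1 - p2 >= +delta
          max(1 - pnorm(z_low), pnorm(z_high))  # TOST p-value; equivalence if < alpha
        }
        tost_two_prop(45, 100, 50, 100, delta = 0.10)  # made-up counts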

    2. So my functions don’t require access to the raw data

      there's always a workaround to regenerate the 'equivalent raw data'... but it's annoying

    3. while we are more used to think in terms of standardized effect sizes (correlations or Cohen’s d).

      not so much in Economics, where we often focus on 'non-unitless' outcomes that have a context-specific interpretation

    4. After choosing the SESOI, you can even design your study to have sufficient power to reject the presence of a meaningful effect.

      this

    1. your data set (e.g., means, standard deviations, correlations

      your expected data set.

      But simulation also allows you to use prior data... maybe worth mentioning?

    2. think they should power their study, rather than the set of analyses they will conduct

      can you clarify that a bit?

    1. This section was written by David Reinstein and Luke Arundel.

      Annabel Rayner also contributed/is contributing

    2. An example of this is presented in the study by Lichtenstein et al. (1978).

      seminal paper; if we want to dig into the evidence we should look at replication, review, post-replication crisis work.

    3. Experiential distance describes one’s proximity to a particular situation or feeling as a result of having seen something or been a part of it. In particular, a greater experiential distance is seen to make it more difficult to imagine a particular situation or feeling, therefore making it harder to empathize with someone going through it. This could create a barrier to giving effectively as most charitable giving is motivated by empathy and sympathy for victims (Lowenstein & Small, 2007). Experiential distance explains why it is easier for an individual to feel empathy for a victim if they have personally experienced the ailment or someone close to them has. As a result, they do not have to imagine the suffering it may have caused because they have directly or indirectly physically experienced it. In other words, the experiential gap is smaller. For example, people living in wealthy nations are more likely to be affected by cancer than malaria, leading to a greater support for that cause.

      This comes up in the context of 'availability bias'

      'Experiential distance' may be our own definition. If so, let's make it clear that this is the case

      "As most charity" ... "as it is claimed that..."

      "Ailment = illness" ... it could be outside of the medical domain .. make it more concise .. And if this is an esxample and not the general point, use a parenthetical "(e.g.)"


  3. Mar 2022
  4. pub-tools-public-publication-data.storage.googleapis.com
    1. predict the precision of an iROAS estimate that a proposed experimental design will deliver.

      'predict the precision' ... maybe that's the Bayesian equivalent of a power analysis

    2. design process featuring a statistical power analysis is a crucial step in planning of an effective experiment.

      this is great -- statistical power!

    3. If it is obvious from this plot that the incremental effects have ceased to increase beyond the end of the intervention period, there is no need to add a cooldown period, since doing so only adds more uncertainty to the estimate of iROAS

      this seems suspect: you are imposing a strong assumption based on casual empiricism, and this probably overstates the precision of the results.

      ideally, some uncertainty over that should be incorporated into the estimation and model, but maybe it's not important in practice?

    4. Due to the marketing intervention, incremental cost of clicks increased during the test period.

      This is, apparently, 'amount spent' not a marginal cost thing

    5. shows a similar TBR Causal Effect analysis for incremental ad spend (cost of clicks) ∆cost(t).

      same thing but for costs

    6. and the 90% middle posterior intervals of the predicted counterfactual time series, \(y^*_t\).

      you have to zoom in, they are hard to see

    7. Bayesian inference is used to estimate the unknowns and to derive the posterior predictive distribution of \(y^*_t\) for each t.

      this is Bayesian!

    8. The simplest possible relationship is given by the regression model, \(y_t = \alpha + \beta x_t + \epsilon_t\), t in pretest period

      maybe we want some geo-specific error term?
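      One way to write that suggestion (my notation, not the paper's), letting each geo \(g\) keep its own intercept and disturbance:

      \[ y_{g,t} = \alpha_g + \beta x_{g,t} + \epsilon_{g,t}, \quad t \text{ in pretest period} \]

      Summing over \(g\) collapses this back to the paper's single aggregate series, but discards the cross-geo variation discussed in the next note.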

    9. For TBR, data are aggregated across geos to provide one observation of response metric volume at every time interval for the control and treatment groups:

      the aggregation seems to throw out important variation here ... that might tell us something about how much things 'typically vary by without treatments' ... but maybe I'm missing something

    10. matched market tests, which may compare the behavior of users in a single control region with the behavior of users in a single test region.

      I'm considering a case with one test region and many control regions. Will this paper still apply?

    11. Time-Based Regression Framework

      Is this equivalent to an 'event study' or a 'difference in difference' as discussed in econometrics?

  5. Feb 2022
    1. This appendix provides a brief introduction to the several types of software and processes used to create websites such as Increasing Effective Charitable Giving and Researching and writing for Economics students. We aim to encourage others to participate in this collaborative work, and to spin off their own projects. If you would like to provide feedback or ask a question about these projects then using ‘hypothes.is’ is an easy way to do so (please write directly in the html and contact me at daaronr at gmail dot com to let me know you’ve done so).

      Answering some questions about participation:

      1. How many hours are student researchers required to allocate for the project? It depends on how much you want me to engage with you and talk you through the processes, onboarding, etc. I think something like a minimum of 40-50 total hours seems about right, but 100+ hours would be better.

      2. Is the project's intended audience the EA community? To an extent, yes. I guess 3 primary audiences:

      i. The EA community interested in learning more about (what they can do to promote effective) charitable giving and relevant attitudes,

      ii. Effective charities and organizations promoting effective giving and action (see the related EA Market testing team),

      iii. Academic researchers (social science, economics, data science, human biology) interested in these issues, coming from a range of perspectives.

      3. Which requirements constitute sufficient quality work for student co-authorship? It's hard to put this in writing succinctly. If your content is well written and reasoning-transparent, we can probably integrate it into the website and recognize you as the author of a particular section. In terms of peer-reviewed academic output (this project is not itself a 'paper', but I am very pro-feedback and evaluation; see bit.ly/eaunjournal) we have to discuss that more carefully.

    1. 10 Effect of analytical (effectiveness) information on generosity

      Todo? Maybe incorporate more lab work with clear, strong 'analytical thinking' manipulations

    2. Much of this project is being openly presented in the (in-progress) “Impact of impact treatments on giving: field experiments and synthesis” bookdown, a project organised in the dualprocess repo.

      Todo -- integrate these better to remove overlap or make overlap directly synchronized

    1. We consider a range of published work (from a non-systematised literature review) including laboratory and hypothetical experiments, and non-experimental analysis of related questions (such as ‘the impact of general charitable ratings’).

      Consider: should I focus on the lab and otherwise framed work more?

      See private RP Slack thread here considering Moche et al, 2022

    1. After participants had completed the initial survey, which took most of them at least 10 minutes, we measured their volunteering behaviour by asking them whether they would be willing to fill in ‘a few extra questions’ for charity (59.3% responded yes) rather than skipping directly to the final questions. The participants were informed that their choice would be completely anonymous and that 5 SEK (around 0.50 euro or 50 US cents) would be donated to a charity organization of their choosing if they would fill out the additional questions.

      This seems like a potentially meaningful measure. We need to read closely to consider the extent to which that 'desire for consistency with questionnaire response' could be driving/biasing this.

  6. Jan 2022
    1. Given the conjugacy of the beta for the binomial,

      Wikipedia

      In Bayesian probability theory, if the posterior distribution p(θ | x) is in the same probability distribution family as the prior probability distribution p(θ), the prior and posterior are then called conjugate distributions, and the prior is called a conjugate prior for the likelihood function p(x | θ).

      So, here, I guess, combining a Binomial(\(\theta\)) likelihood for the data with a Beta prior on \(\theta\), the probability of each positive outcome, implies that the posterior will also be a Beta distribution.

      However, the posterior density of the difference in the \(\theta\)s is something that would need to be computed.
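      A minimal simulation sketch of that computation (hypothetical counts and Beta(1, 1) priors), drawing from the two conjugate posteriors:

        # Under a Beta(a, b) prior, the posterior for theta is Beta(a + y, b + n - y)
        set.seed(42)
        y_m <- 9; n_m <- 100   # left-handed men (made-up counts)
        y_w <- 7; n_w <- 100   # left-handed women (made-up counts)
        draws_m <- rbeta(1e5, 1 + y_m, 1 + n_m - y_m)
        draws_w <- rbeta(1e5, 1 + y_w, 1 + n_w - y_w)
        diff <- draws_m - draws_w
        quantile(diff, c(0.05, 0.5, 0.95))  # posterior summary of theta_m - theta_w
        mean(diff > 0)                      # posterior Pr(theta_m > theta_w)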

    2. The statistical hypothesis we wish to investigate is whether the proportion of left-handed men is greater than, equal to, or less than the proportion of left-handed women.

      Note this is not the 'lady tasting tea' case, where true outcome shares are known

    1. Proportion of Funding Available for Program

      The 'user input' here should be something like a mean and a dispersion ... most people won't know what the parameters of the Beta distribution mean.

      if necessary, we could explain what the parameters you input here will do, and have a graph of the distribution of this input to the model

    2. Transfers as a percentage of total costs. GiveDirectly has other costs! So how much of our money is going to people in need? The value is derived here: https://docs.google.com/spreadsheets/d/1L03SQuAeRRfjyuxy20QIJByOx6PEzyJ-x4edz2fSiQ4/edit#gid=537899494 This is calculated by finding the average proportion over the years. TODO: Create a predictive model, fitting a normal and a beta distribution to financials Cell: B5 Units: Unitless (percentage), 0-100%

      I love that this is described, but obviously the display here doesn't work. I asked Causal about code folding

    3. Arbitrary Donation Size, chosen to be $100,000 by GiveWell

      This doesn't seem to impact anything

  7. Dec 2021
    1. The current results are telling us more about the structure of the model than about the world. For real results, go try the Jupyter notebook!

      I would love for this to be made more user friendly and explained better! I was able to run it, but it's hard to wrap your head around all the parameters while you are doing it.

    2. minimally

      what do you mean 'minimally sensitive' here? This is a bit confusing ... you are highlighting what seem to be the LEAST important factors, then.

    3. Direct cash transfers

      the scatterplots are somewhat overwhelming ... so much information ... there must be a better way to depict this.

    1. Probability distributions of value per dollar for GiveWell’s top charities

      the chart is a bit challenging to read. I think it would benefit from some indicators for where, e.g., the 25th percentile, median, and 75th percentile are.

    2. Instead, what I have done is uniformly taken GiveWell’s best guess and added and subtracted 20%. These upper and lower bounds then become the 90% confidence interval of a log-normal distribution fn-2. For example, if GiveWell’s best guess for a parameter is 0.1, I used a log-normal with a 90% CI from 0.08 to 0.12.

      this is a bit arbitrary, but I guess he is working to build on this
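      For concreteness, the log-normal parameters can be backed out from a stated 90% CI like this (sketch; the 0.08 to 0.12 example is from the quote):

        lo <- 0.08; hi <- 0.12
        mu    <- (log(lo) + log(hi)) / 2                # midpoint on the log scale
        sigma <- (log(hi) - log(lo)) / (2 * qnorm(0.95))
        qlnorm(c(0.05, 0.5, 0.95), mu, sigma)           # recovers 0.08, ~0.098, 0.12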

    1. But how do we usefully express probabilities over rankings

      But I don't think 'ranking' is precisely the target ... e.g., if 2 of the charities are extremely close in impact per dollar, it doesn't matter so much which ranks (trivially) higher

    1. References

      probably better to automate the references? Or are they already automated?

    1. 2.2 Multiple lines plot

      the variable and value labelling needs improvement. Nice colors, though

    2. 2.1 Simple line plot

      I think this example needs improvement (as you probably know, it's a WIP). There should be x and y axes and ticks, no?

    3. ggplotly(p)

      [Interactive plotly chart of unemploy vs. date (US unemployment, 1967 to 2015) rendered here; raw htmlwidget JSON omitted.]

      or 'pipe it to ggplotly()'

    4. p <- ggplot(economics, aes(x = date, y = unemploy)) +

      minor coding style thing: I generally prefer to 'pipe in data' from the previous line

      and make ggplot() its own line

      Makes the code more 'modular' IMO
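      For illustration, the style I mean (assuming the geom_line() call that presumably follows the quoted line):

        library(ggplot2)

        p <- economics |>                           # pipe the data in
          ggplot(aes(x = date, y = unemploy)) +     # ggplot() on its own line
          geom_line()

        # then render interactively, e.g. plotly::ggplotly(p)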

    1. https://stats.idre.ucla.edu/spss/seminars/introduction-to-factor-analysis/a-practical-introduction-to-factor-analysis/

      But it's in SPSS :(

    2. There are several rotation methods.

      but what is 'rotation'?

    3. set number of participants

      what are 'participants'? Wait -- I'm confused about the discussion of sample size here and how it relates to EFA.

    4. The reason for that is that these rules only involve one of two factors.

      this sentence doesn't make sense to me. So you are saying 'rules of thumb are bad because people haven't come up with good rules of thumb?'

    5. However, if you believe there is some latent construct that defines the interrelationship among items, then factor analysis may be more appropriate.

      I get the idea that 'latent factors' may be behind the relationships between items.

      But what I don't get is 'on what basis do I divide things up into these factors?' What would be a 'reasonable' versus 'unreasonable' way to do it and why? It would be great to have an example.

      Also, I think the choice to use FA rather than PCA is not just driven by whether you 'believe the latent construct is behind things...' but also depends on what you want to do with these measures.

      PCA may be very good for building a predictive model, perhaps, while FA may yield more interpretable insights?

    6. makes up common variance

      maybe you mean 'drives common variance'?

      I think we probably want to give some math here and some examples ... to make this more clear. I have some notes I can try to integrate, but I admit I never fully understood it. (I understood PCA but not EFA or CFA).

    7. your set of items perfectly.

      what does 'set of items' refer to?

    8. assumes that common variance takes up all of total variance, common factor analysis assumes that total variance can be partitioned into common and unique variance.

      I don't understand what this means. What does it mean that 'common variance takes up all of total variance'? What is 'common variance'?
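      The math I would add here (standard factor-model decomposition, my notation): for a standardized item \(x_j\) in a one-factor model,

      \[ x_j = \lambda_j F + u_j, \qquad \mathrm{Var}(F) = 1, \; \mathrm{Cov}(F, u_j) = 0 \]

      \[ \underbrace{\mathrm{Var}(x_j)}_{=\,1} = \underbrace{\lambda_j^2}_{\text{common variance (communality)}} + \underbrace{\psi_j}_{\text{unique variance}} \]

      On this reading, PCA effectively sets \(\psi_j = 0\) (treats all variance as common), while common factor analysis estimates the split.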

    9. It conceptualizes constructs as causally determined by the observations (Edwards & Bagozzi, 2000). This is likely not what you want because this means that “principal component scores are ‘caused’ by their indicators in much the same way that sumscores are ‘caused’ by item scores”

      Who is 'you' here and how do you know what he wants? :)

      By the way, my take on this was that PCA did not assume any causal relationships ... it is simple data reduction and geometric relationships.

      In my reading, I do recall a lot of discussion about how the PCA 'projects from the data vectors to the latent components' while EFA is meant to do the reverse.

    10. This means that the number of underlying factors is determined mostly empirically, rather than theoretically.

      Maybe this could be made more clear. I don't get a sense of how one could 'determine the number of factors empirically' here.

    11. the latent factors

      it is presumed that a smaller number of factors underlie a large number of measured items

    1. If the elasticity of substitution between capital and labor is bounded above 1, capital and labor are “gross substitutes”. In this case, any given level of output can be achieved with enough capital or enough labor; neither individual production factor is necessary. If we just have a high enough saving rate and pile up ever more capital—factories producing factories—output will rise indefinitely.

      great idea but what does elasticity mean? (J/K)
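      For reference, the standard CES production function behind this claim (my addition, not quoted from the piece):

      \[ Y = \left( \alpha K^{\rho} + (1 - \alpha) L^{\rho} \right)^{1/\rho}, \qquad \sigma = \frac{1}{1 - \rho} \]

      When \(\sigma > 1\) (i.e. \(\rho > 0\)), then with \(L\) fixed, \(Y \ge \alpha^{1/\rho} K \to \infty\) as \(K \to \infty\): output grows without bound in capital alone, which is the "gross substitutes" point.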

  8. Nov 2021
    1. To enable them, you’ll need to first tell Git that your local branch has a remote equivalent: git push --set-upstream origin <branch-name>

      does this create the remote branch too?

    1. subjective scales are best understood as cardinally comparable unless, and until, other evidence suggests otherwise.

      this 'induction' seems too strong. I can see justifying treating these as comparable in modeling. But in assessing policies, especially those where people start from clearly different 'happiness starting points', I would be more careful.

      Consider, would you really recommend a policy that makes a bunch of 6's into 8's over a policy that makes a slightly smaller number of 4's into 6's?

    2. how such tests could be used to ‘correct’ the data if people do not intuitively interpret subjective scales in the way hypothesised

      interesting, but is it practical?

    3. In each case, there is evidence indicating the condition does hold and no strong evidence suggesting it does not.

      This is the most important part. Perhaps worth our digging into.

      I expect the 'evidence suggesting it holds' might often involve very weak and underpowered tests. E.g., something like 'the ordered logit model doesn't predict substantially better than a linear model' ... but maybe both predict poorly, and there are large error bounds on the measure of how well each predicts.

    4. (different individuals use the scale the same way and the scale end-points represent the real limits)

      also seems doubtful... I expect many differences in how people use this. Ask some people 'how did you choose the happiness number'; I expect they will state it in different ways...

      E.g., some people will say "I am a 9 because it would be almost impossible to be a 10, as that is perfect happiness"

    5. (each reported unit change represents the same change in magnitude)

      this seems like a very strong and fairly implausible assumption to me. It demands a lot of the respondent. And I expect that many people would agree with things like

      "It would be much more important for me to have a happiness of 6 rather than 4 ... but I wouldn't care as much about having a happiness of 10 rather than 8" ... which would violate the linearity.

      Or maybe they would say "Happiness 3 is much better than 1. While happiness 8 is pretty close to happiness 6" ... suggesting some people might think proportionally rather than linearly.
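      A toy formalization of that 'proportional' reading (my notation): if the felt quantity behind a report \(r\) is \(v(r) = \log r\), then moving from 1 to 3 is worth \(\log 3 \approx 1.10\) while moving from 6 to 8 is worth \(\log(8/6) \approx 0.29\), so equal reported-unit changes would not represent equal changes in magnitude.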

    6. The cardinality thesis could fail to be exactly true, but nevertheless be approximately true, such that it is unproblematic to treat it as true. 

      no need to say this each time ... of course this is the case

    7. However, all the other conditions can fail by degree and what’s important is by how much they deviate.

      this is obvious ... I don't see why he needs to state this

    8. C1: phenomenal cardinality (the underlying subjective state, e.g. happiness, is felt in units)

      OK this is a short summary, but I think 'felt in units' needs to be defined in more detail.

      The Expected Utility framework would offer one way of pinning that down ... i.e.,

      "I respond to the happiness scale in a way such that I would be willing to sacrifice 1 happiness unit in one state-of-the-world to gain 1 happiness unit in another state-of-the-world, for any two equiprobable states of the world, no matter what the happiness starting point is in either."

      (This is with linearity, otherwise it's just that there is 'some exact correspondence')

      This would be one way of pinning it down, and then in fact the stated happiness measure would give us what those 'lottery choice elicitation measures' I referred to were targeting.
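      In symbols (my notation): the condition is \(\tfrac{1}{2} u(h_1 + 1) + \tfrac{1}{2} u(h_2) = \tfrac{1}{2} u(h_1) + \tfrac{1}{2} u(h_2 + 1)\) for all baselines \(h_1, h_2\), i.e. \(u(h+1) - u(h)\) is constant in \(h\), which is exactly linearity of \(u\) in the reported scale.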

    9. Further, they interpret the scale as linear, so each point represents the same change in quantity.

      this seems too strong. Particularly when dealing with such a fuzzy concept as happiness or life satisfaction.

      Also, some evidence suggests people consider scales in proportional terms.

      I've also heard about some evidence that people do tend to treat Likert scales differently in many contexts. (I should dig that up)

    10. scale interpretation should be understood as a search for a ‘focal point’ (or ‘Schelling point’), a default solution chosen in the absence of communication (Schelling, 1960).

      OK that makes sense ... as a goal

    11. too theoretical for social science.

      that's hard to believe

    12. eir decision (Kristoffersen, 2011). Thi

      "the cardinality assumption is reasonable in most research contexts." they say. When we are trying to make a general statement like 'does more income make people happier' maybe this is not very sensitive to the estimation method (e.g., linear vs. ordered logit).

      But for assessing the *welfare impact of income gains from different starting levels" the reasonableness hurdle is a lot higher IMO.

    13. psychologists less so (Ferrer‐i‐Carbonell and Frijters, 2004). Despite

      A quick read is that the cited paper is just saying 'this doesn't matter much for particular coefficients of a model predicting happiness'.

      But applying this to say 'we can be confident that if policy A moves 100 people from 8 to 10 and policy B moves 100 people from 1 to 2, then A is better' is a stronger statement.

    14. t would not be possible to use subjective scales to say what would increase overall happiness. 

      I think this is too strong ... because we can have cases where some policies yield improvements that strictly dominate others, and we can also have particular criteria and standards.

  9. Oct 2021
    1. this suggests they might be working on this, or have some willingness to pursue it. In particular, we want to know how the impact varies depending on initial incomes, and what this suggests for the coefficient of isoelastic utility (imposing that framework).

    1. Finally, we do not detect a significant spillover effect, that is an effect on non-recipients in other households in the same community. This result is important as a potential concern about cash transfers

      this is important ... but is it 'not significant' or just underpowered?

      OK in the meta-analysis it is fairly tightly bounded

    1. A dollar to AMF is worth 10 times that of a dollar to GiveDirectly,

      which estimate is this? Are we assuming that the people benefiting from AMF are the same as those who benefit from GiveDirectly?

    2. GWWC (already donated to top charities)fn-13 6x The Life You Can Savefn-14

      I think the footnotes are off here -- the TLYCS footnote points to GWWC.

    3. this can be achieved by increasing η above and beyond the experimentally determined rate (e.g. by adding 1 to it).

      that's a confusing workaround. You should change the weighting in the social welfare function.

    4. Group | Annual Consumption | η = 1 | η = 2
       Median US income fn-4 | $21,000 | 1× | 1×
       US poverty line fn-5 | $6,000 | 3.5× | 12×
       Mean income in Kenya fn-6 | $1,400 | 15× | 230×
       World Bank's international poverty line fn-7 | $230 | 91× | 8,300×
       GiveDirectly's average recipients fn-8 | $180 | 120× | 14,000×
       Table 1. Some key consumption levels.

      \(\eta\) really matters here!
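      As a sanity check, the table's multipliers follow from isoelastic marginal utility \(c^{-\eta}\): the multiplier relative to the US median is \((21{,}000/c)^\eta\). A quick reconstruction in R (my sketch; the dollar figures are from the table):

      # multipliers relative to median US consumption, under u'(c) = c^(-eta)
      c_levels <- c(us_median = 21000, us_poverty = 6000,
                    kenya_mean = 1400, wb_line = 230, gd_recipient = 180)
      signif((c_levels["us_median"] / c_levels)^1, 2)  # eta = 1: 1, 3.5, 15, 91, 120
      signif((c_levels["us_median"] / c_levels)^2, 2)  # eta = 2: 1, 12, 230, 8300, 14000

      This reproduces the table to two significant figures.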

    5. how much more utility

      I don't like the phrasing 'how much more utility' because utility is not measurable in that way. However, one might phrase it as 'how much more would need to be spent to yield the same utility boost'

    1. To leverage parochialism, interventions should aim to broaden people's moral circle and

      I would say "to counter parochialism"… It's not really leveraging it

    2. To leverage conformity, effective charities should be set as the default,

      This is the sort of thing that Nick and Ari at Momentum are trying to do

    3. [40] or any benefits at all [34].

      I am a little bit skeptical about the external generalizability of this to the relevant real-world situations

    4. A series of studies by Montealegre and colleagues [36] showed that donors who are driven by effectiveness concerns (rather than emotional concerns) were rated as less moral and less desirable partners.

      although, to my reading, some of these results looked to be mixed or underpowered. Furthermore, donors driven by effectiveness were rated as better in other ways

    5. ate when they believe, or are explicitly told, that most others have also donated [48, 49, 50]

      I think the link numbering is off here

    6. Altruism that is driven by emotions is perceived as more genuine [41].

      A quick read suggests that the paper reports this belief for people who gave emotionally in contrast to people who clearly donated for status or personal gain. Is there other evidence in addition to this?

    7. Conformity

      this drives inertia, but in a sense it's not an ultimate explanation -- because if others were effective, you would conform to that

    8. see also [39])

      The second study seems to go a bit in the opposite direction -- the people in the vignettes did seem to be rewarded (assessed as having both higher warmth and competence) by the participants for achieving a greater benefit, holding cost constant

    9. Although altruism is generally rewarded, several studies suggest that effective altruism is not [6,36,37].

      this seems important. I like how it is stated here "several studies suggest" ... rather than as a universal truth

    10. d selected as leaders [33].

      in 'econ-lab games', it seems (I can't easily access this paper as it's behind a paywall)

    11. Parochialism also biases cost-benefit calculations that could lead to effective giving.

      this doesn't seem like evidence of a bias to me. Also, the evidence seems a bit thin here.

      And why should parochialism bias cost-benefit calculations anyway?

    12. K.F. Law, D. Campbell, B. Gaesser, 'Biased benevolence: The perceived morality of effective altruism across social distance', Personal Soc Psychol Bull (2021), Article 014616722110027

      return to this

    13. approximately two thousand cases of trachoma—a bacterial infection of the eye that can lead to permanent blindness [4].

      This latter figure has been 'recanted' by Singer and others; see posts on the EA Forum. It is true that charities differ by orders of magnitude in effectiveness, but this particular claim about blindness prevention is overstated.

    1. Looking at the OLS plot, we get the impression that we've identified certain text messages that outperform others. The Bayesian plots show us that this perception is incorrect. According to the Bayesian models, we have almost no idea which messages are better than others.

      Two of my colleagues ask:

      I'm a bit confused about where the final two graphs have come from - it almost looks like an error?! The Bayesian models clearly did indicate variability among the messages, just not as much as OLS, then all of a sudden the messages seem exactly the same as one another in the final graphs. They look like the presentation of prior distributions around .02 rather than a posterior estimated from the data!

    2. Imagine we randomly select one text message and put the data for that treatment in a locked box. What should our prior belief about the effect of this text message be? Empirical Bayes says, roughly, that our prior belief about the effect of the message we locked in the box should be the average effect of the other 18. We can also use the variability in the effects of the other 18 messages to tell us how confident we should be in our prior, giving us a prior distribution.

      Something like 'estimating a posterior for the effect of treatment 1 using the effects of treatments 2-19 to construct a prior' …

      it feels like ‘unfair peeking’ along one margin perhaps, leading to a too-narrow posterior for the treatment effects overall, but perhaps a reasonable posterior for relative effects?
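      A minimal sketch of that normal-normal shrinkage logic in R (hypothetical numbers; the actual study has 19 arms, I use 5 for brevity):

      p_hat <- c(0.29, 0.31, 0.30, 0.33, 0.28)  # vaccination rate in each arm
      n     <- rep(2365, 5)                     # participants per arm
      se    <- sqrt(p_hat * (1 - p_hat) / n)    # standard error of each arm's rate

      i <- 1                                    # the arm we 'lock in the box'
      prior_mean <- mean(p_hat[-i])             # prior centred on the other arms
      prior_var  <- var(p_hat[-i])              # prior spread from their variability

      # normal-normal posterior: a precision-weighted average of prior and data
      post_prec <- 1 / prior_var + 1 / se[i]^2
      post_mean <- (prior_mean / prior_var + p_hat[i] / se[i]^2) / post_prec
      post_mean                                 # shrunk toward the other arms' mean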

    3. We can apply the same logic to the flu study. Imagine we randomly select one text message and put the data for that treatment in a locked box. What should our prior belief about the effect of this text message be? Empirical Bayes says, roughly, that our prior belief about the effect of the message we locked in the box should be the average effect of the other 18. We can also use the variability in the effects of the other 18 messages to tell us how confident we should be in our prior, giving us a prior distribution.

      I almost get this argument but something feels missing

    4. In Bayesian terms, we've constructed a prior belief about the next season's rookies' OBP from data about this season's rookies' OBP.

      but isn't this like using "data from previous studies" as you said the Classical Bayes did?

      Others at RP:

      I agree, using last season's data to estimate the likely performance of a fresh rookie is just a prior plain and simple, and not quite the same as generating the prior of each condition of a current experiment from all the other conditions, one by one!

    5. That's where empirical Bayes comes to the rescue. Empirical Bayes estimates a prior based on the data

      my reading of McElreath is that he would be against that sort of 'peeking'

    6. The original data aren't yet available, but we can approximately reproduce the data given what we know about the study. We know that 47,306 participants were evenly assigned to one of 19 treatments or a control condition. The outcome was binary (did the patient get a vaccine or not), and we know the vaccination rate in each treatment from the PNAS publication.

      How is this not exactly the same as what you would get from the individual data? What more information would that get you? (With a binary outcome, the per-arm counts would seem to be sufficient statistics, so presumably nothing is lost.)

    7. You should use Bayesian analysis when comparing 4 or more "things."

      This seems like a strange blanket rule. Only if I'm comparing 4 or more things?

    1. declare_sampling(S = complete_rs(N, n = 50)) +

      I'm not sure this is letting sample size vary in a meaningful way ... or the way intended. With the 'n=50' option, it seems to do the same thing when we compare designs?

    2. Similarly, we can compare two designs on the basis of their diagnoses:

      this seems very useful for evaluating existing work!

    3. compare_designs(planned_design, implemented_design)

      note, you never defined these objects ... of course the reader could construct them

    4. diagnose_design(designs)

      How can we specify the number of sims and bootstrap_sims in this context?
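      If I'm reading the DeclareDesign documentation correctly, both can be passed directly:

      # sims = simulations per design; bootstrap_sims = bootstrap replicates
      # used for the diagnosand standard errors
      diagnose_design(designs, sims = 500, bootstrap_sims = 100)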

    5. diagnose_design(designs)

      show the output here.

      What is 'coverage' telling us here? Something about traditional confidence intervals versus the simulated ones? (My understanding: the share of simulations in which the estimated confidence interval contains the true estimand, to compare against the nominal 95%.)

    6. Our simulation and diagnosis tools can take a list of designs and simulate all of them at once, creating a column called design to keep track. For example: diagnose_design(designs)

      This is spilling out a massive number of warning messages:

      Warning messages:
      1: The argument 'estimand = ' is deprecated. Please use 'inquiry = ' instead.
      
    7. diagnose_design(designs)

      How do we specify the number of sims and bootstrap_sims here? (Same question as above.)

    8. A designer is a function that makes designs based on a few design parameters.

      Making the arguments the things we wish to consider varying.

      I was confused at first because I thought designer was a new object in your package.
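      A minimal sketch of what I take a designer to be (names and defaults are mine, not the tutorial's): a plain function from design parameters to a design.

      library(DeclareDesign)
      my_designer <- function(N = 100, effect = 0.25) {
        declare_model(N = N, U = rnorm(N),
                      potential_outcomes(Y ~ effect * Z + U)) +
          declare_inquiry(PATE = mean(Y_Z_1 - Y_Z_0)) +
          declare_assignment(Z = complete_ra(N, prob = 0.5)) +
          declare_measurement(Y = reveal_outcomes(Y ~ Z)) +
          declare_estimator(Y ~ Z, inquiry = "PATE")
      }
      small_design <- my_designer(N = 50)  # arguments are the things we vary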

    9. redesign(design, N = c(100, 200, 300, 400, 500))

      This generated a list of (lists of?) dataframes. I'm not sure what to do with it. I guess it generates the simulated sample data, estimates, etc. (but not diagnosands?) for each of the combinations in the list of arguments, here the sample sizes of 100, 200, etc. (as well as the original 50)?
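      My understanding is that redesign() returns a list of designs, one per parameter value, which can then be passed to the diagnosis step:

      # diagnose all five designs at once; N is tracked in a column
      designs <- redesign(design, N = c(100, 200, 300, 400, 500))
      diagnose_design(designs)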

    10. diagnose_design(simulation_df, diagnosands = study_diagnosands)

      when I run this I get a much larger output matrix, including columns Design, Inquiry, and more...

    11. Building a design from design steps

      Suggestion: You might put this section first so that people have an idea of how it all fits together. Otherwise it's hard to understand what each step is all about and why we are doing this.

    12. declare_inquiry(PATE = mean(Y_Z_1 - Y_Z_0)) +

      but where were Y_Z_1 and Y_Z_0 defined as potential outcomes?

      Y_Z_1 and Y_Z_0 are created (and named) as potential outcomes as a result of the potential_outcomes function
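      For reference, a minimal line showing where those columns come from (illustrative values):

      # potential_outcomes(Y ~ ...) expands into columns Y_Z_0 and Y_Z_1
      declare_model(N = 100, U = rnorm(N),
                    potential_outcomes(Y ~ 0.25 * Z + U))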

    13. draw_estimands(design)

      perhaps 'draw' is confusing here, as it's not stochastic?

    14. difference_in_means

      "Difference-in-means estimators that selects the appropriate point estimate, standard errors, and degrees of freedom for a variety of designs: unit randomized, cluster randomized, block randomized, block-cluster randomized, matched-pairs, and matched-pair cluster randomized designs"

      difference_in_means takes the place of 'a model function, e.g. lm or glm' (default is lm_robust)

    15. estimator <-

      the thing I estimate (I guess)... the regression I run?

    16. measurement <-

      Not sure what this is. Maybe it allows measurement error or censoring?

    17. assignment <- declare_assignment(Z = complete_ra(N, prob = 0.5))

      how does this meaningfully differ from sampling? Practical examples would help.
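      My working answer, as a sketch: sampling decides which units are observed at all; assignment decides which of the observed units are treated.

      declare_sampling(S = complete_rs(N, n = 50))        # keep 50 of the N units
      declare_assignment(Z = complete_ra(N, prob = 0.5))  # then treat half of those kept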

    18. diagnose_design(design, diagnosands = study_diagnosands)

      What seems to be missing here is ... what if I want the free parameter to be the treatment effect size ... i.e., minimum detectable effect size? Maybe they give a guide to this elsewhere?
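      If the effect size is a parameter of the design (like the `effect` argument sketched above, which is my own naming), redesign() should let it vary, which gets at the minimum detectable effect:

      # power at each candidate effect size; the smallest size with
      # power >= 0.8 is roughly the minimum detectable effect
      designs <- redesign(design, effect = c(0.1, 0.2, 0.3, 0.4))
      diagnose_design(designs, diagnosands = study_diagnosands)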

    19. declare_sampling(S = complete_rs(N, n = 50))

      this should be put in context. What role does the "N" have here?

    20. declare_model( Y_Z_0 = U, Y_Z_1 = Y_Z_0 + 0.25)

      I'm not sure that this first bit of code was necessary. What does it do? Is this meant to be here?

      Ah, this is another syntax for declaring/defining the potential outcomes... you should assign it to an object so this is clear. And is the first part standalone, or does it need N and U to be defined?

    21. draw_data(design)

      Finally, we see what to do with all this stuff we are building.

    22. When we run this population function, we will get a different 100-unit dataset each time, as shown here.

      You don't really show how to generate this dataset at this point

      You should add the draw_data(+ NULL) thing here
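      I.e., my guess at what was intended:

      # draw_data() expects a design, so a lone step is closed off with + NULL
      draw_data(declare_model(N = 100, U = rnorm(N)) + NULL)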

    23. As an example of a two-level hierarchical data structure, here is a declaration for 100 households with a random number of individuals within each household.

      cf fundraising pages and donations/donors
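      For the fundraising analogy, something like this sketch (using fabricatr's add_level; the names are mine):

      declare_model(
        pages  = add_level(N = 100),                                     # fundraising pages
        donors = add_level(N = sample(1:10, size = 100, replace = TRUE)) # donors per page
      )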

    24. As a bonus, the data also includes the probability that each unit is assigned to the condition in which it is in (Z_cond),

      where is Z_cond? I don't see it

    25. install.packages("tidyverse")

      I think you also have to run library(DeclareDesign)

  10. Sep 2021
    1. ...: Optional model-function-specific parameters. As with args, these will be quosures and can be varying().

      I think this is meant to be a first-level item (bullet)

    1. base_recipe %>% step_corr(all_of(stations), threshold = tune())

      nice -- you can add elements to recipes to get a new one

    2. # remove any columns with a single unique value step_zv(all_predictors()) %>%

      looks useful!

    3. many numeric predictors that are highly correlated

      does this actually harm OOS prediction or does it just make the model harder to interpret?

    1. Linear Models: the absolute value of the t-statistic for each model parameter is used.

      t-stat is the coefficient divided by the estimated standard error of the coefficient.

      But we are already using 'standardized coefficients' ... so (recall) why wouldn't these already denote importance?
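      In symbols: \( t_j = \hat{\beta}_j / \widehat{\mathrm{SE}}(\hat{\beta}_j) \). If predictors are standardized, \(\hat{\beta}_j\) alone ranks importance by effect size per SD of the predictor, while \(t_j\) additionally discounts imprecisely estimated coefficients — which I take to be the rationale.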

  11. Aug 2021
    1. Ways to contribute

      We already do all of these … which/how to do more or differently?

    2. Encourage more “here’s what we aren’t doing” posts (e.g. OpenPhil posting what they don’t fund)

      How does this encourage retention? Can you elaborate on the theory of change here?

    3. Make EA easier for women with children

      In what ways? Is it a timing/scheduling thing? Or should, e.g., GWWC have a 'child deduction' in counting income for pledges?

    4. me sort of interpersonal conflict i

      I'd like to hear some examples of this to understand what was meant

    5. Number of people who listed that reason (without prompting)

      Recall this is the count of people who list that they think 'this is the reason for others leaving'. The table, seen by itself, might make it seem like these are reasons people themselves cited for leaving

    6. I spoke with approximately 20 people who were recommended on the basis of their knowledge about EA retention. These were mostly non-University group organizers. There was moderate agreement about the reasons people leave and stay in EA.The major reasons why people leave EA are: inability to contribute, lack of cultural fit or interpersonal conflict, major life events (moving, having a child), burnout/mental health.

      This seems like a qualitative/vignette sort of study. Interviews were conducted with individuals separately.

      I think there is some risk of a sort of 'double-counting' here: people may be reporting and taking on-board what they heard others say or write.

    1. Given this, CEA is evaluating alternative metrics. Our current top choice is to focus on people who use our products, instead of those who are "engaged" with EA in a more subjective sense. This allows us to analyze larger populations, improving the power of our tests.

      This seems very promising to me, I'd definitely recommend pursuing this approach.

      Also, as I said in the notes on the other post, you can get more of an upper bound to complement these lower bounds by reaching out to people / giving people incentives to respond, and using 'those who respond' as the denominator.

    2. For example, to detect a change in retention rate from 95% to 90%, we need a sample of 185 individuals.[2]

      the basis for this calculation seems approximately correct, but "detect" has a particular operationalization here. You should still be able to get reasonably tight credible intervals over the change in retention with a smaller sample.

      That said, the selectivity of the sample, and some of the concerns above, make this limited in other ways
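      For what it's worth, the 185 figure looks like a one-sample proportion test against a known 95% baseline; a quick reconstruction (my reading of the calculation):

      # n to detect a drop from p0 = 0.95 to p1 = 0.90,
      # two-sided alpha = 0.05, power = 0.80
      p0 <- 0.95; p1 <- 0.90
      n <- (qnorm(0.975) * sqrt(p0 * (1 - p0)) +
            qnorm(0.80)  * sqrt(p1 * (1 - p1)))^2 / (p1 - p0)^2
      ceiling(n)  # 185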

    3. Number of people

      I don't understand what the numerator and denominator are here. How does this add up? I think something is missing.

    4. otherwise engage with any of CEA's projects. 

      but they might have engaged in other ways, e.g., GWWC

    5. remaining attendees who worked for an EA organization.

      But this is a very special group. 'People who work for an EA org' should not be taken as representative of a typical engaged EA

    6. population, 50-70% of the individuals who engaged in 2020 also engaged in some way in 2021.

      again this is likely a lower bound because of:

      • limits to email matching
      • other forms of engagement not mentioned here, e.g., local EA group meetings, EA Facebook groups, etc

      I would suggest making it clearer that it's a lower bound.

    7. Results

      Table is a bit hard to read ... needs reformatting

    8. Over the past six months, CEA has moved to unify our login systems. As of this writing, event applications, the EA Forum, and EA Funds/GWWC all use the same login system. This means that we are less likely to have issues with people using different emails.

      This is really promising!

    9. Peter Wildeford has done the largest non-manual retention analysis I know, which looked at the percentage of people who answered the EA survey using the same email in multiple years. He found retention rates of around 27%, but cautioned that this was inaccurate due to people using different email addresses each year.

      He didn't really consider these to be 'retention rates in EA'

    10. 50-70% of people who engaged with CEA's projects in 2020 also engaged with one of our projects so far in 2021, using a naïve method of matching people (mostly looking at email addresses).

      Maybe better to note that this is likely a lower bound.

    11. engagement with CEA's projects as a proxy.

      "Projects" makes it sound like a higher level of engagement than it is. Accessing the EA forum is high engagement but 'projects' sounds like they were given funding, working for CEA, etc.

    1. 29.8% is much closer to the annual retention estimate produced by Peter Wildeford based on the 2018 EA Survey

      This is probably not a correct interpretation of what Peter was saying ... something like "30% return to complete the EA Survey again year on year" is not at all the same as a measure of 'retention'

    2.  * This is not a projection, but the retention rate observed 2019-2021. 

      Where does the asterisk lead to?

    3. EAG Reattendees

      this needs to be made more clear -- what exactly are the denominators here? And what is the outcome denoted 'retention'? Does an EAG attendee have to go to another EAG to count as 'retained'?

    4. Projected retention over lifetime

      Perhaps a better/additional measure to report and consider: "Expected number of lifetime years in EA"

    5. We identified markers of retention that we could reliably analyze from year to year for the three cohorts. 

      These are mainly "search for positive values" methods... I expect high precision (few false positives) but poor recall (many false negatives). I.e., I expect this to underestimate retention for the group in question.

      An alternative which could get at the upper bound would take the people that you could locate/contact as the denominator. ...

      And perhaps so that it wasn't too much of an overestimate, give a strong incentive for people to respond/identify themselves.

    6. Employees at certain organizations connected to effective altruism; Recipients of Community Building Grants; Attendees of EA Global

      This is obviously a selected group that is on the very very highly engaged end of the spectrum.

    1. specify() allows you to specify the variable, or relationship between variables, that you’re interested in. hypothesize() allows you to declare the null hypothesis. generate() allows you to generate data reflecting the null hypothesis. calculate() allows you to calculate a distribution of statistics from the generated data to form the null distribution.

      This is the core, it seems.
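      A toy run of that pipeline on a built-in dataset (my example, not the tutorial's):

      library(infer)
      library(dplyr)
      null_dist <- mtcars %>%
        mutate(am = factor(am)) %>%
        specify(mpg ~ am) %>%                        # relationship of interest
        hypothesize(null = "independence") %>%       # null: mpg unrelated to am
        generate(reps = 1000, type = "permute") %>%  # resample under the null
        calculate(stat = "diff in means", order = c("1", "0"))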

    2. If this probability is below some pre-defined significance level \(\alpha\), then we can reject our null hypothesis.

      NHST ... can this tool accommodate more informative measures like Bayes factors (as I understand them)?

    3. we start by assuming that the observed data came from some world where “nothing is going on” (i.e. the observed effect was simply due to random chance), and call this assumption our null hypothesis.

      very much classical frequentist NHST

    1. formal S4 classes describe the data model and the conditional test procedures, consisting of multivariate linear statistics, univariate test statistics and a reference distribution.

      What are S4 classes... I wish I knew.
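      For what it's worth, S4 is R's formal class system: classes with typed slots plus generic-function dispatch. A toy example:

      # define a class with typed slots, a generic, and a method for it
      setClass("Rate", slots = c(num = "numeric", denom = "numeric"))
      setGeneric("value", function(x) standardGeneric("value"))
      setMethod("value", "Rate", function(x) x@num / x@denom)
      value(new("Rate", num = 3, denom = 12))  # 0.25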

  12. Jul 2021
    1. To create a symbol, simply convert a string or a variable containing a string with the sym() function. Plug in these symbols anywhere dplyr expects a symbol or an expression of only one field name as a function argument using the !! operator.

      you need to turn a field name (text) into a 'symbol object' and then you can use it to represent a tibble column etc. But I don't understand how this is different from a quosure
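      My working distinction, for what it's worth: a symbol is just a bare name, while a quosure bundles an expression together with the environment where it was captured. A quick contrast:

      library(rlang)
      library(dplyr)
      col  <- sym("cyl")    # just the name `cyl`, no environment attached
      qexp <- quo(cyl + 1)  # an expression plus its capture environment
      mtcars %>% summarise(m = mean(!!col), m1 = mean(!!qexp))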

    2. To capture a quosure, simply capture an expression within the quo() function

      the example below is great:

      # Capture a quosure
      gear_filter <- quo(gear == 5)
      
      # Filter our data with our quosure
      mtcars %>%
        filter(!!gear_filter)
      
    3. he R interpreter does not evaluate the expression before it is passed to the mutate function. Since the expression is passed in a raw format to the function, mutate can do whatever it wants with it and evaluate it however it likes

      a function gets the 'raw' argument, here cyl+1 ... not an evaluated version of it (which here would be a vector of numbers)

      So the function can do what it wants with cyl+1

    1. The funding overhang also created bottlenecks for people able to staff projects, and to work in supporting roles.

      I don't understand what bottlenecks are being referred to here.

    2. I’d typically prefer someone in these roles to an additional person donating $400,000–$4 million.

      also very high

    3. Personally, if given the choice between finding an extra person for one of these roles who’s a good fit or someone donating $X million per year, to think the two options were similarly valuable, X would typically need to be over three, and often over 10.

      this is shockingly high

    4. A big uncertainty here is what fraction of people will ever be interested in EA in the long term – it’s possible its appeal is very narrow, but happens to include an unusually large fraction of wealthy people. In that case, the overhang could persist much longer.

      motivation for 'market testing' survey/outreach work.

    5. Working at an EA org is only one option, and a better estimate would aim to track the number of people ‘deployed’ in research, policy, earning to give, etc. as well.

      we have this in the EA survey ... something we could put more focus on in cross-year work

    6. Overall, my guess is that we’re only deploying 1–2% of the net present value of the labour of the current membership.

      I don't get where this estimate comes from. And is this 1-2% of lifetime labour, or 1-2% of the available labour per year?

    7. people only hit peak productivity around age 40–60

      empirical support for this?

    8. In terms of the level of ‘talent’ of new members, we don’t have great data.

      what is this citing/quoting/referencing?

    9. The average age of community members is several years higher.

      Have we reported this / do we have data on this?

    10. so most ways of measuring this growth will undercount it.

      but this will catch up in the medium run if things are smooth

    11. You can see some more statistics on what these people are like in the EA Survey.

      there's a 2020 post now

    12. GWWC members, EA Funds, Founders Pledge members, Longview Philanthropy, SFF, etc. have all grown significantly (i.e. more than doubling) in the last five years.

      this is not reflected in EA Survey data, although that may not be picking up the same things. The EA Survey seems to show fairly constant donation rates across years, and response rates are not increasing.

      But it could be that differential non-response to the EA Survey is masking a trend of growth in membership and donations.

    13. t could crash the price

      this needs to be a bit more nuanced. This is a particular kind of 'illiquidity'... and I'm not even sure that's the right word.

      If 'selling means crashing', one should reestimate the value

    14. I’ve

      In the above, the 'crash' would usually be overstated, but here the market is probably pretty volatile in response to perceived inside information.

    15. hen I think the stock is most relevant, since that determines how many resources will be deployed in the long term.

      I disagree -- the total expenditure over time is important. If each year we raise and spend more and more we are still growing and having a big impact!

    1. The CIs are not exactly identical, but very close.

      why not?

    2. the slope becomes r if x and y have identical standard deviations.

      important point to remember... just normalize and the correlation coefficient is the slope
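      A one-line check of that point (my example):

      # after standardizing both variables, the OLS slope equals r
      x <- rnorm(100); y <- 0.5 * x + rnorm(100)
      xs <- as.numeric(scale(x)); ys <- as.numeric(scale(y))
      cor(x, y)
      coef(lm(ys ~ xs))["xs"]  # same value, up to floating point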

    3. fit = lm(I(X2 * 0.5 + 0.4) ~ I(X1 * 0.5 + 0.2), D_correlation)

      what's this doing? Why this weird transformation?

  13. Jun 2021
    1. rbeta(1, alpha, alpha) - 1

      rbeta ... generates n=1 random 'deviates' from the Beta(alpha, alpha) distribution ... what does this mean in context?

    2. (K - 2)/2

      is this meant to be the number of tests or something?

    3. rlkj <- function (K, eta = 1) {

      what does this function do? what is the point of it?