The example comes from the replication crisis in social psychology.
How to replicate the replication crisis
list of evidence clearinghouses
Stefan Dercon
Radical Simplification: A Practical Way to Get More Out of Limited Foreign Assistance Budgets
Nebraska case study of data sharing for court-involved youth
Several nonprofit studies document the consequences of using evaluation to meet funder accountability requirements and also point to other reasons nonprofits may not use the data they collect. These reasons include limited capacity, inability to control the data they collect, and inadequate technology (Benjamin et al., 2017; Hoefer, 2000). For example, the nonprofit literature on accountability discussed earlier showed how goal displacement or overclaiming of results might be a natural consequence of using evaluation to meet funder demands. The organizational effectiveness literature shows how the ambiguity inherent in defining nonprofit effectiveness is often resolved in favor of meeting funder requirements, resulting in evidence that is useful neither for organizational-level decision-making nor for learning (Bryan et al., 2020; Carman & Fredericks, 2010). Consequently, evaluative data needed by internal audiences for learning and improvement are often not available (Gugerty & Karlan, 2014). As a result of these forces, managers and staff may ultimately see evaluation as symbolic and separate from their “real” work (Buckmaster, 1999; Mitchell & Berlan, 2016; Riddell, 1999).
ment has catalyzed and I totally take her point. But I think one way to counter the backlash to the trust-based concept — and there definitely is one based on the discussions I have with funders, anyway — is to actually embrace the nuance just a bit more. Those advocating for funders to change will be more effective, in my view, if they are perhaps just a bit less dogmatic than what I sometimes read and hear — and a bit more inviting of an ongoing conversation and reasoned debate about effective (or whatever words we want to use) philanthropy. Because if ever there wa
This revealed a more fundamental issue: programming to increase contraceptive uptake in the postpartum period likely produces little meaningful reduction in pregnancy risk
DARPA is incredibly flexible with who it hires to be program managers.
At the portfolio level, a focus on RCTs as a preferred evaluation methodology means that programmes that are amenable to RCTs are more likely to be evaluated than those that are not (Ravallion, 2015).
history of impact measurement
We are at a crossroads when it comes to evaluation, its purpose, and how it is used. Before the pandemic and into the recovery phase, evaluation has been largely used to artificially enforce “accountability” and maintain a transactional relationship between funders and non-profit organizations. Demand for “accountability”; expectation to do more with less; and reporting on impacts are phrases that keep leaders of non-profits, including B3s, awake at night. These phrases are particularly disempowering in an environment of transactional relationships between funders and non-profits. You might ask, what does this have to do with evaluation? Everything! Let’s think and reflect deeply on these phrases: Demand for accountability – who is demanding accountability, and whose interest is fulfilled by meeting that accountability? Expectation to do more with less – who is expecting or promising to do more with less? Reporting on impacts – who is defining impact, and how is impact defined and understood? Evaluation is neither neutral nor objective, and in a transactional relationship between funders and non-profit organizations, its sole purpose becomes keeping the transactional relationship in place.
Without exaggeration, I believe that the majority of published works in my field (broadly defined as psychology) do not add value. Many papers draw conclusions that are not supported by evidence, which cascades through the literature, because these papers are cited for the conclusions, not the evidence. The majority of published works are not reproducible, in the sense that authors conduct science behind closed doors without sharing data or code. Many published works are not replicable, i.e., will not hold up to scrutiny over time. Theories are verbal and vague, which means they can never get properly rejected. Instead, as Paul Meehl famously wrote, they sort of just slowly fade away as people lose interest. Let me try to convince you that it is an entirely reasonable position, based on the evidence we have.
Preserve All Cash Until Needed. Then Spend.
To be clear, funders and funded organizations emphatically should not reflexively assume that existing communications and communications channels are addressing these issues – one of the findings of our previous national highlights report (available via the link on the final page of this report) was that funded organizations regularly communicating with funders about evaluation results were equally likely to feel that funders were driving the evaluation process and not making consistent use of findings.
Show Ponies are typically built with limited grant funding that is allocated on a project basis. Sometimes they’re created merely to be a proof of concept. In other cases, their funders hope that “if you build it, they will come.” But because Show Ponies are usually funded by governments or non-profit organizations, they rarely have a revenue model. So even if they do gain traction and users, a Show Pony’s continued existence depends on continued support from governments or philanthropy rather than their users. This is a fragile existence, and the Internet is littered with neglected Show Ponies that aren’t being maintained.
is open data dead
Given the potential real-world benefits, why have decision makers within governments, aid agencies, multilateral organizations, and NGOs not yet fully harnessed the value of evidence—including from impact evaluations—for better public policies?
How can we build an open community-led commons of grants data?
The earlier a serious Manhattan-like project to develop nanotechnology is initiated, the longer it will take to complete, because the earlier you start, the lower the foundation from which you begin. The project will therefore run for longer, and that means more time for preparation: serious preparation only starts when the project starts, and the sooner the project starts, the longer it will run, so the longer the preparation time will be. And that suggests that we should push as hard as we can to get this project launched immediately, to maximize time for preparation.
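A toy model (my own sketch, not from the source) makes the shape of this argument easier to inspect. It assumes project duration shrinks linearly as the start year rises (a later start inherits a higher foundation) and that preparation time simply equals project duration, since preparation only begins once the project does; the function name and parameter values are illustrative assumptions.

def project_duration(start_year: float, base: float = 30.0, k: float = 0.5) -> float:
    # Years the project runs if begun in start_year (0 = today).
    # base: duration of a project started today; k: years of duration
    # shaved off per year of delay. Both values are assumed for illustration.
    return max(base - k * start_year, 1.0)  # never shorter than one year

for start in (0, 10, 20):
    d = project_duration(start)
    print(f"start year {start:>2}: runs {d:.0f} years, "
          f"preparation time {d:.0f} years, finishes in year {start + d:.0f}")

Under these assumptions the earliest start does maximize preparation time, but it also finishes soonest whenever k < 1; only when each year of delay buys more than a year of speed-up (k > 1) does delaying finish the project earlier at the cost of preparation time, which is the trade-off the quoted argument glosses over.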
for sure?
relevant for the nonprofit sector
However, one crucial question remains unsettled, and it is not a technical question but a social or political one: with everybody locked in, who should act, and in what way, to redirect funds from the legacy system to the replacement solution, i.e. an open scholarly infrastructure?
interesting paper about replacing journals with more "modern" scholarly infrastructure
Back to the basics: Identifying and addressing underlying challenges in achieving high quality and relevant health statistics for indigenous populations in Canada
using theory of change to create a roadmap
Information Environment
Knowledge Translation in the Global South
As a result, there has been an increased focus on understanding and identifying the institutional changes that can support a more dynamic and effective relationship between marine science, policy, and practice.
history of gambling in Ontario