306 Matching Annotations
  1. Mar 2024
    1. The European Environment Agency has published its first climate risk report. Of 36 risks identified, 21 require immediate action, eight of them with particular urgency. According to the report, Europe is overall far from adequately prepared for the risks of global heating, which are most threatening in Southern Europe. Europe is the continent most strongly affected by warming. https://www.derstandard.de/story/3000000211032/eu-muss-sich-auf-katastrophale-folgen-des-klimawandels-vorbereiten

      Report: https://www.eea.europa.eu/publications/european-climate-risk-assessment

  2. Feb 2024
  3. Jan 2024
  4. Dec 2023
  5. Oct 2023
  6. May 2023
    1. Citation impact indicators play a relevant role in the evaluation of researchers’ scientific production and can influence research funding and future research outputs. The H-index is widely used in this regard, in spite of several shortcomings such as not considering the actual contribution of each author, the number of authors, their overall scientific production and the scientific quality of citing articles. Several authors have highlighted some of these limits. Alternative systems have been proposed but have gained less fortune. In order to show that fairer criteria to assess researchers’ scientific impact can be achieved, a workable example is presented through a novel method, integrating the aforementioned elements by using information available in bibliographic databases. A better, merit-based proxy measure is warranted and can be achieved, although a perfect score without shortcomings is a chimera. Any proposal on a new measure would require clear reasoning, easy math and a consensus between publishers, considering researchers’ and research funders’ point of view. In any case, the relevance of authors’ scientific achievements cannot be adequately represented by a quantitative index only, and qualitative judgements are also necessary. But the time is ripe to make decisions on a fairer, although proxy, measure of scientific outputs.

      My complete review:

      Take Off Your Mask

      I genuinely appreciate the dedicated effort put into developing a new approach for measuring citations. However, I respectfully disagree with the effectiveness of the h-index as a reliable metric, and I believe that proposing a new metric that closely resembles it may not address the existing flaws adequately. Furthermore, I strongly advocate for the inclusion of qualitative measurements alongside quantitative ones, as I believe a comprehensive evaluation should consider both aspects.

      Sketchnote the "Wall of Metric" by Dasapta Erwin Irawan to showcase the small playground of researchers/scientists that is filled with self-centered indicators.

      The h-index is a simplified measure that counts the number of papers a researcher has published and the number of times those papers have been cited. It is a flawed measure because it does not directly take into account the quality of the papers that have been published. Two very different researchers can share the same h-index: one who publishes many modestly cited papers, and another with a few landmark papers whose enormous citation counts the index simply ignores.
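
      The flaw described above can be seen directly in how the index is computed. Here is a minimal sketch of the standard h-index definition (the largest h such that h papers each have at least h citations); the citation counts are made up for illustration.

      ```python
      # h-index sketch: sort citation counts descending and find the
      # largest rank r whose paper still has at least r citations.
      def h_index(citations):
          counts = sorted(citations, reverse=True)
          h = 0
          for rank, cites in enumerate(counts, start=1):
              if cites >= rank:
                  h = rank
              else:
                  break
          return h

      # Two hypothetical researchers with identical h-indexes but very
      # different profiles; the index hides the difference entirely.
      many_modest = [3, 3, 3, 3, 3, 1, 1, 0]   # h = 3
      few_stellar = [900, 850, 400]            # h = 3
      print(h_index(many_modest), h_index(few_stellar))
      ```

      Both profiles score h = 3, which illustrates why the note argues that a quantitative index alone cannot represent quality.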

      Publication as new currency

      I believe that it is important to include qualitative measurements in addition to quantitative measurements. Qualitative measurements can be used to assess the impact of a researcher's work, and the quality of the work that has been published. For example, qualitative measurements could be used to assess the impact of a researcher's work on other researchers, or the impact of a researcher's work on the field of science.

      I believe that a new measure of citation should include both quantitative and qualitative measurements. This would allow for a more accurate and reliable assessment of a researcher's impact.

      Chasing Liberty

      I would like to suggest that the new measure of citation should include the following qualitative measurements:

      • The impact of the researcher's work on other researchers.
      • The impact of the researcher's work on the field of science.
      • The quality of the researcher's work.

      With the advancement of technology, we now have the capability to use applications such as Open Knowledge Maps, Scite, or VOSviewer to explore the context of citations, their interconnectedness within a network, and the specific keywords employed in the citing manuscripts.

      I believe that these qualitative measurements would provide a more accurate and reliable assessment of a researcher's impact than the h-index alone.

      Regarding #TakeOffYourMask: I would like to introduce this hashtag as a symbol of my commitment to challenging the reliance on prestige-based assessments, such as the h-index, and to embracing a more authentic representation of our research endeavors.

      Take Off Your Mask

    1. [Speakers: Jeremy D. and Viranga P.] They get points here and there. I think it incentivizes them more than if you had known points. It's one of those things: extrinsic motivation versus intrinsic motivation. It's a great debate to have. But maybe having a little bit, not 50% of their grade but five, something really small, I think is enough to get them to pay attention: oh, this is part of the learning process, as opposed to thinking, okay, what is the grade? Oh, it's 50%? Okay, that's all I need to pay attention to. Right, start paying attention right before the midterm, because... nothing, right? Yeah. That's the downside. If you make it the whole course grade, because people will advocate for that, people will say the grade should be based on demonstrating learning, not activities, right, there's a school of thought that way. Okay, so if you do that, then the midterms and the final are essentially the whole grade, maybe homework too. Then that's when they are going to pay attention to it. Right. So they might pay attention when the homework is due, but they're largely going to be disengaged till midterm one shows up and go, oh, I have to study for midterm one, but by that time... It's not one of those classes. You can't do that in physics. You can't just do it three days before. I've been there, thinking I could do that. Yeah. I should have been going to lecture. Yeah. So I think it's one of those things where it's a tricky balance. But to say, look, you have to show up, and if it helps, here's 5% to help you come.

      Why assign formative assessments/activities ahead of midterm, etc.

  7. Apr 2023
    1. A new study examines which places on Earth are at risk of extreme heat waves even though none have occurred there so far. These areas, which include Germany and the Benelux countries, are in danger of preparing insufficiently for extreme events. https://www.nytimes.com/2023/04/25/climate/extreme-heat-waves.html

  8. Feb 2023
  9. Dec 2022
    1. Universally accepted assessments or demonstration opportunities, particularly for softer skills, could help learners and workers validate any type of skill without being told that they will have to “go back and get a degree” before being considered for professional track careers

      Universally accepted assessments can also add trust to college and university credentials. There is merit to the notion that higher ed institutions have a conflict of interest when it comes to serving as both learning provider and validator of that learning.

    1. Credential criteria that should be included in quality standards:
      ● Degree and granularity to which an organization documents student or employee learning outcomes
      ● Degree to which organizations provide users ownership of this data by issuing them as verifiable credentials
      ● Degree to which issued credentials use standards that can be verified and validated by employers and other actors

      Credential quality and assessment.

  10. Nov 2022
    1. published assessment procedures,

      Transparency about the assessment procedure for determining whether a learner has earned a credential adds trust to the credential.

  11. Oct 2022
    1. An assessment method for algorithms. In a session this was mentioned, together with IAMA, as a method for assessment.

    1. Impact assessments: Law 25 is broad and requires a PIR to be carried out whenever conditions are met, regardless of the level of risk. The GDPR is less stringent, only requiring assessments in cases where processing is likely to result in a ‘high risk’ to rights and freedoms. Because the CCPA does not specifically focus on accountability-related obligations, it does not mandate impact assessments.
  12. Aug 2022
    1. Test ways to build earned credentials (certificates, badges, coursework) into degrees; build banks of experience (on-the-job training, internships) that earn credit; admit students simultaneously to two- and four-year institutions; guarantee transfer agreements so students don’t take numerous courses that don’t transfer into a four-year degree; set goals for meaningful employment upon graduation; and work collectively to measure our progress and hold ourselves accountable for the outcome.
  13. Jul 2022
  14. bafybeiapea6l2v2aio6hvjs6vywy6nuhiicvmljt43jtjvu3me2v3ghgmi.ipfs.dweb.link
    1. Levers and leverage points for transformative change. Our assessment—the most comprehensive carried out to date, including the nexus analysis of scenarios and an expert input process with literature reviews—revealed clearly that reversing nature’s ongoing decline (100) while also addressing inequality will require transformative change, namely a fundamental, system-wide reorganization across technological, economic, and social factors, making sustainability the norm rather than the altruistic exception.

      Transformative change is required across all aspects of society. With such short time windows, leverage points become critical.

  15. Jun 2022
  16. bafybeiccxkde65wq2iwuydltwmfwv733h5btvyrzqujyrt5wcfjpg4ihf4.ipfs.dweb.link
    1. Designing policy for climate change requires analyses which integrate the interrelationship between the economy and the environment. We argue that, despite their dominance in the economics literature and influence in public discussion and policymaking, the methodology employed by Integrated Assessment Models (IAMs) rests on flawed foundations, which become particularly relevant in relation to the realities of the immense risks and challenges of climate change, and the radical changes in our economies that a sound and effective response requires. We identify a set of critical methodological problems with the IAMs which limit their usefulness and discuss the analytic foundations of an alternative approach that is more capable of providing insights into how best to manage the transition to net-zero emissions.

      The claim of this paper is that the Integrated Assessment Models (IAMs) used (as of 2022) by the IPCC, and therefore by policymakers, are inadequate due to shortcomings in predicting risk. The paper offers the analytic foundations of an alternative model.

  17. May 2022
    1. I explore how moves towards ‘objective’ data as the basis for decision-making orientated teachers’ judgements towards data in ways that worked to standardise judgement and exclude more multifaceted, situated and values-driven modes of professional knowledge that were characterised as ‘human’ and therefore inevitably biased.

      But, aren't these multifaceted, situated, and values-driven modes also constituted of data? Isn't everything represented by data? Even 'subjective' understanding of the world is articulated as data.

      Is there some 'standard' definition of data that I'm not aware of in the context of this domain?

    2. Frequent testing to monitor children’s ‘expected progress’ through a tightly defined curriculum reflects a limited view of how children learn, in which children are seen as “functional machines” who should all automatically progress at the same rate (Llewellyn, 2016).

      This seems like an over-reach. There's nothing about testing that inherently implies that students 'should' progress at the same rate.

  18. Apr 2022
  19. Feb 2022
    1. Meaghan Kall. (2022, February 17). BA.2 risk assessment New this week is upgrading Immune Evasion—Amber 🟨 from low to moderate that BA.2 is antigentically different to BA.1 Unsurprising given the mutation profile, with BA.2 slightly more immune evasive than BA.1 on neuts studies https://t.co/n6DWtiRaNH [Tweet]. @kallmemeg. https://twitter.com/kallmemeg/status/1494100170195312646

  20. Jan 2022
    1. Assessment of the environmental impacts of conservation practices for reporting at the regional and national scales.
      • Continue CEAP activities designed to estimate environmental benefits of conservation practices and programs.
      • Develop a framework for reporting impacts of conservation practices and programs in terms of ecosystem services.
      • Identify future conservation requirements and provide information for setting national and regional priorities.
      • Expand assessment capabilities to address potential impacts of changes in agricultural land use and policy and define necessary conservation programs to meet new environmental challenges brought about by alternative land use or policy changes.
    2. Three principal themes will guide CEAP investments and activities in the future (Maresch et al. 2008):
      1. Research addressing effective and efficient implementation of conservation practices and programs to meet environmental goals and enhance environmental quality.
      • Continue and expand CEAP research projects on the effects and benefits of conservation practices for soil and water quality at the watershed and landscape scales.
      • Implement a new research and assessment initiative for grazing lands designed to provide scientific evidence for implementation of conservation practices at the landscape scale.
      • Determine the critical processes and attributes to be measured at the appropriate landscape position for evaluation of environmental benefits.
      • Expand the scope of assessment to include evaluation of a full suite of ecosystem services influenced by conservation practices and programs.
    3. CEAP products would have wide utility for diverse stakeholders within the conservation community. CEAP has evolved into an assessment and research initiative directed at determining not only the impacts of conservation practices, but also evaluating procedures to more effectively manage agricultural landscapes in order to address environmental quality goals at local, regional, and national scales (Maresch et al. 2008).
    1. The USDA engaged the Soil and Water Conservation Society in 2005 to assemble a panel of university scientists and conservation community leaders to recommend the most effective, proactive, and scientifically credible CEAP activities—thereby ensuring that
    2. A secondary goal of CEAP is to establish a framework for assessing and reporting the full suite of ecosystem services impacted by various conservation practices. Ecosystem services represent the benefits that ecological processes convey to human societies and the natural environment. For example, agricultural lands provide flood and drought mitigation, water and air purification, biodiversity, carbon sequestration, nutrient cycling, and aesthetics and recreation, in addition to the primary agricultural commodities produced. These ecosystem services are often taken for granted and unpriced or underpriced by the marketplace. Research and assessment activities will be integrated within CEAP to provide a scientific foundation for assessing the extent to which ecosystem services are enhanced by conservation practices and programs.
    3. quality of managed lands. CEAP is focused on establishing principles to guide cost-effective conservation practices at landscape scales and to achieve multiple environmental quality goals by placing specified conservation practices or combinations of complementary practices at appropriate locations on the landscape to maximize their effectiveness. CEAP is also developing science-based guidance, information, and decision support tools to determine the appropriate practices to be implemented at various locations on the landscape and to provide conservation program managers with a blueprint for delivery of science-based and cost-effective conservation programs (Duriancik et al. 2008).
    1. The Conservation Effects Assessment Project (CEAP) is a unique, multiagency effort designed to quantify conservation effects and to determine how conservation practices can be most effectively designed and implemented to protect and enhance environmental quality (Mausbach and Dedrick 2004; Duriancik et al. 2008). CEAP Goals: The primary goal of CEAP is to strengthen the scientific foundation underpinning conservation programs to protect and enhance the environmental quality of managed lands. [Photo caption: Rangelands represent non-cultivated, non-forested land that is extensively managed with ecological principles. (Photo: David Briske)] CEAP was jointly initiated in 2003 by the Natural Resources Conservation Service (NRCS) in partnership with the Agricultural Research Service (ARS) and the National Institute of Food and Agriculture (NIFA) in response to requests from Congress and the Office of Management and Budget for greater accountability to US taxpayers following a near doubling of US Department of Agriculture (USDA) conservation program funding in the 2002 Farm Bill. These funds are allocated to multiple conservation practices through several USDA-sponsored conservation programs, including the Environmental Quality Incentives Program, Wetlands Reserve Program, Wildlife Habitat Incentives Program, Conservation Reserve Program, and NRCS Conservation Technical Assistance Program. This funding increase was concomitant with substantial modifications to
    1. These used to be part of a reward system

      "Rewards" are extrinsic motivators - like the carrot and the stick. Make every session an Awesome Gym Day and let Ss reward themselves by achieving goals they set for themselves. Give them some autonomy and be patient with those who just want to play. They need time.

    2. fun, fairness, and challenge

      Fun, fairness, and challenge could inform the development, with students, of three standards that could be used to structure their PE sessions. Ask them: how do you measure fun? How do you measure fairness? How do you measure challenge? If they participate in the development of standards, they will be more interested in using them as a guide to improvement: have more fun, play more fairly, ramp up the challenge.

    3. My students do not arrive in the gym thinking about how their performance will be evaluated.

      If they are focused on improving something - like catching - then they should come with the intention of working on that. PE class is not recess. They should have fun, but if they are not focused on anything other than having fun, then they will not be able to improve in any substantive way and it will be impossible to provide any coaching that might lead to that.

    4. They will demonstrate the art of the catch. Their art of the catch.

      Similar problems arise in language and writing instruction. Ss want to show off their skills, to experiment, and to do things their own way. That is fine; accomplished writers do this all the time, and Shakespeare made up hundreds of words. We are not all Shakespeare, though. In writing instruction, the goal is to master common forms and structures before moving on to display personal creativity, yet even within common forms there is room for personal creativity. When assessing in this way, it is important to focus on the standards and what they mean in terms of performance; otherwise, we become bogged down and unable to provide clear, consistent, and actionable feedback that can lead to improvement in performance.

    5. there are thousands of pieces that I miss

      All performances are complex, and when coaching, it is impossible to attend to every minute detail. Formative assessment, as active coaching, is individualized feedback to improve overall performance. Evaluating that performance is to focus on the performance as a whole.

    6. “performance'' because I teach physical education

      I think a performance focus is important in a lot of fields because, ultimately, education is about what folks are able to do. Knowledge of things is not useful until it is applied to some problem or task. A performance focus could improve assessment across the board and shift teachers away from merely testing "content".

    7. address both the process of learning as well as the performance or outcome

      Assessment of process is commonly formative, while assessment of performance outcomes is often summative, though formative assessments look at performance outcomes too, from the perspective of informing improvement.

    8. what useful things I could say about assessment that wouldn’t expose me as a fraud

      Impostor syndrome is common as people move into more specialized fields. It's common to hear about it from PhD candidates and from PhDs.

    1. Second, although we investigated the effects of formative feedback on students’ metacognitive skills when using feedback strategies with polling systems, we are not able to answer the question of how feedback strategies affect student learning. Future research should clarify which mechanisms behind feedback strategies are responsible for affecting metacognition, and teachers must gain insight into how to design effective formative assessments that promote deeper learning.

  21. Dec 2021
    1. you know, I liked the results very much

      That feeling of self-satisfaction is a happy ending in itself. I think it is a characteristic of rewilding, if you are looking to assess that.

    1. Evaluating poetry by heritage

      தண்ணீரும் காவிரியே தார்வேந்தன் சோழனே மண்ணாவ துஞ்சோழ மண்டலமே - பெண்ணாவாள் அம்பொற் சிலம்பி யரவிந்தத் தாளணியுஞ் செம்பொற் சிலம்பே சிலம்பு.

      Meaning (translated from the Tamil gloss):

      The Kaveri is the river that never runs dry. The Chola king is the finest among kings. The Chola country is the land richest in fertility. And Silambi, who lives in the village called Ambar, is the one truly worthy of being called a woman.

    1. Once we introduce evaluation into our learning spaces, we change the way we interact with student work.

      Evaluation is not the same as feedback. Evaluation almost always amounts to unsolicited judgment; feedback may praise or criticize, but it usually seeks the value in something.

  22. Nov 2021
    1. assessment, 
      1. Tracking and assessment of students’ progress during pandemic-era online education.
  23. Oct 2021
    1. It is also important to recognize that high-stakes tests are not race-neutral tools capable of promoting racial equality. At their origins more than 100 years ago, standardized tests were used as weapons against communities of color, immigrants, and the poor. Because they were presumed to be objective, test results were used to “prove” that whites, the rich, and the U.S.-born were biologically more intelligent than non-whites, the poor, and immigrants. In turn, the tests provided backing to early concepts of aptitude and IQ, which were then used to justify the race, class, and cultural inequalities of the time.
    1. negative impacts of the use of standardized assessments
    2. Our present-day assessment instruments used by states to measure student achievement are almost invariably developed to measure student content knowledge on a unidimensional scale—a lasting byproduct of the early efforts to order people on an intelligence scale.
  24. Sep 2021
  25. Aug 2021
  26. Jul 2021
    1. Gargano, J. W., Wallace, M., Hadler, S. C., Langley, G., Su, J. R., Oster, M. E., Broder, K. R., Gee, J., Weintraub, E., Shimabukuro, T., Scobie, H. M., Moulia, D., Markowitz, L. E., Wharton, M., McNally, V. V., Romero, J. R., Talbot, H. K., Lee, G. M., Daley, M. F., & Oliver, S. E. (2021). Use of mRNA COVID-19 Vaccine After Reports of Myocarditis Among Vaccine Recipients: Update from the Advisory Committee on Immunization Practices — United States, June 2021. MMWR. Morbidity and Mortality Weekly Report, 70(27), 977–982. https://doi.org/10.15585/mmwr.mm7027e2

  27. Jun 2021
    1. Unless their self-assessments have power—either to shape future learning activities, or to change the gradebook—they will not be true self-assessments.

      I want to disagree with this and argue that we should be crafting lessons which allow students to understand the different forms of power which are in play in self-assessment and assessment by others. I appreciate, though, that grades may have too many advantages for that lesson to really take within the context of a course.

  28. May 2021
    1. She reminded us of the challenging but extremely important truth that there are some things as instructors and even administrators that are absolutely within our control when it comes to improving equity

      I feel like many of us can relate to this! Equity work is uncomfortable; it can be silencing. This reminds me of some Brené Brown writings (https://debbiedonsky.com/embracing-discomfort-in-equity-work-lessons-from-brene-brown-on-shame-triggers-from-an-anti-oppression-lens/) on diving into equity work. Anti-oppression work requires people to feel deeply, and sometimes uncomfortably, as long as they are learning from that discomfort.

    2. We need to first understand how systems of power and oppression influence how students experience college, engage with the learning process, and build knowledge before we can understand how to better assess their learning.

      Power and oppression exist in our everyday interactions with students within the instructor/student relationship. This is one of the many reasons I try to be aware of and reflect on my privilege every day in lessons, marking, and conversations. Reflection on power and oppression is ongoing, and we need to be mindful of this in our leadership positions.

  29. Apr 2021
    1. This article is ostensibly a response to the use of proctoring software in higher education.

      But in order to do that properly the author has also delved into learning and assessment.

      It's a well-written piece that questions some of our taken-for-granted assumptions around assessment.

  30. Mar 2021
    1. Can I be exempted from the Piscine, since I have already done it at another 42 campus? Unfortunately, it is not possible to transfer your file to 42 Québec and be exempted from the Piscine stage. It must be redone in Québec.
  31. Feb 2021
  32. Jan 2021
    1. In fact, such small effectively closed scientific communities built on interpersonal relationships already exist to some extent

      so the weights in the reputation graph are personal knowledge, not citations or whatever.

  33. Dec 2020
    1. Therefore, it could be argued that belief regarding the usefulness of technologies could lead to change and ultimately the actual use of digital technologies in teaching and learning.

      This goes both ways. A teacher who believes that their job is to control access to specialised information, and to control assessment may use technology to close down learning opportunities (e.g. by banning the use of Wikipedia, YouTube, etc.) and even insisting on the installation of surveillance (proctoring) software on students' personal computers.

      Again, you can argue that technology in itself doesn't make the difference.

  34. Nov 2020
    1. The study found positive impact on student achievement and on the learning experience,

      This seems important: assessment is (ideally) a medium through which learners receive feedback on what they know and are able to do. What if assessment is also a (conscious) feedback loop on the learning experience itself AND perhaps even a source of positive impact regarding the learning experience?

    2. Assessment, if not done with equity in mind, privileges and validates certain types of learning and evidence of learning over others, can hinder the validation of multiple means of demonstration, and can reinforce within students the false notion that they do not belong

      When we privilege certain types of assessment, we necessarily exclude others, and this will often have the result of privileging and excluding certain assessment takers.

  35. Oct 2020
    1. proctored, multiple-choice tests are necessary to prepare students to take other multiple-choice assessments they may encounter in the course of their education

      Important point. We design not only courses, but programs, and they relate to experiences after the program.

  36. Sep 2020
    1. students basically just stopped doing the reading

      In addition, there are also some interesting strategies for getting students to do the reading. See Reading Engagement Strategies, for example.

  37. Aug 2020
  38. Jul 2020
    1. Keeping Assessment Relevant and "Authentic"

      Never have to answer the question, "Why are we learning this?"

      Real world applications built into learning targets

      Grades based on performance versus memorization of formulas and facts

      Authentic Assessment: measures student learning according to the application of skills during the performance of a real-world task

      Reenacting historical acts

      Let students demonstrate knowledge by doing

      1. Challenging
      2. Results in a performance or product
      3. Encourages real-world applications
      4. Self-evaluation
      5. Collaborate, discuss, and receive feedback on work

      Rubric

      I hear and I forget, I see and I remember, I do and I understand

    2. Keeping Assessment Relevant and "Authentic"

      Authentic Assessment: designed to hit the skills and needs of the population. Why did we get to the right answer? What was the process? What were the steps? What are common mistakes? Take a mistake and throw it back into class a few days later. Give incorrect answers and have them break down the thought process. Connect to real life; hands-on, experiential learning. Side coaching as assessment. Anticipate problems. Make tasks authentic to real-world tasks. Process v. product. Use assessment as a teaching tool!

    1. Defining Formative Assessment

      Yes! Formative assessment should not exist just for the grade's sake, but serve as an actual way to gauge student understanding and then to decide the next steps to take to get students to where you want them to be.

  39. Jun 2020
  40. May 2020
  41. Apr 2020
  42. Feb 2020
    1. The criteria you put in your assessment will guide students toward the content and skills you want them to learn. You might even want to get their input before you finalize the project’s assessment. Be sure that your assessment gives students lots of leeway in how they investigate and share their projects. Every project should turn out differently. As Chris Lehmann says, “If you assign a project and you get back 30 of the exact same thing, that’s not a project, that’s a recipe.”

      assessments and project based learning

  43. Jan 2020
    1. Current Assessment of SDL

      Assessment stage of paper

    Tags

    Annotators

  44. Nov 2019
    1. Validate Candidate’s Skills with Coding Assessment Test

      Test the coding skills of candidates using Wild Noodle's online coding assessment tests. Their automated online coding challenges assess developers' programming skills, and they also have an extensive pool of role-based programming and objective questions. Contact now!

    1. Author Mary Burns discusses the key elements of computer adaptive testing (CAT). CAT is defined as assessment that uses algorithms to progressively adjust test difficulty based upon learners' correct or incorrect responses. The benefits of CAT include more immediate data and often greater reliability. Types of test items are also covered to illustrate how the test can meet various levels of cognition and measure expertise. An issue related to CAT is the intensive time needed to develop multiple test items at multiple levels of cognition. Rating: 8/10
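      The core adaptive loop described above can be sketched in code. This is a minimal illustrative sketch, not the method from Burns's article: production CAT engines use full item response theory with maximum-likelihood ability estimation, whereas this toy version uses a 1-parameter logistic (Rasch) model, picks the unasked item closest to the current ability estimate, and nudges the estimate up or down by a fixed step after each response. All function and variable names are hypothetical.

      ```python
      import math

      def rasch_p(ability, difficulty):
          """Probability of a correct response under a 1PL (Rasch) model."""
          return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

      def next_item(ability, item_difficulties, asked):
          """Select the unasked item whose difficulty is closest to the
          current ability estimate (roughly the most informative item)."""
          candidates = [i for i in range(len(item_difficulties)) if i not in asked]
          return min(candidates, key=lambda i: abs(item_difficulties[i] - ability))

      def update_ability(ability, correct, step=0.5):
          """Crude update: move the estimate up after a correct response,
          down after an incorrect one (a staircase stand-in for ML estimation)."""
          return ability + step if correct else ability - step
      ```

      A test session would alternate `next_item` and `update_ability` until an item budget or precision target is reached; the staircase step would normally shrink as evidence accumulates.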

    1. The Office of Educational Technology website, featured in Section 4: Measuring of Learning, discusses the pedagogical implications of assessment shifts and how technology can enhance assessment. The site places emphasis on using assessment as a real-time measurement of learning for real-life skills. The infographic that displays traditional models of assessment next to next-generation assessments is a quick reference for shifting ideology and practices. Ultimately, increased personalized learning serves the learner more efficiently and promotes increased achievement and mastery. Rating: 8/10

  45. Oct 2019
    1. Research advances in learning and teaching over the past few decades provide a way to meet these challenges. These advances have established expertise in university teaching: a set of skills and knowledge that consistently achieve better learning outcomes than the traditional and still predominant teaching methods practiced by most faculty. Widespread recognition and adoption of these expert practices will profoundly change the nature of university teaching and have a large beneficial impact on higher education.

      Carl Wieman paper on evidence based learning implementation in the disciplines

  46. Sep 2019
    1. Explaining requires you to organize and elaborate on the ideas that you are trying to convey to your audience. Depending on your audience, you will have to provide more details and, thereby, engage in deeper processing of the information. On the other hand, if you are asked to simply retrieve ideas from a text, you may be less likely to engage in elaborate structuring or re-organization of the material – at least not to the same extent as preparing an explanation to someone else.

      benefits of activities requiring students to explain a concept to peers vs. memory recall activities like quizzes

    1. The "doer effect" is an association between the number of online interactive practice activities students' do and their learning outcomes that is not only statistically reliable but has much higher positive effects than other learning resources, such as watching videos or reading text.

      "doer effect" - interactive practice activities have greater learning benefits than watching videos or reading

    1. Although unguided or minimally guided instructional approaches are very popular and intuitively appealing, the point is made that these approaches ignore both the structures that constitute human cognitive architecture and evidence from empirical studies over the past half-century that consistently indicate that minimally guided instruction is less effective and less efficient than instructional approaches that place a strong emphasis on guidance of the student learning process.

      This paper provides a counter argument to minimally guided instruction approaches.

  47. Aug 2019
    1. a syllabus can’t mandate a particular emotional experience

      And yet, machines are being invented and put into use that attempt to measure student emotion and attention to inform assessment...

    2. Could you imagine grading students on anger as an “outcome”?

      I'm imagining a course where the only way to earn an "A" would be to become totally outraged by its end.

    3. “It is likely that the authoritarian syllabus is just the visible symptom of a deeper underlying problem, the breakdown of trust in the student-teacher relationship.”

      Yes: aligned with the mentality that students are cheating on exams, plagiarizing works, and inventing excuses for late work. In all these cases, there are things teachers can do to restructure the educational experience and stop casting blame for the inadequacies of machine graded assessments, disposable assignments, and even date-based grading.

    1. Doing this too often, however, steals valuable time away from the teacher that may reduce the quality of instruction for all the other students.

      The teacher's mental and physical health is important, yes. But arguing that allowing retakes is a detriment to your own health, even though it is a benefit to the student, is a hard sell.

      Case in point: my wife's family weathered two deaths in the same week. I left school on bereavement for one and had to extend my absence in the wake of the second death. We were in the middle of budgeting and my requests were not finalized.

      My principal could certainly have disallowed an extension because I wasn't "proactive" and didn't have it done before the due date. Instead, I was given grace and I was able to submit a better report and request because of it.

      Grace goes a long way.

    2. However, every minute writing and grading retakes or grading long-overdue work is a minute that I’m not planning effective and creative instruction, grading current work so students receive timely feedback, or communicating with parents.

      This may mean you're grading too much.

      Assessment should be focused and pointed. Narrative feedback is helpful. Allowing retakes gives you an opportunity to focus only on what needs improvement. It is not a wholesale redo of the assignment. A retake should have the student focus on the specific gap in understanding which prevents them from achieving proficiency.

    3. Under retake policies, parents at my school have expressed concerns about how overwhelmed their children become due to being caught in a vicious cycle of retakes.

      This is not caused by a retake policy itself. It is caused by either (a) not having a robust formative assessment strategy to catch struggling students or (b) not implementing reasonable checkpoints which help students learn to self-regulate.

    4. Retakes and soft deadlines allow students to procrastinate

      It is a major assumption that hard deadlines and tests prevent students from procrastinating. What disallowing retakes ends up doing is locking students into a cycle where they are actively discouraged from learning rather than taking the time to learn something.

    5. They spend hours a day on video games and social media

      Or:

      • working
      • taking care of siblings
      • taking care of other relatives
      • trying to find something to eat
      • ...
    6. In math classes, where concepts constantly build on one another, traditional policies hold students to schedules that keep them learning with the class.

      Assuming all students learn content at the same rate is dangerous. There may be fundamental math skills that take one student longer to learn than another. That may mean multiple attempts at demonstrating those skills.

      If I were to disallow retakes, even the intrinsically motivated student who struggles with fundamentals loses out on mastering the concept. I lose out on knowing that student is struggling. Retakes allow me to more fully assess a student's progress toward mastery, incrementally working on correcting errors and gaps in understanding.

      By promoting pacing over learning, we are doing our students a disservice.

    7. One of his research studies showed that college students who were held to firm deadlines performed better, in general, than students who chose their own deadlines or turned in all work at the end of the semester.

      This argument is errantly conflating two separate ideas: retakes and deadlines.

      The act of allowing a retake does not preclude the use of deadlines. Setting deadlines for initial work is important because that way, I can check student work before the major assessment. There are also deadlines for completing retakes…the end of the semester being the hard stop.

      I'm also building in structure for retakes. The fact that I allow a retake does not mean it happens when and where a student wants. They work within my defined schedule, which includes deadlines.

      Arguing against retakes because deadlines disappear assumes that they are contingent upon one another when in reality, they work together to help students develop agency and time management skills.

      This makes sense at a high level, but in reality, none of us - in school or out of school - lives in a deadline free world. I have deadlines to meet at work and if my product is not quality at the deadline, I have to do it again.

      The difference is that we cannot fire students from school.

    8. In my experience, however, the more lenient we are in these matters, the less students learn. The traditional policies—giving each assessment only once, penalizing late work, and giving zeros in some situations—help most students maximize their learning and improve their time management skills, preparing them for success in college and career.

      This statement comes with zero qualification for "in my experience." Is there research or empirical evidence that supports this statement? Are there other interventions or policies that could be used in place of allowing retakes?

      Setting up the entire post on the premise of "in my experience" makes it a hard sell to start.

  48. Jul 2019
    1. A variety of educational taxonomies have been adopted by districts and states nationwide. Examples of widely used taxonomies include but are not limited to Bloom’s Taxonomy of Educational Objectives [23]; Bloom’s revised Taxonomy for Learning, Teaching, and Assessing [24]; Marzano and Kendell’s New Taxonomy of Educational Objectives [25]; and Webb’s Depth of Knowledge Levels [26]. Using educational taxonomies to facilitate the development and guide the organization of learning objectives can improve content appropriateness, assessment effectiveness, and efficiency in learning and teaching.

      Bloom's Taxonomy

    2. How you track student progress can make a difference in their learning and your teaching.

      I will have to develop my own assessment strategies - formative and summative.

    1. Performance assessment does not have to be a time-consuming ordeal; it is a great way to assess our students' skills. It is essential to create a rubric that is simple, quick, and objective. This article discusses the process of creating a rubric as well as showing a rubric used by the author in her general music classroom for several years. Differences between assessment and evaluation are also mentioned.

      How to create a rubric for performance assessment?

    1. It is interesting to notice that this article from a decade ago doesn't even mention any online assessment. So much has changed since then! I'm glad to see that from measuring attendance and attitude we are moving toward a more professionally acceptable system where we can teach, assign and assess measurable knowledge in music ed, more specifically in choral programs.

    2. 11% for music knowledge

      Only 11% for knowledge! That is surprising, and the share could be higher if we measured not "talent" but the knowledge that is teachable and factual. Again, this is old data (1991), so today the numbers might look different.

    3. attendance and attitude were the most common grading criteria employed by instrumental and choral music teachers.

      Yes. I noticed that in schools.

    4. Some music teachers believe the creative or interpretive nature of music precludes assessment but then readily employ subjective methods of assessment, many of which "are determined haphazardly, ritualistically, and/or with disregard for available objective information" (Boyle & Radocy, 1987, p. 2).

      This is old data (1987) but still true on some levels. By now, what I see in practice is that music educators have figured out what it is that's measurable and what is not; in the school where I was student teaching, the choral program is treated as an academic subject and is graded.

    1. In prior work, we found that different student choices of learning methods (e.g., doing interactive activities, reading online text, or watching online lecture videos) are associated with learning outcomes [7]. More usage in general is associated with higher outcomes, but especially for doing activities, which has an estimated 6x greater impact on total quiz and final exam performance than reading or video watching.
  49. Apr 2019
    1. Trauma-Informed Screening and Assessment Tools

      Difference between trauma screening and trauma assessment tools: Screening tools are brief, used universally, and designed to detect exposure to traumatic events and symptoms. They help determine whether the child needs a professional, clinical, trauma-focused assessment. Functional assessments are more comprehensive and capture a range of specific information about the child’s symptoms, functioning, and support systems. A trauma assessment can determine strengths as well as clinical symptoms of traumatic stress. It assesses the severity of symptoms, and can determine the impact of trauma (how thoughts, emotions, and behaviors have been changed by trauma) on the child’s functioning in the various well-being domains.

  50. Mar 2019
    1. This article discusses the lack of research on technology-rich classrooms. The authors created a scale with which to evaluate classroom environments, tested it, and determined it was a good starting framework for how to improve those environments. This scale could be useful later in class when evaluating technologies. Rating 9/10 for helpful assessment techniques

    1. 4Vision: Preparing Learning Communities to succeed in College and Careers in a global society through technology.Vision and Goals

      This proposal outlines a draft for a technology plan for Arizona regarding adult education. This plan outlines the goals of the plan and how Arizona can address them moving forward. This plan outlines trends for the future in technology and acknowledges challenges that might come up later down the line. This plan also reviews teaching standards and instruction, as well as operations for the future. Rating 6/10 for being a draft, but with good ideas!

    1. This page is free resource to download a book about how people learn. This selected chapter provides recommendations for assessments and feedback in learning environments in general which also applies to adult learning. In addition to these examples, this chapter provides a section on theory and framework to better understand the overall topics. Rating: 10/10 Great free, open source resource with reputable information about learning.

    1. Personalized learning: how does it differ from traditional learning? Some of the text here is gray and small, which does not make it easy to read. Nonetheless, it is an infographic about personalized learning from which a fair amount of information can be learned in a short time. Rating 4/5

    1. Classroom assessment techniques: these are quick ways to complete formative assessment during a class session. The results can help the instructor determine what he or she should address, and they can unearth learner misconceptions. These were designed for college classrooms but can be used in other adult learning contexts. Rating 4/5

    1. TeachThought: this particular page is entitled "20 simple assessment strategies you can use every day," but the reason it is listed here is that the page itself serves as a jumping-off point for reviewing or learning about many educational theories and principles. This site may have been designed for K-12 teachers - I am not sure - but it is quite usable for those who teach adults. This is a place to come if you are interested in browsing, not if you have a specific thing you need at the moment. Rating 3/5

    1. This 69 page PDF offers good advice on writing a variety of types of test questions. It is called "Is this a trick question?" Despite the length of the PDF, it is easy to browse if you are interested in writing a specific type of question. As may be suggested by the length, this resource is more comprehensive than others. Rating 5/5