  1. Jul 2020
    1. Fontanet, A., Grant, R., Tondeur, L., Madec, Y., Grzelak, L., Cailleau, I., Ungeheuer, M.-N., Renaudat, C., Pellerin, S. F., Kuhmel, L., Staropoli, I., Anna, F., Charneau, P., Demeret, C., Bruel, T., Schwartz, O., & Hoen, B. (2020). SARS-CoV-2 infection in primary schools in northern France: A retrospective cohort study in an area of high transmission. medRxiv, 2020.06.25.20140178. https://doi.org/10.1101/2020.06.25.20140178

    1. Fontanet, A., Tondeur, L., Madec, Y., Grant, R., Besombes, C., Jolly, N., Pellerin, S. F., Ungeheuer, M.-N., Cailleau, I., Kuhmel, L., Temmam, S., Huon, C., Chen, K.-Y., Crescenzo, B., Munier, S., Demeret, C., Grzelak, L., Staropoli, I., Bruel, T., … Hoen, B. (2020). Cluster of COVID-19 in northern France: A retrospective closed cohort study. medRxiv, 2020.04.18.20071134. https://doi.org/10.1101/2020.04.18.20071134

    1. Sapoval, N., Mahmoud, M., Jochum, M. D., Liu, Y., Elworth, R. A. L., Wang, Q., Albin, D., Ogilvie, H., Lee, M. D., Villapol, S., Hernandez, K., Berry, I. M., Foox, J., Beheshti, A., Ternus, K., Aagaard, K. M., Posada, D., Mason, C., Sedlazeck, F. J., & Treangen, T. J. (2020). Hidden genomic diversity of SARS-CoV-2: Implications for qRT-PCR diagnostics and transmission. bioRxiv, 2020.07.02.184481. https://doi.org/10.1101/2020.07.02.184481

  2. Jun 2020
    1. Kucharski, A. J., Klepac, P., Conlan, A. J. K., Kissler, S. M., Tang, M. L., Fry, H., Gog, J. R., Edmunds, W. J., Emery, J. C., Medley, G., Munday, J. D., Russell, T. W., Leclerc, Q. J., Diamond, C., Procter, S. R., Gimma, A., Sun, F. Y., Gibbs, H. P., Rosello, A., … Simons, D. (2020). Effectiveness of isolation, testing, contact tracing, and physical distancing on reducing transmission of SARS-CoV-2 in different settings: A mathematical modelling study. The Lancet Infectious Diseases, 0(0). https://doi.org/10.1016/S1473-3099(20)30457-6

    1. (((Howard Forman))) on Twitter: “#Italy remains one of the worst outbreaks & one of the best & most consistent responses to lockdown/NPI measures. 0.6% positive rate; STILL testing at rate of greater than 1/1000 each day. The US is NOT currently on this path. (some regions are). 33K fatalities. https://t.co/5lWdXMdlEf”. (n.d.). Twitter. Retrieved June 2, 2020, from https://twitter.com/thehowie/status/1266873463681298433

    1. It is not customary in Rails to run the full test suite before pushing changes. The railties test suite in particular takes a long time, and takes an especially long time if the source code is mounted in /vagrant as happens in the recommended workflow with the rails-dev-box. As a compromise, test what your code obviously affects, and if the change is not in railties, run the whole test suite of the affected component. If all tests are passing, that's enough to propose your contribution.
    1. Pell, Samantha, Candace Buckner, and Jacqueline Dupree. ‘Coronavirus Hospitalizations Rise Sharply in Several States Following Memorial Day’. Washington Post. Accessed 10 June 2020. https://www.washingtonpost.com/health/2020/06/09/coronavirus-hospitalizations-rising/.

  3. May 2020
    1. Döhla, M., Boesecke, C., Schulte, B., Diegmann, C., Sib, E., Richter, E., Eschbach-Bludau, M., Aldabbagh, S., Marx, B., Eis-Hübinger, A.-M., Schmithausen, R. M., & Streeck, H. (2020). Rapid point-of-care testing for SARS-CoV-2 in a community screening setting shows low sensitivity. Public Health. https://doi.org/10.1016/j.puhe.2020.04.009

    1. Mei, X., Lee, H.-C., Diao, K., Huang, M., Lin, B., Liu, C., Xie, Z., Ma, Y., Robson, P. M., Chung, M., Bernheim, A., Mani, V., Calcagno, C., Li, K., Li, S., Shan, H., Lv, J., Zhao, T., Xia, J., … Yang, Y. (2020). Artificial intelligence for rapid identification of the coronavirus disease 2019 (COVID-19). medRxiv, 2020.04.12.20062661. https://doi.org/10.1101/2020.04.12.20062661

    1. The test is being marked as skipped because it has randomly failed. How much confidence do we have in that test and the feature in the first place?
    2. “Make it work” means shipping something that doesn’t break. The code might be ugly and difficult to understand, but we’re delivering value to the customer and we have tests that give us confidence. Without tests, it’s hard to answer “Does this work?”
    1. It involves continuously building, testing, and deploying code changes at every small iteration, reducing the chance of developing new code based on bugged or failed previous versions.
    1. Arons, M. M., Hatfield, K. M., Reddy, S. C., Kimball, A., James, A., Jacobs, J. R., Taylor, J., Spicer, K., Bardossy, A. C., Oakley, L. P., Tanwar, S., Dyal, J. W., Harney, J., Chisty, Z., Bell, J. M., Methner, M., Paul, P., Carlson, C. M., McLaughlin, H. P., … Jernigan, J. A. (2020). Presymptomatic SARS-CoV-2 Infections and Transmission in a Skilled Nursing Facility. New England Journal of Medicine, NEJMoa2008457. https://doi.org/10.1056/NEJMoa2008457

    1. Gudbjartsson, D. F., Helgason, A., Jonsson, H., Magnusson, O. T., Melsted, P., Norddahl, G. L., Saemundsdottir, J., Sigurdsson, A., Sulem, P., Agustsdottir, A. B., Eiriksdottir, B., Fridriksdottir, R., Gardarsdottir, E. E., Georgsson, G., Gretarsdottir, O. S., Gudmundsson, K. R., Gunnarsdottir, T. R., Gylfason, A., Holm, H., … Stefansson, K. (2020). Spread of SARS-CoV-2 in the Icelandic Population. New England Journal of Medicine, NEJMoa2006100. https://doi.org/10.1056/NEJMoa2006100

    1. In a classroom or professional setting, an expert might perform some of these tasks for a learner (Metacognitive supports as cognitive scaffolding), but when a learner’s on their own, these metacognitive activities may be taxing or beyond reach.

      In a classroom setting a teacher may perform many of the metacognitive tasks that are necessary for the student to learn. E.g. they may take over monitoring for confusion as well as testing the students to evaluate their understanding.

    1. Shweta, F., Murugadoss, K., Awasthi, S., Venkatakrishnan, A., Puranik, A., Kang, M., Pickering, B. W., O’Horo, J. C., Bauer, P. R., Razonable, R. R., Vergidis, P., Temesgen, Z., Rizza, S., Mahmood, M., Wilson, W. R., Challener, D., Anand, P., Liebers, M., Doctor, Z., … Badley, A. D. (2020). Augmented Curation of Unstructured Clinical Notes from a Massive EHR System Reveals Specific Phenotypic Signature of Impending COVID-19 Diagnosis [Preprint]. Infectious Diseases (except HIV/AIDS). https://doi.org/10.1101/2020.04.19.20067660

    1. I try to write a unit test any time the expected value of a defect is non-trivial.

      Write unit tests at least for the most important parts of code, but every chunk of code should have a trivial unit test around it – this verifies that the code is written in a testable way, which indeed is extremely important

    2. I’m defining an integration test as a test where you’re calling code that you don’t own

      When to write integration tests (a sketch follows the list):

      • importing code you don't own
      • when you can't trust the code you don't own
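
      A minimal Python sketch of this idea: an integration-style test that pins down behaviour we rely on from code we don't own (here the standard-library json module; the payload is invented):

      import json

      def test_json_roundtrip_preserves_payload():
          # We don't own json, but our code relies on this round-trip
          # behaviour, so we verify the assumption against the real library.
          payload = {"id": 42, "tags": ["a", "b"], "active": True}
          assert json.loads(json.dumps(payload)) == payload
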
    1. A few takeaways

      Summarising the article:

      • Types and tests save you from stupid mistakes; they're gifts for your future self!
      • Use ESLint and configure it to be your strict, but fair, friend.
      • Think of tests as a permanent console.
      • Types are not only about checks; they also improve code readability.
      • Testing with each commit means fewer surprises when merging Pull Requests.
  4. Apr 2020
    1. I would use ESLint at full strength, write tests for some parts (especially end-to-end tests, to make sure a commit does not crash the project), and add continuous integration.

      Advantage of tests

    2. Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.

      According to Kernighan's Law, debugging code is twice as hard as writing it in the first place.

    3. Creating meticulous tests before exploring the data is a big mistake, and will result in a well-crafted garbage-in, garbage-out pipeline. We need an environment flexible enough to encourage experiments, especially in the initial phase.

      The overzealous nature of TDD may discourage exploratory data science.

    1. Peto, J., Alwan, N. A., Godfrey, K. M., Burgess, R. A., Hunter, D. J., Riboli, E., Romer, P., Buchan, I., Colbourn, T., Costelloe, C., Smith, G. D., Elliott, P., Ezzati, M., Gilbert, R., Gilthorpe, M. S., Foy, R., Houlston, R., Inskip, H., Lawlor, D. A., … Yao, G. L. (2020). Universal weekly testing as the UK COVID-19 lockdown exit strategy. The Lancet, 0(0). https://doi.org/10.1016/S0140-6736(20)30936-3

    1. To customize settings for debugging tests, you can specify "request":"test" in the launch.json file in the .vscode folder from your workspace.

      Customising settings for debugging tests. The relevant commands are:

      Python: Debug All Tests

      or

      Python: Debug Test Method

    2. For example, the test_decrement functions given earlier are failing because the assertion itself is faulty.

      Debugging tests themselves

      1. Set a breakpoint on the first line of the failing function (e.g. test_decrement)
      2. Click the "Debug Test" option above the function
      3. Open the Debug Console and type inc_dec.decrement(3) to see the actual output for x = 3
      4. Stop the debugger and correct the tests
      5. Save the test file and run the tests again to look for a positive result
    3. Support for running tests in parallel with pytest is available through the pytest-xdist package.

      pytest-xdist provides support for parallel testing.

      1. To enable it on Windows:

      py -3 -m pip install pytest-xdist

      2. Create a file pytest.ini in your project directory and specify in it the number of CPUs to be used (e.g. 4):
        [pytest]
        addopts=-n4
        
      3. Run your tests
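
      Instead of pytest.ini, the worker count can also be passed directly on the command line (a one-off equivalent):

      pytest -n 4
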
    4. With pytest, failed tests also appear in the Problems panel, where you can double-click on an issue to navigate directly to the test

      pytest also displays failed tests in the Problems panel

    5. VS Code also shows test results in the Python Test Log output panel (use the View > Output menu command to show the Output panel, then select Python Test Log).

      Another way to view the test results:

      View > Output > Python Test Log

    6. For Python, test discovery also activates the Test Explorer with an icon on the VS Code activity bar. The Test Explorer helps you visualize, navigate, and run tests

      Test Explorer is activated after discovering tests in Python:

    7. Once VS Code recognizes tests, it provides several ways to run those tests

      After discovering tests, we can run them, for example, using CodeLens:

    8. You can trigger test discovery at any time using the Python: Discover Tests command.

      Setting python.testing.autoTestDiscoverOnSaveEnabled to true makes VS Code discover tests automatically whenever a test file is saved.

      If discovery succeeds, the status bar shows Run Tests instead:

    9. Sometimes tests placed in subfolders aren't discovered because such test files cannot be imported. To make them importable, create an empty file named __init__.py in that folder.

      Tip for when tests aren't discoverable in subfolders: create an empty __init__.py file in that folder

    10. Testing in Python is disabled by default. To enable testing, use the Python: Configure Tests command on the Command Palette.

      Start testing in VS Code by using Python: Configure Tests (it automatically chooses one testing framework and disables the rest).

      Otherwise, you can configure tests manually by setting only one of the following to true:

      • python.testing.unittestEnabled
      • python.testing.pytestEnabled
      • python.testing.nosetestsEnabled
    11. python.testing.pytestArgs: Looks for any Python (.py) file whose name begins with "test_" or ends with "_test", located anywhere within the current folder and all subfolders.

      Default behaviour of test discovery by pytest framework

    12. python.testing.unittestArgs: Looks for any Python (.py) file with "test" in the name in the top-level project folder.

      Default behaviour of test discovery by unittest framework

    13. Create a file named test_unittest.py that contains a test class with two test methods

      Sample test file using unittest framework. inc_dec is the file that's being tested:

      import inc_dec    # The code to test
      import unittest   # The test framework
      
      class Test_TestIncrementDecrement(unittest.TestCase):
          def test_increment(self):
              self.assertEqual(inc_dec.increment(3), 4) # checks that the result is 4 when x = 3
      
          def test_decrement(self):
              self.assertEqual(inc_dec.decrement(3), 4) # intentionally faulty: decrement(3) actually returns 2
      
      if __name__ == '__main__':
          unittest.main()
      
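      A minimal inc_dec.py consistent with these tests (the docs don't show the module itself, so this is an assumed implementation):

      # inc_dec.py -- assumed implementation of the code under test
      def increment(x):
          return x + 1

      def decrement(x):
          return x - 1  # hence decrement(3) returns 2 and the faulty assertion above fails
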
    14. Each test framework has its own conventions for naming test files and structuring the tests within, as described in the following sections. Each case includes two test methods, one of which is intentionally set to fail for the purposes of demonstration.
      • each testing framework has its own naming conventions
      • each case includes two test methods (one of which intentionally fails, for demonstration)
    15. Nose2, the successor to Nose, is just unittest with plugins

      Nose2 testing

    16. Python tests are Python classes that reside in separate files from the code being tested.
    17. For general background on unit testing, see Unit Testing on Wikipedia. For a variety of useful unit test examples, see https://github.com/gwtw/py-sorting
    18. Running the unit test early and often means that you quickly catch regressions, which are unexpected changes in the behavior of code that previously passed all its unit tests.

      Regressions

    19. Developers typically run unit tests even before committing code to a repository; gated check-in systems can also run unit tests before merging a commit.

      When to run unit tests:

      • before committing
      • ideally before merging
      • many CI systems run them after every build
    20. in unit testing you avoid external dependencies and use mock data or otherwise simulated inputs

      Unit tests exercise small, isolated pieces of code, which makes them quick and inexpensive to run
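
      A minimal sketch of that isolation using Python's unittest.mock (the function, endpoint, and data are hypothetical):

      from unittest.mock import patch

      import requests

      # Hypothetical unit under test: it depends on an external HTTP service
      def fetch_user_name(user_id):
          response = requests.get(f"https://api.example.com/users/{user_id}")
          return response.json()["name"]

      @patch("requests.get")
      def test_fetch_user_name(mock_get):
          # The network is never touched; the dependency is simulated
          mock_get.return_value.json.return_value = {"name": "Ada"}
          assert fetch_user_name(1) == "Ada"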

    21. The practice of test-driven development is where you actually write the tests first, then write the code to pass more and more tests until all of them pass.

      Essence of TDD
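
      A tiny illustration of that loop in Python (the slugify example is invented for this sketch):

      # Step 1 (red): write the test first; it fails because slugify doesn't exist yet
      def test_slugify():
          assert slugify("Hello World") == "hello-world"

      # Step 2 (green): write just enough code to make the test pass,
      # then refactor and repeat with further tests
      def slugify(text):
          return text.lower().replace(" ", "-")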

    22. The combined results of all the tests are your test report

      test report

    23. each test is very simple: invoke the function with an argument and assert the expected return value.

      e.g. test of an exact number entry:

          def test_validator_valid_string():
              # The exact assertion call depends on the framework as well;
              # in plain pytest style it is a bare assert:
              assert validate_account_number_format("1234567890")
      
    24. Unit tests are concerned only with the unit's interface—its arguments and return values—not with its implementation
    25. unit is a specific piece of code to be tested, such as a function or a class. Unit tests are then other pieces of code that specifically exercise the code unit with a full range of different inputs, including boundary and edge cases.

      Essence of unit testing
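
      Continuing the account-number example above, exercising a unit across a range of inputs might look like this (the exact validation rules are assumed, e.g. exactly ten digits):

          def test_validator_edge_cases():
              # Boundary and edge cases; assumed rule: exactly 10 digits
              assert not validate_account_number_format("")              # empty input
              assert not validate_account_number_format("123456789")     # one digit short
              assert not validate_account_number_format("12345678901")   # one digit long
              assert not validate_account_number_format("12345abcde")    # non-digit characters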

    1. SMOKE TESTING is a type of software testing that determines whether the deployed build is stable or not.

      stable or not
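
      A minimal smoke-test sketch in Python (the health-check URL is a placeholder): one cheap check that the deployed build responds at all before anything deeper runs:

      import requests

      def test_smoke_build_is_up():
          # If even this fails, the build is unstable and further testing is pointless
          response = requests.get("https://staging.example.com/health", timeout=5)
          assert response.status_code == 200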

    1. Then the programmer(s) will go over the scenarios, refining the steps for clarification and increased testability. The result is then reviewed by the domain expert to ensure the intent has not been compromised by the programmers’ reworking.
    1. Enable Frictionless Collaboration: CucumberStudio empowers the whole team to read and refine executable specifications without needing technical tools. Business and technology teams can collaborate on acceptance criteria and bridge the gap between them.
    1. We really only need to test that the button gets rendered at all (we don’t care about what the label says — it may say different things in different languages, depending on user locale settings). We do want to make sure that the correct number of clicks gets displayed

      An example of how to think about tests. Also asserting against text that's variable isn't very useful.

  5. Mar 2020
    1. Methods must be tested both via a Lemon unit test and as a QED demo. The Lemon unit tests are for testing a method in detail whereas the QED demos are for demonstrating usage.
    1. Standardized test scores improved dramatically. In 2006, only 10% of Noyes' students scored "proficient" or "advanced" in math on the standardized tests required by the federal No Child Left Behind law. Two years later, 58% achieved that level. The school showed similar gains in reading. Because of the remarkable turnaround, the U.S. Department of Education named the school in northeast Washington a National Blue Ribbon School. Noyes was one of 264 public schools nationwide given that award in 2009.

      Michelle Rhee, then chancellor of D.C. schools, took a special interest in Noyes. She touted the school, which now serves preschoolers through eighth-graders, as an example of how the sweeping changes she championed could transform even the lowest-performing Washington schools. Twice in three years, she rewarded Noyes' staff for boosting scores: In 2008 and again in 2010, each teacher won an $8,000 bonus, and the principal won $10,000.

      A closer look at Noyes, however, raises questions about its test scores from 2006 to 2010. Its proficiency rates rose at a much faster rate than the average for D.C. schools. Then, in 2010, when scores dipped for most of the district's elementary schools, Noyes' proficiency rates fell further than average.
    1. Atlanta’s rampant test manipulation amplified calls for nationwide education reform. Seven years after the Atlanta Journal-Constitution first reported on testing problems, policymakers have failed to make significant progress toward changing the way students take standardized tests and how teachers interpret those scores. In fact, the problem has worsened, resulting in documented cheating in at least 40 states, since the APS cheating scandal first came to light.

      “Atlanta is the tip of the iceberg,” says Bob Schaeffer, public education director of FairTest, a nonprofit opposed to current testing standards. “Cheating is a predictable outcome of what happens when public policy puts too much pressure on test scores.”

      Some experts, including Schaeffer, point to the No Child Left Behind Act of 2001 as a source of today’s testing problems, though others say the woes predated the law. Then-president George W Bush, who signed the measure in January 2002, aimed to boost national academic performance and close the achievement gap between white and minority students. To make that happen, the law relied upon standardized tests designed to hold teachers accountable for classroom improvements. Federal funding hinged on school improvements, as did the future of the lowest-performing schools.

      But teachers in many urban school districts already faced enormous challenges that fell outside their control – including high poverty, insufficient food access, and unstable family situations. Though high-stakes testing increased student achievement in some schools, the federal mandate turned an already-difficult challenge into a feat some considered insurmountable.

      The pressure led to problems. Dr Beverly Hall, the former APS superintendent who was praised for turning around student performance, was later accused of orchestrating the cheating operation. During her tenure, Georgia investigators found 178 educators had inflated test scores at 44 elementary and middle schools.
    1. Atlanta public schools. The urban school district has already suffered one of the most devastating standardized-testing scandals of recent years. A state investigation in 2011 found that 178 principals and teachers in the city school district were involved in cheating on standardized tests. Dozens of former employees of the school district have either been fired or have resigned, and 21 educators have pleaded guilty to crimes like obstruction and making false statements.
    1. For automated testing, include the parameter is_test=1 in your tests. That will tell Akismet not to change its behaviour based on those API calls – they will have no training effect. That means your tests will be somewhat repeatable, in the sense that one test won’t influence subsequent calls.
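
      A sketch of such a call in Python (the endpoint follows the public Akismet comment-check API; the key and URLs are placeholders):

      import requests

      API_KEY = "your-api-key"  # placeholder

      response = requests.post(
          f"https://{API_KEY}.rest.akismet.com/1.1/comment-check",
          data={
              "blog": "https://example.com",  # placeholder
              "user_ip": "127.0.0.1",
              "comment_content": "test comment",
              "is_test": 1,  # no training effect, so tests stay repeatable
          },
      )
      print(response.text)  # "true" (spam) or "false" (not spam)
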
    1. I would like to make an appeal to core developers: all design decisions involving involuntary session creation MUST be made with great caution.

      In the case of a high-load project, avoiding session creation for non-authenticated users is a vital strategy with a critical influence on application performance. It doesn't really make a big difference whether you use a database backend, or Redis, or whatever else; eventually, your load will be high enough that scaling further no longer helps, and either network access to the session backend or its “INSERT” performance will become a bottleneck.

      In my case, it's an application with a 20-25 ms response time under a 20000-30000 RPM load. Having to create a session for each session-less request would be critical enough to decide not to upgrade Django, or to fork and rewrite the corresponding components.
  6. Feb 2020
    1. I had created a bunch of annotations on: https://loadimpact.com/our-beliefs/ https://hyp.is/bYpY5lKoEeqO_HdxChFU0Q/loadimpact.com/our-beliefs/

      But when I click "Visit annotations in context"

      Hypothesis shows an error:

      Annotation unavailable: The current URL links to an annotation, but that annotation cannot be found, or you do not have permission to view it.

      How do I edit my existing annotations for the previous URL and update them to reference the new URL instead?

    1. Performance Benchmarking What it is: Testing a system under certain reproducible conditions Why do it: To establish a baseline which can be tested against regularly to ensure a system’s performance remains constant, or validate improvements as a result of change Answers the question: “How is my app performing, and how does that compare with the past?”