  1. Jun 2020
    1. Guan, D., Wang, D., Hallegatte, S., Davis, S. J., Huo, J., Li, S., Bai, Y., Lei, T., Xue, Q., Coffman, D., Cheng, D., Chen, P., Liang, X., Xu, B., Lu, X., Wang, S., Hubacek, K., & Gong, P. (2020). Global supply-chain effects of COVID-19 control measures. Nature Human Behaviour, 1–11. https://doi.org/10.1038/s41562-020-0896-8

    1. Simpson, C. R., Thomas, B. D., Challen, K., De Angelis, D., Fragaszy, E., Goodacre, S., Hayward, A., Lim, W. S., Rubin, G. J., Semple, M. G., & Knight, M. (2020). The UK hibernated pandemic influenza research portfolio: Triggered for COVID-19. The Lancet Infectious Diseases, S1473309920303984. https://doi.org/10.1016/S1473-3099(20)30398-4

  2. May 2020
  3. Apr 2020
    1. Wynants, L., Van Calster, B., Bonten, M. M. J., Collins, G. S., Debray, T. P. A., De Vos, M., Haller, M. C., Heinze, G., Moons, K. G. M., Riley, R. D., Schuit, E., Smits, L. J. M., Snell, K. I. E., Steyerberg, E. W., Wallisch, C., & van Smeden, M. (2020). Prediction models for diagnosis and prognosis of covid-19 infection: Systematic review and critical appraisal. BMJ, m1328. https://doi.org/10.1136/bmj.m1328

  4. Feb 2020
    1. Reverse engineering a bronze cannon from the La Belle shipwreck

      The benefit to archaeology, museum curation, and other areas presented by computer modeling and 3D printing cannot be overstated. These technologies allow us to explore artifacts, sites, and more, in ways that we never could before.

  5. Jan 2020
  6. Aug 2019
  7. Apr 2019
    1. behave appropriately

      Students feed off of adult behavior. If they see that a teacher is having positive interactions and supporting others, they will respond accordingly. If they see the opposite, they will think that maladaptive behavior and negative interactions are appropriate.

  8. Mar 2019
    1. such as scope, simplicity, fruitfulness, accuracy

      Theories can be measured according to multiple metrics. The current default appears to be predictive accuracy, but this passage lists others, such as scope. If theory A predicts more accurately but over a narrower domain, and theory B predicts less accurately (within A's domain) but applies far more broadly, which is the better theory?

      Other metrics might relate to simplicity and the like. For example, if a theory is numerical but not explanatory (such as scaling laws or the results of statistical fitting), that theory might be useful but not satisfying.

  9. Feb 2019
    1. The Two Sides of the H-LAM/T System

      When I view this diagram, I am reminded of Robert Rosen's Modeling Relation, an image of which is here. The Modeling Relation grew out of research in Relational Biology, which was the first mathematical biology to recognize that relations among organism components, and between those components and the environment, are key to understanding complex adaptive systems.

  10. Nov 2018
    1. For the second, we could try to detect inconsistencies, either by inspecting samples of the class hierarchy

      Yes, that's what I do when doing quality work on the taxonomy (with the tool wdtaxonomy)
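
      As a rough illustration of that kind of inspection (not the wdtaxonomy tool itself), here is a minimal Python sketch that pulls a small slice of the subclass-of (P279) hierarchy from the public Wikidata SPARQL endpoint for manual review; the query and limit are just examples:

      ```python
      import requests

      # Sample a handful of subclass-of (P279) statements for manual inspection.
      query = """
      SELECT ?cls ?clsLabel ?parent ?parentLabel WHERE {
        ?cls wdt:P279 ?parent .
        SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
      }
      LIMIT 20
      """
      resp = requests.get(
          "https://query.wikidata.org/sparql",
          params={"query": query, "format": "json"},
          headers={"User-Agent": "taxonomy-inspection-sketch/0.1"},
      )
      for row in resp.json()["results"]["bindings"]:
          print(row["clsLabel"]["value"], "-> subclass of ->", row["parentLabel"]["value"])
      ```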

    2. Possible relations between Items

      This only includes properties of data-type item?! It should be made clearer, because the majority of Wikidata properties have other data types.

    3. A KG typically spans across several domains and is built on top of a conceptual schema, or ontology, which defines what types of entities (classes) are allowed in the graph, alongside the types of properties they can have

      Wikidata differs from a typical KG in that it is not built on top of classes (entity types). Any item (entity) can be connected by any property. Wikidata's only strict "classes" in the sense of KG classes are its data types (item, lemma, monolingual string...).

  11. Jul 2018
  12. course-computational-literary-analysis.netlify.com
    1. Having heard the story of the past, my next inquiries (still inquiries after Rachel!) advanced naturally to the present time. Under whose care had she been placed after leaving Mr. Bruff’s house? and where was she living now?

      Blake's account of Rachel is clearly distinct from the other narrators' accounts because of their romantic past. He mentions her frequently throughout his narrative. I would like to run a frequency count of the number of times he mentions Rachel compared to the rest of the narratives in the book. I wonder if it is possible to isolate the discussions of Rachel in each character's narrative and then do some topic modeling with the extracted texts to examine how Rachel is discussed by each character.
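
      A minimal sketch of that frequency count, assuming the novel's text has already been split into one string per narrator (the dictionary below is just a placeholder):

      ```python
      import re
      from collections import Counter

      # Placeholder: in practice each value would hold the full text of that narrator's section.
      narratives = {
          "Betteredge": "<full text of Betteredge's narrative>",
          "Miss Clack": "<full text of Miss Clack's narrative>",
          "Franklin Blake": "<full text of Franklin Blake's narrative>",
      }

      mentions = Counter()
      for narrator, text in narratives.items():
          # Whole-word, case-insensitive count of "Rachel" in this narrator's section.
          mentions[narrator] = len(re.findall(r"\bRachel\b", text, flags=re.IGNORECASE))

      for narrator, count in mentions.items():
          words = max(len(narratives[narrator].split()), 1)
          print(f"{narrator}: {count} mentions ({1000 * count / words:.2f} per 1,000 words)")
      ```

      Normalizing by narrative length makes the narrators comparable despite their different section lengths; the isolated Rachel passages could then feed the topic-modeling step.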

  13. Mar 2018
  14. Aug 2017
    1. Thus, predicting species responses to novel climates is problematic, because we often lack sufficient observational data to fully determine in which climates a species can or cannot grow (Figure 3). Fortunately, the no-analog problem only affects niche modeling when (1) the envelope of observed climates truncates a fundamental niche and (2) the direction of environmental change causes currently unobserved portions of a species' fundamental niche to open up (Figure 5). Species-level uncertainties accumulate at the community level owing to ecological interactions, so the composition and structure of communities in novel climate regimes will be difficult to predict. Increases in atmospheric CO2 should increase the temperature optimum for photosynthesis and reduce sensitivity to moisture stress (Sage and Coleman 2001), weakening the foundation for applying present empirical plant–climate relationships to predict species' responses to future climates. At worst, we may only be able to predict that many novel communities will emerge and surprises will occur. Mechanistic ecological models, such as dynamic global vegetation models (Cramer et al. 2001), are in principle better suited for predicting responses to novel climates. However, in practice, most such models include only a limited number of plant functional types (and so are not designed for modeling species-level responses), or they are partially parameterized using modern ecological observations (and thus may have limited predictive power in no-analog settings).

      Very nice summary of some of the challenges to using models of contemporary species distributions for forecasting changes in distribution.

    2. In eastern North America, the high pollen abundances of temperate tree taxa (Fraxinus, Ostrya/Carpinus, Ulmus) in these highly seasonal climates may be explained by their position at the edge of the current North American climate envelope (Williams et al. 2006; Figure 3). This pattern suggests that the fundamental niches for these taxa extend beyond the set of climates observed at present (Figure 3), so that these taxa may be able to sustain more seasonal regimes than exist anywhere today (eg Figure 1), as long as winter temperatures do not fall below the −40°C mean daily freezing limit for temperate trees (Sakai and Weiser 1973).

      Recognizing where species are relative to the observed climate range will be important for understanding their potential response to changes in climate. This information should be included when using distribution models to predict changes in species distributions. Ideally this information could be used in making point estimates, but at a minimum understanding its impact on uncertainty would be a step forward.
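
      A toy sketch of how one might flag this, using simulated climate values rather than real data (the variables and thresholds are illustrative only):

      ```python
      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated July temperatures (degrees C): all climate grid cells vs. the cells
      # where a hypothetical species occurs.
      available = rng.normal(20, 6, size=5000)
      occupied = available[(available > 14) & (available < 38)]

      # If the occupied climates run right up to the edge of the *observed* climate
      # range, the fitted niche is probably truncated there and extrapolating the
      # distribution model beyond that edge is risky.
      cold_truncated = occupied.min() <= np.percentile(available, 1)
      warm_truncated = occupied.max() >= np.percentile(available, 99)
      print("cold edge possibly truncated:", cold_truncated)
      print("warm edge possibly truncated:", warm_truncated)
      ```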

  15. Apr 2017
    1. if your goal is word representation learning, you should consider both NCE and negative sampling

      I wonder if anyone has compared these two approaches.
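
      For reference, a small numerical sketch (with made-up scores and noise probabilities) of how the two objectives differ: negative sampling classifies data against noise using the raw scores, while NCE corrects each score by log(k * P_n(w)).

      ```python
      import numpy as np

      def sigmoid(x):
          return 1.0 / (1.0 + np.exp(-x))

      # Made-up model scores s(w, c) for one true (word, context) pair and k noise
      # words, plus the noise-distribution probabilities P_n(w) of those words.
      s_pos, s_neg = 2.0, np.array([-1.0, 0.5, -0.3])
      q_pos, q_neg = 1e-4, np.array([1e-3, 5e-4, 2e-4])
      k = len(s_neg)

      # Negative sampling: data-vs-noise classification on the raw scores.
      ns_loss = -(np.log(sigmoid(s_pos)) + np.log(sigmoid(-s_neg)).sum())

      # NCE: the same classification, but each score is shifted by log(k * P_n(w)),
      # which is what ties the estimate back to the normalized model.
      nce_loss = -(np.log(sigmoid(s_pos - np.log(k * q_pos)))
                   + np.log(sigmoid(-(s_neg - np.log(k * q_neg)))).sum())

      print(f"negative sampling loss: {ns_loss:.3f}, NCE loss: {nce_loss:.3f}")
      ```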

  16. Jan 2017
    1. To simulate equilibrium sagebrush cover under projected future climate, we applied average projected changes in precipitation and temperature to the observed climate time series. For each GCM and RCP scenario combination, we calculated average precipitation and temperature over the 1950–2000 time period and the 2050–2098 time period. We then calculated the absolute change in temperature between the two time periods (ΔT) and the proportional change in precipitation between the two time periods (ΔP) for each GCM and RCP scenario combination. Lastly, we applied ΔT and ΔP to the observed 28-year climate time series to generate a future climate time series for each GCM and RCP scenario combination. These generated climate time series were used to simulate equilibrium sagebrush cover.

      This is an interesting approach to forecasting future climate values with variation.

      1. Use GCMs to predict long-term change in climate condition
      2. Add this change to the observed time-series
      3. Simulate off of this adjusted time-series

      Given that short-term variability may be important, that it is not the focus of the long-term GCM models, and that the goal here is modeling equilibrium (not transitional) dynamics, this seems like a nice compromise for capturing both long-term and short-term variation in climate.
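
      A minimal sketch of those three steps with made-up numbers (the observed series and the GCM deltas below are placeholders):

      ```python
      import numpy as np

      # Observed historical climate series (placeholder values).
      obs_temp = np.array([7.1, 8.3, 6.9, 7.8])         # mean temperature, degrees C
      obs_ppt = np.array([310.0, 280.0, 350.0, 300.0])  # precipitation, mm

      # Step 1: long-term change for one GCM and RCP combination (future mean minus
      # historical mean for temperature; future/historical ratio for precipitation).
      delta_T = 2.4   # additive change in temperature (degrees C)
      delta_P = 0.92  # proportional change in precipitation

      # Steps 2-3: apply the deltas to the observed series, preserving its
      # year-to-year variability, then simulate from the adjusted series.
      future_temp = obs_temp + delta_T
      future_ppt = obs_ppt * delta_P
      print(future_temp, future_ppt)
      ```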

    2. Our process model (in Eq. (2)) includes a log transformation of the observations (log(y_{t-1})). Thus, our model does not accommodate zeros. Fortunately, we had very few instances where pixels had 0% cover at time t − 1 (n = 47, which is 0.01% of the data set). Thus, we excluded those pixels from the model fitting process. However, when simulating the process, we needed to include possible transitions from zero to nonzero percent cover. We fit an intercept-only logistic model to estimate the probability of a pixel going from zero to nonzero cover: y_i ∼ Bernoulli(μ_i) (Eq. 8) and logit(μ_i) = b_0 (Eq. 9), where y is a vector of 0s and 1s corresponding to whether a pixel was colonized (>0% cover) or not (remains at 0% cover) and μ_i is the expected probability of colonization as a function of the mean probability of colonization (b_0). We fit this simple model using the “glm” command in R (R Core Team 2014). For data sets in which zeros are more common and the colonization process more important, the same spatial statistical approach we used for our cover change model could be applied and covariates such as cover of neighboring cells could be included.

      This seems like a perfectly reasonable approach in this context. As models like this are scaled up to larger spatial extents, the proportion of locations with zero abundance will increase, so generalizing this approach will require a different way of handling zeros.
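
      A small sketch of that intercept-only colonization model with a made-up 0/1 vector; the paper fits it with R's glm, and for this trivial case the maximum-likelihood estimate is just the logit of the observed colonization frequency:

      ```python
      import numpy as np

      # Made-up data: 1 = pixel went from 0% to >0% cover, 0 = stayed at 0% cover.
      y = np.array([0, 0, 1, 0, 0, 0, 0, 1, 0, 0])

      # Intercept-only logistic regression: the fitted probability is the sample mean,
      # and b0 is its logit (equivalent to glm(y ~ 1, family = binomial) in R).
      p_hat = y.mean()
      b0_hat = np.log(p_hat / (1.0 - p_hat))
      print(f"P(colonization) = {p_hat:.2f}, intercept b0 = {b0_hat:.2f}")
      ```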

    3. Our approach models interannual changes in plant cover as a function of seasonal climate variables. We used daily historic weather data for the center of our study site from the NASA Daymet data set (available online: http://daymet.ornl.gov/). The Daymet weather data are interpolated between coarse observation units and capture some spatial variation. We relied on weather data for the centroid of our study area.

      This seems to imply that only a single environmental time series was used across all of the spatial locations. That is reasonable given the spatial extent of the data, but location-specific environmental time series will be necessary for this approach to generalize to large spatial extents.

    4. Because SDMs typically rely on occurrence data, their projections of habitat suitability or probability of occurrence provide little information on the future states of populations in the core of their range—areas where a species exists now and is expected to persist in the future (Ehrlén and Morris 2015).

      The fact that most species distribution models treat all locations within a species' range as being of equivalent quality for the species, regardless of whether there are 2 or 2,000 individuals there, is a core weakness of the occupancy-based approach to modeling these problems. Approaches, like those in this paper, that attempt to address this weakness are really valuable.

  17. Nov 2016
    1. Whilst the consensus method we used provided the best predictions under AUC assessment – seemingly confirming its potential for reducing model-based uncertainty in SDM predictions [58], [59] – its accuracy to predict changes in occupancy was lower than most single models. As a result, we advocate great care when selecting the ensemble of models from which to derive consensus predictions; as previously discussed by Araújo et al. [21], models should be chosen based on aspects of their individual performance pertinent to the research question being addressed, and not on the assumption that more models are better.

      It's interesting that the ensembles perform best overall but more poorly for predicting changes in occupancy. It seems possible that ensembling multiple methods is basically resulting in a more static prediction, i.e., something closer to a naive baseline.
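
      A toy simulation of that hypothesis (all numbers invented): averaging models with independent errors shrinks the predicted changes toward zero, i.e., toward a no-change baseline.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Ten single models each predict the change in occupancy for 200 cells,
      # with independent errors around an invented true change of +/-0.2.
      true_change = rng.choice([-0.2, 0.2], size=200)
      single_models = true_change + rng.normal(0.0, 0.4, size=(10, 200))
      ensemble = single_models.mean(axis=0)

      # The ensemble cancels much of the independent error, but its predicted
      # changes are also smaller in magnitude than any single model's.
      print(f"mean |predicted change|, single models: {np.abs(single_models).mean():.3f}")
      print(f"mean |predicted change|, ensemble:      {np.abs(ensemble).mean():.3f}")
      ```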

    2. Finally, by assuming the non-detection of a species to indicate absence from a given grid cell, we introduced an extra level of error into our models. This error depends on the probability of false absence given imperfect detection (i.e., the probability that a species was present but remained undetected in a given grid cell [73]): the higher this probability, the higher the risk of incorrectly quantifying species-climate relationships [73].

      This will be an ongoing challenge for species distribution modeling, because most of the data appropriate for these purposes is not collected in such a way as to allow the straightforward application of standard detection probability/occupancy models. This could potentially be addressed by developing models for detection probability based on species and habitat type. These models could be built on smaller/different datasets that include the required data for estimating detectability.
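
      A quick sketch of the size of that error under an assumed per-visit detection probability: if a species is present but detected with probability p on each of k independent visits, the probability of a false absence is (1 - p)^k.

      ```python
      # Assumed per-visit detection probability; in practice this would vary by
      # species and habitat type, as suggested above.
      p_detect = 0.3

      for k in (1, 2, 5, 10):
          print(f"{k} visits: P(false absence) = {(1 - p_detect) ** k:.3f}")
      ```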

  18. Jul 2016
    1. Charney determined that the impracticality of Richardson’s methods could be overcome by using the new computers and a revised set of equations, filtering out sound and gravity waves in order to simplify the calculations and focus on the phenomena of most importance to predicting the evolution of continent-scale weather systems.

      The complexity of the forecasting problem was initially overcome in the 1940s both by an improved rate of calculation (using computers) and by simplifying the models to focus on the most important factors.

  19. Jun 2015
    1. The comparison between the model and the experts is based on the species distribution models (SDMs), not on actual species occurrences, so the observed difference could be due to weakness in the SDM predictions rather than the model outperforming the experts. The explanation for this choice in Footnote 4 is reasonable, but I wonder if it could be addressed by rarefying the sampling appropriately.