6 Matching Annotations
  1. Sep 2021
  2. Aug 2020
    1. Bartik, A. W., Cullen, Z. B., Glaeser, E. L., Luca, M., Stanton, C. T., & Sunderam, A. (2020). The Targeting and Impact of Paycheck Protection Program Loans to Small Businesses (Working Paper No. 27623; Working Paper Series). National Bureau of Economic Research. https://doi.org/10.3386/w27623

  3. Jul 2020
  4. Sep 2017
    1. ask people to list those in their social circles who have intervened in abusive situations, people they have talked to about bystander intervention, or people whose opinion on intervening is important to them.

      What would the links between these people be? If you ask someone to list their friends, you get lists that produce a star network centred on that respondent. There needs to be a second round of questions about the friends of those friends. Getting network data requires asking interrelated people.
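
      A minimal sketch of this point in Python with networkx, using hypothetical names rather than anything from the study design above: a single round of "name the people in your circle" questions can only produce a star network around each respondent, and it is the second round of alter-alter questions that adds real network structure.

          import networkx as nx

          # Round 1: the respondent ("ego") names people in their social circle.
          # On its own this yields a star network: every edge connects ego to an
          # alter, and no ties among the alters are recorded.
          ego = "respondent"
          alters = ["A", "B", "C", "D"]
          star = nx.Graph()
          star.add_edges_from((ego, alter) for alter in alters)

          # Round 2: ask which of the named alters know or talk to each other
          # (friends of friends) and add those reported ties.
          reported_alter_ties = [("A", "B"), ("B", "C")]  # hypothetical answers
          full = star.copy()
          full.add_edges_from(reported_alter_ties)

          # Network measures only become informative after round 2:
          print(nx.transitivity(star))  # 0.0 -- a star has no closed triangles
          print(nx.transitivity(full))  # > 0 once alter-alter ties are known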

  5. Sep 2016
    1. The importance of models may need to be underscored in this age of “big data” and “data mining”. Data, no matter how big, can only tell you what happened in the past. Unless you’re a historian, you actually care about the future — what will happen, what could happen, what would happen if you did this or that. Exploring these questions will always require models. Let’s get over “big data” — it’s time for “big modeling”.
  6. Jul 2016
    1. p. 100

      Data are not useful in and of themselves. They only have utility if meaning and value can be extracted from them. In other words, it is what is done with data that is important, not simply that they are generated. The whole of science is based on realising meaning and value from data. Making sense of scaled small data and big data poses new challenges. In the case of scaled small data, the challenge is linking together varied datasets to gain new insights and opening up the data to new analytical approaches being used in big data. With respect to big data, the challenge is coping with its abundance and exhaustivity (including sizeable amounts of data with low utility and value), timeliness and dynamism, messiness and uncertainty, high relationality, semi-structured or unstructured nature, and the fact that much of big data is generated with no specific question in mind or is a by-product of another activity. Indeed, until recently, data analysis techniques have primarily been designed to extract insights from scarce, static, clean and poorly relational datasets, scientifically sampled and adhering to strict assumptions (such as independence, stationarity, and normality), and generated and analysed with a specific question in mind.

      Good discussion of the different approaches allowed and required by small vs. big data.