Matching Annotations
  1. Feb 2020
    1. Social media research ethics faces a contradiction between big data positivism and research ethics fundamentalism. Big data positivists tend to say, ‘Most social media data is public data. It is like data in a newspaper. I can therefore gather big data without limits. Those talking about privacy want to limit the progress of social science’. This position disregards any engagement with ethics and has a bias towards quantification. Research ethics fundamentalists, in contrast, tend to say, ‘You have to get informed consent for every piece of social media data you gather because we cannot assume automatic consent: users tend not to read platforms’ privacy policies, they may assume some of their data is private, and they may not agree to their data being used in research. Even if you anonymize the users you quote, many can still be identified in the networked online environment’. The ethical framework Social Media Research: A Guide to Ethics (Townsend and Wallace, 2016), which emerged from an ESRC-funded project, tries to avoid both extremes and to take a critical-realist position: it recommends that social scientists neither ignore nor fetishize research ethics when studying digital media.
    2. Research ethics concerns issues such as privacy, anonymity, informed consent and the sensitivity of data. Given that social media is part of society’s tendency to liquefy and blur the boundaries between the private and the public, labour/leisure, and production/consumption (Fuchs, 2015a: Chapter 8), research ethics in social media research is particularly complex.
  2. Mar 2019
    1. Worse yet, it wouldn’t surprise me if we saw more unethical people publish data as a strategic communication tool, because they know people tend to believe numbers more than personal stories. That’s why it’s so important to have that training on information literacy and methodology.

      Like the way unethical people use statistics in general? This should be a concern, especially as government data, long considered the gold standard of data, comes under attacks that would skew it toward political ends (see the 2020 census).

  3. Sep 2016
    1. the risk of re-identification increases by virtue of having more data points on students from multiple contexts

      Very important to keep in mind. Not only do we realise that re-identification is a risk, but this risk is exacerbated by the increase in “triangulation”. Hence some discussions about Differential Privacy.
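
      To make the Differential Privacy pointer concrete, here is a minimal sketch of the Laplace mechanism, the textbook way to answer count queries with a formal privacy guarantee. The function name, parameters, and the student-count scenario are my own illustration, not something from the annotated report:

      ```python
      import numpy as np

      def dp_count(true_count, epsilon=0.5, sensitivity=1.0):
          """Release a count with Laplace noise calibrated to the privacy budget.

          sensitivity: how much one individual can change the true answer
          (1 for a simple count); smaller epsilon = stronger privacy,
          noisier output.
          """
          noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
          return true_count + noise

      # e.g. publishing how many students in a cohort used a tutoring service
      print(dp_count(true_count=42))
      ```

      The relevant property here: the guarantee holds even against an adversary armed with arbitrary side information, which is exactly the triangulation worry above.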

    2. the automatic collection of students’ data through interactions with educational technologies as a part of their established and expected learning experiences raises new questions about the timing and content of student consent that were not relevant when such data collection required special procedures that extended beyond students’ regular educational experiences

      Useful reminder. Sounds a bit like “now that we have easier access to data, we have to be particularly careful”. Probably not the first reflex of most researchers before they start sending forms to their IRBs, which is why it is important for IRBs to designate this explicitly as a concern.

    3. Responsible Use

      Again, this is probably a more felicitous wording than “privacy protection”. Sure, it takes as a given that some use of data is desirable. And the preceding section makes it sound like Learning Analytics advocates mostly need ammun… arguments to push their agenda. Still, the notion that we want to advocate for responsible use is more likely to find common ground than this notion that there’s a “data faucet” that should be switched on or off depending on certain stakeholders’ needs. After all, there exists a set of data use practices which are either uncontroversial or, at least, accepted as “par for the course” (no pun intended). For instance, we probably all assume that a registrar should receive the grade data needed to grant degrees and we understand that such data would come from other sources (say, a learning management system or a student information system).
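
      One way to picture that common ground in code: “responsible use” as field-level minimization, where the registrar receives only what degree-granting requires rather than the full interaction log a learning platform records. A toy sketch, with hypothetical field names of my own invention:

      ```python
      # Hypothetical record shape; field names are illustrative only.
      full_lms_record = {
          "student_id": "s123",
          "course": "SOC-101",
          "final_grade": "A-",
          "clickstream": ["..."],        # every page view, timestamped
          "forum_posts": ["..."],
          "minutes_on_task": 1870,
      }

      REGISTRAR_FIELDS = {"student_id", "course", "final_grade"}

      def registrar_view(record):
          """Forward only the fields needed to grant a degree."""
          return {k: v for k, v in record.items() if k in REGISTRAR_FIELDS}

      print(registrar_view(full_lms_record))
      ```

      The design choice worth noticing is that the default is deny: a field flows only when explicitly enumerated, which is roughly what “responsible use” asks of each data-sharing relationship.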

    1. the use of data in scholarly research about student learning; the use of data in systems like the admissions process or predictive-analytics programs that colleges use to spot students who should be referred to an academic counselor; and the ways colleges should treat nontraditional transcript data, alternative credentials, and other forms of documentation about students’ activities, such as badges, that recognize them for nonacademic skills.

      Useful breakdown. Research, predictive models, and recognition are quite distinct from one another, and the approaches to data they imply are quite different. In a way, the “personalized learning” model at the core of the second topic is close to the Big Data attitude (collect all the things and sense will come through eventually), with the corresponding ethical problems. Though projects vary greatly, research has a much more solid base in both ethics and epistemology than the kind of Big Data approach used by technocentric outlets. The part about recognition, though, opens the most interesting door. Microcredentials and badges are part of a broader picture. The data shared in those cases need not be so comprehensive, and learners have a lot of agency in the matter. In fact, when Charles Tsai, then with Ashoka, interviewed Mozilla executive director Mark Surman about badges, the message was quite clear: badges are a way to rethink education as a learner-driven “create your own path” adventure. The contrast between the three models reveals a lot: from the abstract world of research, to the top-down models of Minority Report-style predictive education, all the way to a form of heutagogy. Lots to chew on.
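
      To make the point about minimal, learner-controlled data concrete: below is a sketch of an Open Badges-style assertion. The URLs are placeholders and the field names follow my recollection of the Open Badges 2.0 vocabulary, so treat the structure as illustrative; the detail worth noticing is that only a salted hash of the learner’s identity needs to leave the issuer.

      ```python
      import hashlib

      email, salt = "learner@example.org", "s3cr3t-salt"  # hypothetical values

      assertion = {
          "@context": "https://w3id.org/openbadges/v2",
          "type": "Assertion",
          "id": "https://issuer.example.org/assertions/42",  # placeholder URL
          "badge": "https://issuer.example.org/badges/data-literacy",
          "verification": {"type": "hosted"},
          "issuedOn": "2016-09-01T00:00:00Z",
          "recipient": {
              "type": "email",
              "hashed": True,
              "salt": salt,
              # published as sha256(email + salt), not in the clear
              "identity": "sha256$"
              + hashlib.sha256((email + salt).encode()).hexdigest(),
          },
      }
      print(assertion["recipient"]["identity"])
      ```

      Compared with a comprehensive analytics profile, the assertion carries a single credential and a hashed identity, which is what leaves learners the agency mentioned above.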

  4. Jul 2016
    1. the results remain compelling nonetheless

      At least, they’ve become unavoidable in class discussions even tangentially related to social psychology. In intro sociology, they lead to some interesting thoughts about lab vs. field experiments.

  5. Jun 2016
    1. demanding that technologies designed for a group of people be designed and built, in part, by those people

      Despite some differences, it sounds a bit like the standard by which the risks and benefits of research are measured for a given population. Since the infamous Tuskegee syphilis experiment, it has led to the evaluation of “fair or just distribution of risks and benefits to eligible participants” (WP). The connection may be a little strained, especially since Zuckerman is talking about pragmatic issues instead of ethical ones. But there’s some insight in this line of thought, IMHO.