- Feb 2021
-
www.olivertacke.de
-
Educational Data Mining (EDM) and Learning Analytics (LA) are among the top buzzwords of the EdTech scene right now.
-
- Nov 2017
-
files.eric.ed.gov
-
Mount St. Mary’s use of predictive analytics to encourage at-risk students to drop out to elevate the retention rate reveals how analytics can be abused without student knowledge and consent
Wow. Not that we need such an extreme case to shed light on the perverse incentives at stake in Learning Analytics, but this one surely made readers react. On the other hand, there’s a lot more to be said about retention policies. People often act as though they were essential to learning. Retention matters to the institution, but are we treating drop-outs as escapees? One learner in my class (whose major is criminology) described the similarities between schools and prisons. It can be hard to dispel that notion when leaving an institution is perceived as a big failure of that institution. (Plus, Learning Analytics can really feel like the Panopticon.) Some comments about drop-outs make it sound like they got no learning done. Meanwhile, some entrepreneurs are encouraging students to leave institutions, or not to enroll in the first place. Which brings us back to that important question by @sarahfr: why do people go to university?
-
- Oct 2017
-
solaresearch.org
-
The learning analytics and education data mining discussed in this handbook hold great promise. At the same time, they raise important concerns about security, privacy, and the broader consequences of big data-driven education. This chapter describes the regulatory framework governing student data, its neglect of learning analytics and educational data mining, and proactive approaches to privacy. It is less about conveying specific rules and more about relevant concerns and solutions. Traditional student privacy law focuses on ensuring that parents or schools approve disclosure of student information. They are designed, however, to apply to paper “education records,” not “student data.” As a result, they no longer provide meaningful oversight. The primary federal student privacy statute does not even impose direct consequences for noncompliance or cover “learner” data collected directly from students. Newer privacy protections are uncoordinated, often prohibiting specific practices to disastrous effect or trying to limit “commercial” use. These also neglect the nuanced ethical issues that exist even when big data serves educational purposes. I propose a proactive approach that goes beyond mere compliance and includes explicitly considering broader consequences and ethics, putting explicit review protocols in place, providing meaningful transparency, and ensuring algorithmic accountability.
-
- Sep 2017
-
edutechnica.com
-
Over the course of many years, every school has refined and perfected the connections LMSs have into a wide variety of other campus systems including authentication systems, identity management systems, student information systems, assessment-related learning tools, library systems, digital textbook systems, and other content repositories. APIs and standards have decreased the complexity of supporting these connections, and over time it has become easier and more common to connect LMSs to – in some cases – several dozen or more other systems. This level of integration gives LMSs much more utility than they have out of the box – and also more “stickiness” that causes them to become harder to move away from. For LMS alternatives, achieving this same level of connectedness, particularly considering how brittle these connections can sometimes become over time, is a very difficult thing to achieve.
-
-
edutechnica.com
-
privacy controls over student data,
-
- Aug 2017
-
analytics.jiscinvolve.org
-
This has much in common with a customer relationship management system and facilitates the workflow around interventions as well as various visualisations. It’s unclear how the at risk metric is calculated but a more sophisticated predictive analytics engine might help in this regard.
I have yet to notice much discussion of the relationships among SIS (Student Information Systems), CRM (Customer Relationship Management), ERP (Enterprise Resource Planning), and LMS (Learning Management Systems).
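For a sense of what even an unsophisticated "at-risk metric" might look like: the sketch below is purely illustrative (the feature names, weights, and bias are my own assumptions, not anything from the Jisc system), a hand-rolled logistic score over a few engagement features.

```python
import math

# Hypothetical feature weights; a real predictive engine would learn
# these from historical outcomes rather than hard-coding them.
WEIGHTS = {"logins_per_week": -0.4, "missed_assignments": 0.9, "avg_grade": -0.05}
BIAS = 2.0

def at_risk_score(student):
    """Return a 0-1 'at risk' probability from a toy logistic model."""
    z = BIAS + sum(WEIGHTS[k] * student[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# A disengaged student scores higher (more "at risk") than an engaged one.
disengaged = {"logins_per_week": 1, "missed_assignments": 4, "avg_grade": 55}
engaged = {"logins_per_week": 8, "missed_assignments": 0, "avg_grade": 85}
print(at_risk_score(disengaged) > at_risk_score(engaged))  # True
```

Whatever the sophistication of the model, the perverse incentives discussed above apply to how the score is used, not to how it is computed.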
-
- Oct 2016
-
www.businessinsider.com
-
Outside of the classroom, universities can use connected devices to monitor their students, staff, and resources and equipment at a reduced operating cost, which saves everyone money.
-
Devices connected to the cloud allow professors to gather data on their students and then determine which ones need the most individual attention and care.
-
-
www.google.com
-
For G Suite users in primary/secondary (K-12) schools, Google does not use any user personal information (or any information associated with a Google Account) to target ads.
In other words, Google does use everyone else’s information (Data as New Oil) and can use it to target ads in Higher Education.
-
- Sep 2016
-
www.sr.ithaka.org
-
Application
Modern higher education institutions have unprecedentedly large and detailed collections of data about their students, and are growing increasingly sophisticated in their ability to merge datasets from diverse sources. As a result, institutions have great opportunities to analyze and intervene on student performance and student learning. While there are many potential applications of student data analysis in the institutional context, we focus here on four approaches that cover a broad range of the most common activities: data-based enrollment management, admissions, and financial aid decisions; analytics to inform broad-based program or policy changes related to retention; early-alert systems focused on successful degree completion; and adaptive courseware.
Perhaps even more than other sections, this one recalls a familiar trope. The difference probably comes from the impact of (institutional) “application”.
-
the risk of re-identification increases by virtue of having more data points on students from multiple contexts
Very important to keep in mind. Not only is re-identification a risk in itself, but the risk is exacerbated by the increase in “triangulation” across datasets. Hence some discussions about Differential Privacy.
-
Responsible Use
Again, this is probably a more felicitous wording than “privacy protection”. Sure, it takes as a given that some use of data is desirable. And the preceding section makes it sound like Learning Analytics advocates mostly need ammun… arguments to push their agenda. Still, the notion that we want to advocate for responsible use is more likely to find common ground than this notion that there’s a “data faucet” that should be switched on or off depending on certain stakeholders’ needs. After all, there exists a set of data use practices which are either uncontroversial or, at least, accepted as “par for the course” (no pun intended). For instance, we probably all assume that a registrar should receive the grade data needed to grant degrees and we understand that such data would come from other sources (say, a learning management system or a student information system).
-
Data sharing over open-source platforms can create ambiguous rules about data ownership and publication authorship, or raise concerns about data misuse by others, thus discouraging liberal sharing of data.
Surprising mention of “open-source platforms”, here. Doesn’t sound like these issues are absent from proprietary platforms. Maybe they mean non-institutional platforms (say, social media), where these issues are really pressing. But the wording is quite strange if that is the case.
-
captures values such as transparency and student autonomy
Indeed. “Privacy” makes it sound like a single factor, hiding the complexity of the matter and the importance of learners’ agency.
-
Activities such as time spent on task and discussion board interactions are at the forefront of research.
Really? These aren’t uncontroversial, to say the least. For instance, discussion board interactions often call for careful, mixed-method work with an eye to preventing instructor effect and confirmation bias. “Time on task” is almost a codeword for distinctions between models of learning. Research in cognitive science gives very nuanced value to “time spent on task” while the Malcolm Gladwells of the world usurp some research results. A major insight behind Competency-Based Education is that it can allow for some variance in terms of “time on task”. So it’s kind of surprising that this summary puts those two things to the fore.
-
Research: Student data are used to conduct empirical studies designed primarily to advance knowledge in the field, though with the potential to influence institutional practices and interventions. Application: Student data are used to inform changes in institutional practices, programs, or policies, in order to improve student learning and support. Representation: Student data are used to report on the educational experiences and achievements of students to internal and external audiences, in ways that are more extensive and nuanced than the traditional transcript.
Ha! The Chronicle’s summary framed these categories somewhat differently. Interesting. To me, the “application” part is really about student retention. But maybe that’s a bit of a cynical reading, based on an over-emphasis in the Learning Analytics sphere on teleological, linear, and insular models of learning. Then, the “representation” part sounds closer to UDL than to learner-driven microcredentials. Both approaches are really interesting, and chances are that the report brings them together. Finally, the Chronicle made it sound as though the research implied here were less directed. The mention that it has “the potential to influence institutional practices and interventions” may be strategic, as applied research meant to influence “decision-makers” is more likely to sway them than the type of exploratory research we so badly need.
Tags
- Learning Analytics
- #CompetencyBasedEducation
- Discourse Analysis
- research ethics
- learner data
- Responsibility
- Malcolm Gladwell
- Academic Institutions
- Cognitive Science
- anonymity
- Open Source
- Responsible Use
- Applied Research
- Chronicle of Higher Education
- Time on task
- meta-annotation
- #ConfirmationBias
- re-identification
- Instructor Effect
- measurability
- stakeholders
- Education Research
- Quotables
- #LearnerAgency
- #privacy
- ethics
- de-anonymisation
-
-
www.chronicle.com
-
often private companies whose technologies power the systems universities use for predictive analytics and adaptive courseware
-
the use of data in scholarly research about student learning; the use of data in systems like the admissions process or predictive-analytics programs that colleges use to spot students who should be referred to an academic counselor; and the ways colleges should treat nontraditional transcript data, alternative credentials, and other forms of documentation about students’ activities, such as badges, that recognize them for nonacademic skills.
Useful breakdown. Research, predictive models, and recognition are quite distinct from one another, and the approaches to data they imply are quite different. In a way, the “personalized learning” model at the core of the second topic is close to the Big Data attitude (collect all the things and sense will come through eventually), with the corresponding ethical problems. Though projects vary greatly, research has a much more solid base in both ethics and epistemology than the kind of Big Data approach used by technocentric outlets. The part about recognition, though, opens the most interesting door. Microcredentials and badges are part of a broader picture. The data shared in those cases need not be so comprehensive, and learners have a lot of agency in the matter. In fact, when Charles Tsai (then at Ashoka) interviewed Mozilla executive director Mark Surman about badges, the message was quite clear: badges are a way to rethink education as a learner-driven “create your own path” adventure. The contrast between the three models reveals a lot: from the abstract world of research, to the top-down models of Minority Report-style predictive education, all the way to a form of heutagogy. Lots to chew on.
-
-
www.theguardian.com
-
“We need much more honesty, about what data is being collected and about the inferences that they’re going to make about people. We need to be able to ask the university ‘What do you think you know about me?’”
-
- Jul 2016
-
hybridpedagogy.org
-
what do we do with that information?
Interestingly enough, a lot of teachers either don’t know that such data might be available or perceive very little value in monitoring learners in such a way. But a lot of this can be negotiated with learners themselves.
-
turn students and faculty into data points
Data=New Oil
-
E-texts could record how much time is spent in textbook study. All such data could be accessed by the LMS or various other applications for use in analytics for faculty and students.”
-
not as a way to monitor and regulate
-
-
hackeducation.com
-
demanded by education policies — for more data
-
-
medium.com
-
data being collected about individuals for purposes unknown to these individuals
-
-
www.businessinsider.com
-
Data collection on students should be considered a joint venture, with all parties — students, parents, instructors, administrators — on the same page about how the information is being used.
-
-
www.educationdive.com
-
there is some disparity and implicit bias
-