- Nov 2019
-
elearningindustry.com
-
Author Mary Burns discusses the key elements of computer adaptive testing (CAT). CAT is defined as assessment that uses algorithms to progressively adjust test difficulty based on the learner's correct or incorrect responses. Benefits of CAT include more immediate data and often greater reliability. Types of test items are also covered to illustrate how the test can address various levels of cognition and measure expertise. One issue with CAT is the intensive time needed to develop multiple test items at multiple levels of cognition. Rating: 8/10
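As a toy illustration of the adjust-on-response loop described above (my own sketch, not from the article): a simple staircase rule over a hypothetical item bank. A production CAT would use item response theory to select items and estimate ability; this only shows the core idea of moving difficulty up after a correct answer and down after an incorrect one.

```python
# Minimal sketch of an adaptive testing loop (illustrative only).
# item_bank: dict mapping difficulty level (1..5) to a list of unused items;
# answer_item: callback that administers an item and returns True if the
# learner answered correctly. Assumes each level holds enough items.
def run_adaptive_test(item_bank, answer_item, num_items=10):
    difficulty = 3                                # start in the middle of the range
    responses = []
    for _ in range(num_items):
        item = item_bank[difficulty].pop()        # next unused item at this level
        correct = answer_item(item)
        responses.append((item, difficulty, correct))
        if correct:
            difficulty = min(5, difficulty + 1)   # harder after a correct response
        else:
            difficulty = max(1, difficulty - 1)   # easier after an incorrect one
    return responses
```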
-
-
tech.ed.gov
-
The Office of Educational Technology website, featured in Section 4: Measuring of Learning, discusses the pedagogical implications of assessment shifts and how technology can enhance assessment. The site emphasizes using assessment as a real-time measure of learning for real-life skills. The infographic that displays traditional models of assessment next to next-generation assessments is a quick reference for the shift in ideology and practice. Ultimately, increased personalization of learning serves the learner more efficiently and promotes greater achievement and mastery. Rating: 8/10
-
- Sep 2019
-
www.youtube.com
-
Ep. 1 - Awakening from the Meaning Crisis - Introduction
Psycho-technology ~ Mental Models ~ Mental Frameworks
-
- Aug 2019
-
www.dailykos.com
-
3. Mandate: The gov’t mandates that everyone buy health insurance, funding comes from payroll taxes.
-
2. 2-Tier: The gov’t pays two-thirds, and the private sector pays one-third.
-
1. Single-Payer: The gov’t taxes its citizens to pay for health care.
-
Single-Payer, 2-Tier, and Mandate systems.
three definitive models for Universal Health Care
-
- Mar 2019
-
www.instructionaldesign.org
-
Gagne's nine events of instruction. I am including this page for myself because it is a nice reference back to Gagne's nine events, and it gives both an example of each event and a list of four essential principles. It also includes some of his book titles. Rating 4/5
-
-
faculty.coe.uh.edu
-
Edward Thorndike's three laws of learning. The page does not explain this, but his theories came out around 1900. His three laws of learning appear to be relevant to our course work. This simple page features black text on a white background; it is brief and simply describes the three laws of learning. Rating 5/5
-
-
-
Jack Phillips and ROI. This page describes the Phillips Return on Investment model. The model as presented here is an alternative to Kirkpatrick's model. There is a bulleted list of the model's components as well as a nice graphic that briefly describes the levels. There is an explanation of how to apply the model, though I think more information would be needed for real-world practice. Rating 4/5
-
-
www.nwlink.com
-
This is Bloom's taxonomy of cognitive objectives. I selected this page because it explains both the old and new versions of the taxonomy. When writing instructional objectives for adult learning and training, one should identify the level of learning in Bloom's that is needed. This is not the most attractive presentation, but it is one of the more thorough ones. Rating 4/5
-
-
deepblue.lib.umich.edu
-
Human Performance Technology Model. This page is an eight-page PDF that gives an overview of the human performance technology model. It is a black-and-white PDF that is simply written and accessible to the layperson. The authors are prominent writers in the field of performance technology. Rating 5/5
-
-
www.valpo.edu
-
This link is to a three-page PDF that describes Gagne's nine events of instruction, largely in the form of a graphic. Text is minimized, and descriptive text is color-coded so it is easy to find underneath the graphic at the top. The layout is simple and easy to follow. A general description of Gagne's work is not part of this page. While this particular presentation does not have personal appeal to me, it is included here due to the quality of the page and because the presentation is more user-friendly than most. Rating 4/5
-
-
www.instructionaldesign.org
-
This page is a simply presented list of many learning theories, both popular and less well known. The layout is clean. The pages to which the listed items link are fairly minimal, so this serves as a basic tour or overview of the models and lets viewers review the names of some of the learning theories. The page does not prioritize the theories or identify which are the most prominent.
-
-
www.k-state.edu
-
This 69-page PDF, titled "Is This a Trick Question?", offers good advice on writing a variety of types of test questions. Despite its length, it is easy to browse if you are interested in writing a specific type of question. As the length suggests, this resource is more comprehensive than most. Rating 5/5
-
-
elearningindustry.com
-
This page describes a method of teaching designed specifically for adults. The instructional design theory is Keller's ARCS, which stands for attention, relevance, confidence, and satisfaction, all features that should characterize adult learning experiences. The text on this page is readable, but the popups and graphics are a bit annoying. Rating 3/5
-
-
drive.google.com
-
This is a description of the form of backward design referred to as Understanding by Design. In its simplest form, this is a three-step process in which instructional designers first specify desired outcomes and acceptable evidence before specifying learning activities. This presentation may be a little boring to read, as it is text-heavy and black and white, but those same attributes make it printer-friendly. Rating 3/5
-
- Feb 2019
-
stratechery.com
-
What happened is that Spotify dragged the record labels into a completely new business model that relied on Internet assumptions, instead of fighting them: if duplicating and distributing digital media is free (on a marginal basis), don’t try to make it scarce, but instead make it abundant and charge for the convenience of accessing just about all of it.
-
- Jan 2019
-
indieweb.org
-
This is the current model of Perkeep.
First mention: https://youtu.be/PlAU_da_U4s?start=630&end=651
More discussion including a very interesting note on what they call Namespace later: https://youtu.be/PlAU_da_U4s?start=2677&end=2787
-
-
cognitivemedium.com
-
A powerful way of thinking about one-dimensional motion is largely absent from our shared conversations. The reason is that traditional media are poorly adapted to working with such representations.
-
-
-
A comment at the bottom by Barbara H Partee, another panelist alongside Chomsky:
I'd like to see inclusion of a version of the interpretation problem that reflects my own work as a working formal semanticist and is not inherently more probabilistic than the formal 'generation task' (which, by the way, has very little in common with the real-world sentence production task, a task that is probably just as probabilistic as the real-world interpretation task).
-
There is a notion of success ... which I think is novel in the history of science. It interprets success as approximating unanalyzed data.
This article makes a solid argument for why statistical and probabilistic models are useful, not only for prediction, but also for understanding. Perhaps this is a key point that Noam misses, but the quote narrows the definition to models that approximate "unanalyzed data".
However, it seems clear from this article that the successes of ML models have gone beyond approximating unanalyzed data.
-
But O'Reilly realizes that it doesn't matter what his detractors think of his astronomical ignorance, because his supporters think he has gotten exactly to the key issue: why? He doesn't care how the tides work, tell him why they work. Why is the moon at the right distance to provide a gentle tide, and exert a stabilizing effect on earth's axis of rotation, thus protecting life here? Why does gravity work the way it does? Why does anything at all exist rather than not exist? O'Reilly is correct that these questions can only be addressed by mythmaking, religion or philosophy, not by science.
Scientific insight isn't the same as metaphysical questioning, in spite of sharing the same question word. Asking "Why do epidemics have a peak?" is not the same as asking "Why does life exist?". Actually, that second question can be interpreted in two different ways, one metaphysical and one physical. The latter interpretation means that "why" is looking for a material cause. So even simple and approximate models can have generalizing value, such as the Schelling segregation model. There is a difference between models to predict and models to explain, and both have value. As mentioned later in this document, theory and data are two feet: each is needed for the other.
-
This page discusses different types of models
- statistical models
- probabilistic models
- trained models
and explores the interaction between prediction and insight.
-
Chomsky (1991) shows that he is happy with a Mystical answer, although he shifts vocabulary from "soul" to "biological endowment."
Wasn't one of Chomsky's ideas that humans are uniquely suited to language? The counter-perspective espoused here appears to be that language emerges, and that humans are only distinguished by the magnitude of their capacity for language; other species probably have proto-language, and there is likely a smooth transition from one to the other. In fact, there isn't a "one" nor an "other" in a true qualitative sense.
So what if we discover something about the human that appears to be required for our language? Does this, then, lead us to knowledge of how human language is qualitatively different from other languages?
Can probabilistic models account for qualitative differences? If a very low, but not zero, probability is assigned to an event that we know is impossible from our theory-based view, that doesn't make our probabilistic model useless. "All models are wrong, some are useful." But it does seem to carry the assumption that there are no real categories, that categories change according to need and are only useful for describing things, and that the underlying nature of reality is a continuum.
-
-
jasss.soc.surrey.ac.uk
-
To Guide Data Collection
This seems to be, essentially, that models are useful for prediction, but prediction of unknowns in the data instead of prediction of future system dynamics.
-
Without models, in other words, it is not always clear what data to collect!
Or how to interpret that data in the light of complex systems.
-
Plate tectonics surely explains earthquakes, but does not permit us to predict the time and place of their occurrence.
But how do you tell the value of an explanation? Should it not empower you to some new action or ability? It could be that the explanation is somewhat of a by-product of other prediction-making theories (like how plate tectonics relies on thermodynamics, fluid dynamics, and rock mechanics, which do make predictions).
It might also make predictions itself, such as that volcanoes not on clear plate boundaries might be somehow different (distribution of occurrence over time, correlation with earthquakes, content of magma, size of eruption...), or that understanding the explanation for lightning allows prediction that a grounded metal pole above the house might protect the house from lightning strikes. This might be a different kind of prediction, though, since it isn't predicting future dynamics. Knowing how epidemics work doesn't necessarily allow prediction of total infected counts or length of infection, but it does allow prediction of minimum vaccination rates to avert outbreaks.
Nonetheless, a theory as a tool to explain, with very poor predictive ability, can still be useful, though less valuable than one that also makes testable predictions.
But in general, it seems like data -> theory is the explanation. Theory -> data is the prediction. The strength of the prediction depends on the strength of the theory.
-
- Dec 2018
-
opencontent.org
-
Why, when we are so worried about preserving freedoms, do we prohibit choice on the part of downstream users as to how they can license derivatives works they make? Why don’t we want to protect that user’s freedom to choose how to license his derivative work, into which he put substantial effort? The copyleft approach of both the Free Software Foundation and Creative Commons makes creators of derivative works second-class citizens. And these are the people we claim to be primarily interested in empowering. I can’t stress this point enough: the ShareAlike clause of the CC licenses and the CopyLeft tack of the GFDL rob derivers of the basic freedom to choose which license they will apply to their derived work. ShareAlike and CopyLeft privilege creators while directing derivers to the back of the bus.
I think that license compatibility is one of the least user-friendly areas of the Creative Commons process. Opening up resources while still receiving attribution sounds appealing to educators who are dipping their toes into these concepts. Then we pull out compatibility charts and people want to run for the hills! The democracy and openness that Creative Commons embodies should be inclusive, but it is hard for people to decipher these equations, which are so crucial to responsible use.
-
- Nov 2018
-
educationaltechnology.net
-
“The ADDIE model consists of five steps: analysis, design, development, implementation, and evaluation. It is a strategic plan for course design and may serve as a blueprint to design IL assignments and various other instructional activities.”
This article provides a well-diagrammed and full explanation of the ADDIE model and its application to technology.
Also included on the site is a link to an online course delivered via diversityedu.com
RATING: 4/5 (rating based upon a score system 1 to 5, 1= lowest 5=highest in terms of content, veracity, easiness of use etc.)
-
-
www.iste.org
-
Using Model Strategies for Integrating Technology into Teaching
This PDF offers many helpful tips and techniques for creating a foundation for technology. The model strategies are laid out with lots of supporting detail, examples, and web links. It includes nearly 400 pages of peer-reviewed lessons, models, and various strategies.
RATING: 5/5 (rating based upon a score system 1 to 5, 1= lowest 5=highest in terms of content, veracity, easiness of use etc.)
-
- Nov 2017
-
www.educause.edu
-
the experimentation and possibility of the MOOC movement had become co-opted and rebranded by venture capitalists as a fully formed, disruptive solution to the broken model of higher education.11
-
Massive Open Online Courses (MOOCs), which have become the poster child of innovation in higher education over the last two to three years
-
social engagement, public knowledge, and the mission of promoting enlightenment and critical inquiry in society
-
recent promise of Web 2.0
A bit surprised by this “recent”. By that time, much of what had been lumped under the “Web 2.0” umbrella had already shifted a few times. In fact, the “Web 3.0” hype cycle was probably in the “Trough of Disillusionment”, if not what Gartner calls the “Slope of Enlightenment”.
-
institutional demands for enterprise services such as e-mail, student information systems, and the branded website become mission-critical
In context, these other dimensions of “online presence” in Higher Education take a special meaning. Reminds me of WPcampus. One might have thought that it was about using WordPress to enhance learning. While there are some presentations on leveraging WP as a kind of “Learning Management System”, much of it is about Higher Education as a sector for webwork (-development, -design, etc.).
Tags
- LMS (Learning Management System)
- #WPcampus
- Business Models for Higher Education
- Citizen Engagement
- Critical Thinking
- #LearnerData
- Web 2.0
- #WordPress
- ERP (Enterprise Resource Planning)
- #EdTech
- Public Intellectuals
- Corporate Identity
- MOOC Hype Cycle
- SIS (Student Information System)
- University Websites
- Open Knowledge
-
-
mfeldstein.com
-
Let's imagine a world in which universities, not vendors, designed and built our online learning environments.
-
-
halfanhour.blogspot.com
-
access to one-on-one (and possible small circle) consultations for a fee
-
We (had we ever been given the opportunity) would have created the business proposition very differently.
-
access to the top researchers in the field
-
I think that universities (especially the 'elite' universities) have lost the plot when it comes to their value proposition (or, at least, what they tell the world their value proposition is).
In some ways, the strongest indictment of the MOOC hype.
-
-
www.ht2labs.com
-
Often our solutions must co-exist with existing systems. That’s why we also invest time and money in emerging standards, like xAPI or Open Badges, to help connect our platforms together into a single ecosystem for personal, social and data-driven learning.
-
-
www.edcast.com
-
Interesting list of clients.
-
-
en.wikipedia.org
-
Barnes & Noble Education, Inc. is now an independent public company and the parent of Barnes & Noble College, trading on the New York Stock Exchange under the ticker symbol, "BNED".
-
-
www.bnedloudcloud.com
-
a strategic partner and collaborator to the 770 stores on campuses nationwide
-
ABOUT BARNES & NOBLE EDUCATION
-
-
www.lynn.edu
-
Enhanced learning experience Graduate students now receive upgraded iPads, and all students access course materials with Canvas, a new learning management software. The School of Aeronautics is now the College of Aeronautics; and the College of Business and Management is hosting a business symposium Nov. 15.
This from a university which had dropped Blackboard for iTunes U.
-
-
unizin.org
-
A bit of context.
-
Instructors can supplement traditional course materials with low-cost alternatives such as Open Educational Resources and faculty-generated content.
-
-
www.indianaeconomicdigest.net
-
“We’re between the now and the not yet of moving to digital textbooks. But the model has not been discerned,”
-
-
en.wikipedia.org
-
Kroton has more than 1.9 million students
-
-
mfeldstein.com
-
Moodle Pty—more widely known within the Moodle community as Moodle HQ—does most of the development of the core Moodle code and maintains tight control over which code submitted by third parties gets accepted into the code base
-
-
opencontent.org
-
Publishers can compete with free textbooks by making their more-restrictive-than-all-right-reserved offerings 70% more affordable.
Sounds a bit like what Clay Shirky was trying to say about the Napster moment coming to Higher Education, five years ago. Skimmed the critique of Shirky’s piece and was mostly nodding in agreement with it. But there might be a discussion to have about industries having learnt from the Napster moment. After all, the recording industry has been able to withstand this pressure for close to twenty years. This also sounds like it could be a corollary to Chris Anderson’s (in)famous promotion of the “free” (as in profit) model for businesses, almost ten years ago. In other words, we might live through another reshaping of “free” in the next 9-10 years.
-
- Apr 2017
-
creativecommons.org
-
enst31501sp2017.courses.bucknell.edu
-
As a threat to the socio-ecological systems to which Nenets have established and depended on throughout history, environmental impact within the region is met with continued opposition and resilience.
The presence and impact of cultural aspects that facilitate resilience to changes in the Nenets’ socio-ecological systems are discussed in further detail in the following piece.
Bruce C. Forbes, “Cultural Resilience of Social-ecological Systems in the Nenets and Yamal- Nenets Autonomous Okrugs, Russia: A Focus on Reindeer Nomads of the Tundra.” Ecology & Society 18, no. 4 (December 2013): 1-16. GreenFILE, EBSCOhost (accessed March 26, 2017).
-
- Mar 2017
-
homebrewserver.club
-
his ongoing efforts show that it is possible to have a satisfying and safe user experience while using federated alternatives, this is only possible because, unlike any other XMPP client developers, he is in the position of working on this project full time. The problem has not been solved but shifted. If economically sustainable XMPP federation were to scale to the point of being as successful as the centralised solution offered by Signal, it would have to face the consequences of doing so in the context of a free market driven by competition. In that situation, each XMMP client's economic viability would depend heavily on its capacity to capture enough users that can provide income for their developers. The problem therefore is not so much a problem of the technical or economical sustainability of federation, but more a problem of the economic sustainability of open standards and protocols in a world saturated with solutionist business models
The inconvenient reality of open source: hungry devs.
-
- Feb 2017
-
www.dabapps.com
-
Never write to a model field or call save() directly. Always use model methods and manager methods for state changing operations.
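A minimal Django sketch of the pattern the quote recommends, assuming a hypothetical Account model in an app's models.py: state changes go through model and manager methods rather than callers assigning to fields and calling save() inline.

```python
from django.db import models


class AccountManager(models.Manager):
    def create_account(self, email):
        # Manager method owns creation, so defaults and invariants live in one place.
        account = self.model(email=email, is_active=True)
        account.save(using=self._db)
        return account


class Account(models.Model):
    email = models.EmailField(unique=True)
    is_active = models.BooleanField(default=True)

    objects = AccountManager()

    def deactivate(self):
        # Model method owns the state change instead of callers writing
        # account.is_active = False; account.save() wherever they need it.
        self.is_active = False
        self.save(update_fields=["is_active"])
```

Calling code then uses Account.objects.create_account(email) and account.deactivate() rather than touching fields or save() directly.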
-
- Dec 2016
-
www.usenix.org
- Nov 2016
-
journals.plos.org
-
My thoughts on "Climatic Associations of British Species Distributions Show Good Transferability in Time but Low Predictive Accuracy for Range Change" by Rapacciuolo et al. (2012).
-
Whilst the consensus method we used provided the best predictions under AUC assessment – seemingly confirming its potential for reducing model-based uncertainty in SDM predictions [58], [59] – its accuracy to predict changes in occupancy was lower than most single models. As a result, we advocate great care when selecting the ensemble of models from which to derive consensus predictions; as previously discussed by Araújo et al. [21], models should be chosen based on aspects of their individual performance pertinent to the research question being addressed, and not on the assumption that more models are better.
It's interesting that the ensembles perform best overall but more poorly for predicting changes in occupancy. It seems possible that ensembling multiple methods is basically resulting in a more static prediction, i.e., something closer to a naive baseline.
-
Finally, by assuming the non-detection of a species to indicate absence from a given grid cell, we introduced an extra level of error into our models. This error depends on the probability of false absence given imperfect detection (i.e., the probability that a species was present but remained undetected in a given grid cell [73]): the higher this probability, the higher the risk of incorrectly quantifying species-climate relationships [73].
This will be an ongoing challenge for species distribution modeling, because most of the data appropriate for these purposes is not collected in such a way as to allow the straightforward application of standard detection probability/occupancy models. This could potentially be addressed by developing models for detection probability based on species and habitat type. These models could be built on smaller/different datasets that include the required data for estimating detectability.
-
an average 87% of grid squares maintaining the same occupancy status; similarly, all climatic variables were also highly correlated between time periods (ρ>0.85, p<0.001 for all variables). As a result, models providing a good fit to early distribution records can be expected to return a reasonable fit to more recent records (and vice versa), regardless of whether relevant predictors of range shift have actually been captured. Previous studies have warned against taking strong model performance on calibration data to indicate high predictive accuracy to a different time period [20], [24]–[26]; our results indicate that strong model performance in a different time period, as measured by widespread metrics, may not indicate high predictive accuracy either.
This highlights the importance of comparing forecasts to baseline predictions to determine the skill of the forecast vs. the basic stability of the pattern.
-
Most variation in the prediction accuracy of SDMs – as measured by AUC, sensitivity, CCRstable, CCRchanged – was among species within a higher taxon, whilst the choice of modelling framework was as important a factor in explaining variation in specificity (Table 4 and Table S4). The effect of major taxonomic group on the accuracy of forecasts was relatively small.
This suggests that it will be difficult to know if a forecast for a particular species will be good or not, unless a model is developed that can predict which species will have what forecast qualities.
-
The correct classification rate of grid squares that remained occupied or remained unoccupied (CCRstable) was fairly high (mean±s.d. = 0.75±0.15), and did not covary with species’ observed proportional change in range size (Figure 3B). In contrast, the CCR of grid squares whose occupancy status changed between time periods (CCRchanged) was very low overall (0.51±0.14; guessing randomly would be expected to produce a mean of 0.5), with range expansions being slightly better predicted than range contractions (0.55±0.15 and 0.48±0.12, respectively; Figure 3C).
This is a really important result and my favorite figure in this ms. For cells that changed occupancy status (e.g., a cell that was occupied at t_1 and unoccupied at t_2), most models had about a 50% chance of getting the change right (i.e., a coin flip).
-
The consensus method Mn(PA) produced the highest validation AUC values (Figure 1), generating good to excellent forecasts (AUC ≥0.80) for 60% of the 1823 species modelled.
Simple unweighted ensembles performed best in this comparison of forecasts from SDMs for 1823 species.
-
Quantifying the temporal transferability of SDMs by comparing the agreement between model predictions and observations for the predicted period using common metrics is not a sufficient test of whether models have actually captured relevant predictors of change. A single range-wide measure of prediction accuracy conflates accurately predicting species expansions and contractions to new areas with accurately predicting large parts of the distribution that have remained unchanged in time. Thus, to assess how well SDMs capture drivers of change in species distributions, we measured the agreement between observations and model predictions of each species’ (a) geographic range size in period t2, (b) overall change in geographic range size between time periods, and (c) grid square-level changes in occupancy status between time periods.
This is arguably the single most important point in this paper. It is equivalent to comparing forecasts to simple baseline forecasts, as is typically done in weather forecasting. In weather forecasting it is typical to talk about the "skill" of the forecast, which is how much better it does than a simple baseline. In this case the baseline is a species range that doesn't move at all. This would be equivalent to a "naive" forecast in traditional time-series analysis, since we only have a single previous point in time and the baseline is simply the prediction that this value does not change.
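A quick sketch of that comparison (my own illustration, not code from the paper), using generic array names and a standard skill-score convention: score the model's t2 predictions against both the observed t2 occupancy and a naive "no change" forecast that simply repeats the t1 occupancy.

```python
import numpy as np

def forecast_skill(occ_t1, occ_t2, predicted_t2):
    """All arguments are boolean arrays of occupancy over the same grid squares."""
    occ_t1, occ_t2, predicted_t2 = map(np.asarray, (occ_t1, occ_t2, predicted_t2))
    acc_model = np.mean(predicted_t2 == occ_t2)      # model accuracy at t2
    acc_naive = np.mean(occ_t1 == occ_t2)            # "range doesn't move" baseline
    changed = occ_t1 != occ_t2
    ccr_changed = np.mean(predicted_t2[changed] == occ_t2[changed]) if changed.any() else np.nan
    ccr_stable = np.mean(predicted_t2[~changed] == occ_t2[~changed]) if (~changed).any() else np.nan
    # Positive skill means the model beats simply assuming no range change.
    skill = (acc_model - acc_naive) / (1 - acc_naive) if acc_naive < 1 else np.nan
    return {"acc_model": acc_model, "acc_naive": acc_naive,
            "ccr_stable": ccr_stable, "ccr_changed": ccr_changed, "skill": skill}
```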
-
Although it is common knowledge that some of the modelling techniques we used (e.g., CTA, SRE) generally perform less well than others [32], [33], we believe that their transferability in time is not as well-established; therefore, we decided to include them in our analysis to test the hypothesis that simpler statistical models may have higher transferability in time than more complex ones.
The point that providing better/worse fits on held-out spatial training data is not the same as providing better forecasts is important, especially given the argument about simpler models having better transferability.
-
We also considered including additional environmental predictors of ecological relevance to our models. First, although changes in land use have been identified as fundamental drivers of change for many British species [48]–[52], we were unable to account for them in our models – like most other published accounts of temporal transferability of SDMs [20], [21], [24], [25] – due to the lack of data documenting habitat use in the earlier t1 period; detailed digitised maps of land use for the whole of Britain are not available until the UK Land Cover Map in 1990 [53].
The lack of dynamic land cover data is a challenge for most SDMs, and certainly for SDM validation using historical data. It would be interesting to know, in general, how much better modern SDMs become on held-out data when land cover is included.
-
Great Britain is an island with its own separate history of environmental change; environmental drivers of distribution size and change in British populations are thus likely to differ somewhat from those of continental populations of the same species. For this reason, we only used records at the British extent to predict distribution change across Great Britain.
This restriction to Great Britain for the model building is a meaningful limitation since Great Britain will typically represent a small fraction of the total species range for many of the species involved. However this is a common issue for SDMs and so I think it's a perfectly reasonable choice to make here given the data availability. It would be nice to see this analysis repeated using alternative data sources that cover spatial extents closer to that of the species range. This would help determine how well these results generalize to models built at larger scales.
-
- Oct 2016
-
www.businessinsider.com
-
With figures like those, it's clear that the education system isn't going away anytime soon.
How so?
-
Capterra notes that an average school spends an average of $30,000 to $50,000 per year just on paper, but reusable tech would completely eliminate that cost.
-
-
motherboard.vice.com
-
The resource-based economy goes like this: In the future robots will do all the jobs (including creating new robots and fixing broken one). Now, imagine the world is like a public library, where you can borrow any book you want but never own it. Fresco wants all enterprise like this, whether it’s groceries, new tech, gasoline, or alcohol. He wants everything free and eventually provided to us by robots, software, and automation.
I think this is achievable if we emphasize specialized libraries and cooperative models around resources (e.g., tool/tech libraries, food banks/co-ops).
-
- Sep 2016
-
www.educationdive.com
-
As many universities are being queried by the federal government on how they spend their endowment money, and enrollment decreases among all institutions nationally, traditional campuses will need to look at these partnerships as a sign of where education is likely going in the future, and what the federal government may be willing to finance with its student loan programs going ahead.
To me, the most interesting thing about this program is that it sounds like it’s targeting post-secondary institutions. There are multiple programs to “teach kids to code”. Compulsory education (primary and secondary) can provide a great context for these, in part because the type of learning involved is so broad and pedagogical skills are so recognized. In post-secondary contexts, however, there’s a strong tendency to limit coding to very specific contexts, including Computer Science or individual programs. We probably take for granted that people who need broad coding skills can develop them outside of their college and university programs. In a way, this isn’t that surprising if we compare coding to very basic skills, like typing. Though there are probably many universities and colleges where students can get trained in typing, it’s very separate from the curriculum. It might be “college prep”, but it’s not really a college prerequisite, and there isn’t that much support for it in post-secondary education. Of course, there are many programs, in any discipline, giving a lot of weight to coding skills. For instance, learners in Digital Humanities probably hone their ability to code at some point in their career, and it’s probably hard for most digital arts programs to avoid at least some training in programming languages. It’s just that these “general” programs in coding tend to focus almost exclusively on so-called “K–12 Education”.

That this program focuses on diversity is also interesting. Not surprising, as many such initiatives have to do with inequalities, real or perceived. But it might be where something so general can have an impact in Higher Education.

It’s also interesting to notice that there isn’t much in terms of branding or otherwise which explicitly connects this initiative with colleges and universities. Pictures on the site show (diverse) adults, presumably registered students at universities and colleges where “education partners” are to be found. But it sounds like the idea of a “school” is purposefully left quite broad or even ambiguous. Of course, these programs might also benefit adult learners who aren’t registered at a formal institution of higher learning, which would make them closer to “para-educational” programs. In fact, there might be something of a lesson here for the future of universities and colleges.
-
-
www.educationdive.com
-
Interactive whiteboards were all the rage in ed tech purchases several years ago, costing schools millions of dollars but gaining little in the classroom.
-
-
www.educationdive.com
-
We commonly look at Ivy League institutions as the standard of higher education in America, but the truth is that the majority of the nation's workforce, innovation identity and manufacturing futures are tied to those institutions which graduate outside of the realm of high achievers from wealthy families.
-
-
hybridpedagogy.org
-
frame the purposes and value of education in purely economic terms
Sign of the times? One part is about economics as the discipline of decision-making. Economists often claim that their work is about any risk/benefit analysis and isn’t purely about money. But the whole thing is still about “resources” or “exchange value”, in one way or another. So, it could be undue influence from this way of thinking. A second part is that, as this piece made clear at the onset, “education is big business”. In some ways, “education” is mostly a term for a sector or market. Schooling, Higher Education, Teaching, and Learning are all related. Corporate training may not belong to the same sector even though many of the aforementioned EdTech players bet big on this. So there’s a logic to focus on the money involved in “education”. Has little to do with learning experiences, but it’s an entrenched system.
Finally, there’s something about efficiency, regardless of effectiveness. It’s somewhat related to economics, but it’s often at a much shallower level. The kind of “your tax dollars at work” thinking which is so common in the United States. “It’s the economy, silly!”
-
-
www.chronicle.com
-
often private companies whose technologies power the systems universities use for predictive analytics and adaptive courseware
-
the use of data in scholarly research about student learning; the use of data in systems like the admissions process or predictive-analytics programs that colleges use to spot students who should be referred to an academic counselor; and the ways colleges should treat nontraditional transcript data, alternative credentials, and other forms of documentation about students’ activities, such as badges, that recognize them for nonacademic skills.
Useful breakdown. Research, predictive models, and recognition are quite distinct from one another, and the approaches to data they imply are quite different. In a way, the “personalized learning” model at the core of the second topic is close to the Big Data attitude (collect all the things and sense will come through eventually), with corresponding ethical problems. Though projects vary greatly, research has a much more solid base in both ethics and epistemology than the kind of Big Data approach used by technocentric outlets. The part about recognition, though, opens the most interesting door. Microcredentials and badges are part of a broader picture. The data shared in those cases need not be so comprehensive, and learners have a lot of agency in the matter. In fact, when Charles Tsai (then at Ashoka) interviewed Mozilla executive director Mark Surman about badges, the message was quite clear: badges are a way to rethink education as a learner-driven “create your own path” adventure. The contrast between the three models reveals a lot: from the abstract world of research, to the top-down models of Minority Report-style predictive educating, all the way to a form of heutagogy. Lots to chew on.
-
-
tressiemc.com
-
mis-read or failed to read the labor market for different degree types.
Sounds fairly damning for a business based on helping diverse students with the labour market…
-
The aggressive recruiting did not extend to aggressive retainment and debt management.
-
If an organization works — and extracting billions of dollars in federal student aid money suggests ITT worked for a long time — then who it most frequently and efficiently works best for is one way to understand the organization.
-
-
campustechnology.com
-
The news on the self-paced e-learning industry is so bad, Ambient Insight will no longer publish commercial syndicated reports on the industry
-
-
www.educationdive.com
-
E-learning systems revenues in the United States and China are expected to drop by more than $6 billion annually, according to a new study.
-
-
-
The importance of models may need to be underscored in this age of “big data” and “data mining”. Data, no matter how big, can only tell you what happened in the past. Unless you’re a historian, you actually care about the future — what will happen, what could happen, what would happen if you did this or that. Exploring these questions will always require models. Let’s get over “big data” — it’s time for “big modeling”.
-
-
www.edsurge.com
-
University staff who are buying data-driven technology
-
-
www.fastcompany.com
-
"At the end of the day, the true value proposition of education is employment,"
-
- Aug 2016
-
www.elearnspace.org
-
Content is the least stable and least valuable part of education.
-
- Jul 2016
-
www.thewpcrowd.com
-
there’s only a fine line between “WordPress in Higher Education” and “WordPress in the Enterprise”.
-
-
www.noshelfrequired.com
-
market for
-
institutions that are preparing tomorrow’s leaders
-
-
www.nacubo.org
-
Montreal: Melding of Old and New
-
-
-
education vertical
-
-
hackeducation.com
-
I could have easily chosen a different prepositional phrase. "Convivial Tools in an Age of Big Data.” Or “Convivial Tools in an Age of DRM.” Or “Convivial Tools in an Age of Venture-Funded Education Technology Startups.” Or “Convivial Tools in an Age of Doxxing and Trolls."
The Others.
-
education technology has become about control, surveillance, and data extraction
-
-
medium.com
-
efforts to expand worldwide
At the risk of sounding cynical (which is a very real thing with annotations), reaching for a global market can be a very imperialistic move, regardless of who makes it.
-
ironically while continuing to employ adjunct faculty
Much is hiding in this passing comment. As adjuncts, our contributions to the system are perceived through the exploitation lens.
-
afford a university education
-
-
hackeducation.com
-
The military’s contributions to education technology are often overlooked
Though that may not really be the core argument of the piece, it’s more than a passing point. Watters’s raising awareness of this other type of “military-industrial complex” could have a deep impact on many a discussion, including the whole hype about VR (and AR). It’s not just Carnegie-Mellon and Paris’s Polytechnique («l’X») which have strong ties to the military. Or (D)ARPANET. Reminds me of IU’s Dorson getting money for the Folklore Institute during the Cold War by arguing that the Soviets were funding folklore. Even the head of the NEH in 2000 talked about Sputnik and used the language of “beating Europe at culture” when discussing plans for the agency. Not that it means the funding or “innovation” would come directly from the military but it’s all part of the Cold War-era “ideology”. In education, it’s about competing with India or Finland. In other words, the military is part of a much larger plan for “world domination”.
-
-
www.theguardian.com
-
For-profits typically take those funds and spend way more on advertising and profit distribution than on teaching.
Don’t know what the stats are for “non-profit universities and colleges”, but it does feel like an increasing portion of their budgets goes to marketing, advertising, PR, and strategic positioning (at least in the United States and Canada).
-
The phrase “diploma mills” came into popular usage during the era.
-
A similar conclusion was reached by the medical (pdf) and legal professions of the late-19th and early-20th centuries.
Somewhat surprising, in the current context.
-
This model might make sense if our goal was to produce cars, clothing, and some other commodity more efficiently. But a university education doesn’t fit into this paradigm. It isn’t just a commodity.
In education, as in health, things get really complex when some people have an incentive for others not to improve.
-
The idea is that higher education is like any other industry.
-
-
www.seattletimes.com
-
“In five to 10 years, most students will buy their postsecondary education differently from the way they buy it now,”
-
-
www.businessinsider.com
-
"We know the day before the course starts which students are highly unlikely to succeed,"
Easier to do with a strict model for success.
-
-
medium.com
-
improving teaching, not amplifying learning.
Though it’s not exactly the same thing, you could call this “instrumental” or “pragmatic”. Of course, you could have something very practical to amplify learning, and #EdTech is predicated on that idea. But when you do, you make learning so goal-oriented that it shifts its meaning. Very hard to have a “solution” for open-ended learning, though it’s very easy to have tools which can enhance open approaches to learning. Teachers have a tough time and it doesn’t feel so strange to make teachers’ lives easier. Teachers typically don’t make big purchasing decisions but there’s a level of influence from teachers when a “solution” imposes itself. At least, based on the insistence of #BigEdTech on trying to influence teachers (who then pressure administrators to make purchases), one might think that teachers have a say in the matter. If something makes a teaching-related task easier, administrators are likely to perceive the value. Comes down to figures, dollars, expense, expenditures, supplies, HR, budgets… Pedagogy may not even come into play.
-
-
www.alfiekohn.org
-
districts are pouring money into computers and software programs—money that’s badly needed for, say, hiring teachers
-
despite the fact that it’s remarkably expensive
-
spend oodles of money
-
-
www.washingtonpost.com
-
the largest consumer of college graduates
-
there is widespread agreement among college officials and policymakers that the current accreditation system is broken
-
-
www.insidehighered.com
-
Funding for humanities labs
-
our faculty are evaluated on not just research, teaching and service but also collaboration
Valuing the teaching profession.
-
- Jun 2016
-
www.educationdive.com
-
failure to find revenue and support from unconventional sources
-
-
www.socrative.com
-
However, you may be required to pay fees to use certain features or content made available through the Site and Services.
Wish they said more. No-cost solutions are neat for one-offs, but pedagogues should be wary of building their practice on services which may start requiring payment.
-
-
www.eschoolnews.com
-
timely
Time-sensitive, mission-critical, just-in-time, realtime, 24/7…
-
-
listedtech.com
-
Am I asking too many questions?
No.
-
-
opencontent.org
-
many more people understand cost than understand pedagogy
While this may be true, it sure is sad. Especially as the emphasis on cost is likely to have negative impacts in the long run.
-
- May 2016
-
adamcroom.com
-
To me, this is what OER for the web should start to reflect.
You mean it’s not just about the price of textbooks??
-
- Jan 2016
-
www.readability.com
-
At what point does payment occur, and are you concerned with the possible perception that this is pay-to-publish? Payment occurs as soon as you post your paper online. I am not overly concerned with the perception that this is pay-to-publish because it is. What makes The Winnower different is the price we charge. Our price is much much lower than what other journals charge and we are clear as to what its use will be: the sustainability and growth of the website. arXiv, a site we are very much modeled after does not charge anything for their preprint service but I would argue their sustainability based on grants is questionable. We believe that authors should buy into this system and we think that the price we will charge is more than fair. Ultimately, if a critical mass is reached in The Winnower and other revenue sources can be generated than we would love to make publishing free but at this moment it is not possible.
-
-
www.linkedin.com
-
Job function: Other, Marketing
But, but… Eric said it wasn’t marketing!
-
-
www.huffingtonpost.com
-
In 2014, U.S. based education technology (EdTech) companies raised $1.2 billion in funding across 357 venture rounds.
-
-
pressblog.uchicago.edu
-
one wonders about the relationships between scholarship, technology, and the academic institution that engendered that turn from printing materials to printing ideas.
One sure does.
-
- Dec 2015
-
larrycuban.wordpress.com
-
numbers have to be interpreted by those who do the daily work of classroom teaching
-
-
bits.blogs.nytimes.com
-
nearly $8 billion prekindergarten through 12th-grade education technology software market
-
-
www.knewton.com
-
purchasable à la carte
How many units of learning per dollar?
-
no research
In direct opposition to the model for most universities these days. So that may be the fork in the road. But there are more than two paths.
-
Universities bundle services like mad
Who came up with such a scheme? A mad scientist? We’re far from Bologna.
-
perfect storm of bundling
-
only unbundling health clubs suffer
There might be something about the connection between learning and “health & wellness”.
-
Unbundling has played out in almost every media industry.
And the shift away from “access to content” is still going on, a decade and a half after Napster. If education is a “content industry” and “content industries” are being disrupted, then education will be disrupted… by becoming even more “industrial”.
-
consumer choice will inevitably force them to unbundle.
The battle is raging on, but the issue is predetermined.
-
-
edumorphology.com
-
Yes, my intention was to show the most easily replaced in dark and move it to the least easily replaced.
One linear model, represented in something of a spiral… Agreed that the transformative experience is tough to “disrupt”, but the whole “content delivery” emphasis shows that the disruption isn’t so quick.
-
-
www.forbes.com
-
customers become less willing to pay
There are a few key cases, here:
- a) Public Education (much of the planet)
- b) Parent-Funded Higher Education (US-centric model)
- c) Corporate Training (emphasis for most learning platforms, these days)
- d) For-Profit Universities (Apollo Group and such)
- e) xMOOCs (learning as a startup idea, with freemium models)
- f) Ad-Supported Apps & Games (Hey! Some of them are “educational”!)
-
In every industry, the early successful products and services often have an interdependent architecture—meaning that they tend to be proprietary and bundled.
The idea that there’s a “Great Unbundling of (Higher) Education” need not be restricted to the business side of things, but it’s partly driven by those who perceive education as an “industry”. Producing… graduates?
-
-
mfeldstein.com
-
More consolidation in the LMS market?
-
-
mfeldstein.com
-
course design is more important than the LMS
In all the platform news, we can talk about “learning management” in view of instructional and course design. But maybe it even goes further than design, into a variety of practices which aren't through-designed.
-
-
mfeldstein.com
-
It is possible to achieve a more humane and personal education at scale
Important claim, probably coming from the need for reports which answer the “But does it scale?” question.
-
-
robinderosa.net
-
The goal of education is for the educator to become less and less needed for learners to learn.
The reverse of the typical “goal displacement”. Instead of focusing on ensuring our continued employment as “instructors”, we want to make sure learning happens. Deep down, we know we’ll find ways to work, no matter what happens. The comparison with health can be interesting. If doctors had an incentive to keep people sick, society wouldn’t benefit much. Allegedly, Chinese healthcare provides incentives for doctors to help people stay healthy. Sounds like it’d make sense, somehow. Yet education and health are both treated like industries. We produce graduates, future employees, etc. Doctors produce people who fit a pattern of what it means to be healthy in a given social context. There’s even a factory-chain metaphor used when some people apply “lean management” to hospitals or colleges. Not that the problem is with the management philosophy itself. But focusing so much on resource allocation blinds us from a deep reality: as we are getting healthier and more “learned”, roles are shifting.
-
- Nov 2015
-
www.nytimes.com
-
Only four years old, Twitch already has 100 million viewers who consume 20 billion minutes of gaming every month. According to one 2014 study, Twitch is the fourth-most-visited site on the Internet during peak traffic periods, after Netflix, Google and Apple and above Facebook and Amazon. (Amazon bought Twitch in 2014 for about $1 billion, all of it cash.) And there is money in it for the gamers themselves, called ‘‘streamers’’: Fans can subscribe to channels for extra access, or they can send donations of any amount. Streamers with modest followings can make respectable incomes — hundreds or thousands of dollars a month — and the very top streamers are getting rich.
(This is entirely peripheral to the subject of the article. I am making note of it because I had barely heard of Twitch until recently.)
-
-
-
PC gaming has enthusiastically embraced crowdfunding. On Kickstarter, video games (most of which are PC games) is the highest-funded category
-
Another form of video game remixing happens on broadcasting sites like Twitch, where you can watch live videos of people playing games (while they chat with the audience — the end result is an interesting mix between video games and talk radio).
-
Remixing books is popular on services like Wattpad where users write fanfiction inspired by books, celebrities, movies, etc. From a legal perspective, some fanfiction could be seen as copyright or trademark infringement. From a business perspective, the book industry would be smart to learn from the PC gaming business. Instead of fighting over pieces of a shrinking pie, try to grow the pie by getting more people to read and write books.
-
In the gaming world, “mods” are user created versions of games or elements of games. Steam has about 4500 games but about 400 million pieces of user-generated content. Dota itself was originally a user-created mod of another game, Warcraft 3. Contrast this to the music industry, which relies on litigation to aggressively stifle remixing and experimentation.
-
PC games are so popular they can also make money from live events. Live gaming competitions have become huge: over 32M people watched the League of Legends championship this year, almost double the number of people who watched the NBA finals.
-
The types of games on Steam vary widely, as do the business models. The most popular game, Dota 2, is free. It makes money selling in-app items, mostly “cosmetic items” that alter the appearance of characters.
-
-
www.edsurge.com
-
Non-Traditional Students: The New Majority
Education, she sure is changing.
-
- Apr 2015
-
-
2. Is it reasonable to compare the costs of xMOOCs to the costs of online credit courses? Are they competing for the same funds, or are they categorically different in their funding source and goals? If so, how?
MOOCs are a community service for which, I expect, every university has a budget. It is the universities' moral obligation to serve interested groups, communities, and society with MOOCs. It is mutually beneficial: the universities get their brand, research, and teaching practices distributed, while the public shares with them personal data, comments, and opinions (which are extremely costly; compare this with the cost of those massive public opinion surveys conducted prior to election campaigns, or with market research)... Hopefully the universities and academia can add ethical rigor to the way these big masses of private data are used.
-
it is difficult to see how publicly funded higher education institutions can develop sustainable business models for MOOCs;
-