893 Matching Annotations
  1. May 2021
  2. Apr 2021
    1. keeping a reading journal to write to yourself about how the reading and your own thinking and purposes/plans are connecting and speaking to one another.

      The benefit of a reading journal is that it's all in one place: the citation, your thoughts on the topic, and links to other readings and thoughts.

      For me, it goes like this:

      1. Read things (web, ebook, audio book, physical book, PDF, etc.) and make notes through annotation.
      2. Capture those things in Zotero (it extracts metadata automatically), then add tags, relate items, and add literature notes.
      3. Iteratively add what I learn into Obsidian.
    1. We might think of this as citation(1)=writer steers.

      The writer is adding their own voice to the text.

    2. One of the most prominent ways that authority is signalled in an academic text is via citation.

      This seems counter-intuitive; we demonstrate our authority by referring to the writing of others. I suppose it shows our understanding and command of the body of literature surrounding the topic.

    1. the most complex object in the universe, the brain.

      It's a bit presumptuous to assume that the most complex object in the universe just happens to be in our heads.

    1. Now that more work is being done online, many people face global competition. Mark Ritson tells about his wife’s yoga instructor, Gary, who went online during a pandemic lock-down. He previously offered at-home personalized instruction, driving to his mostly rural clientele. When the lock-down was over he decided to only offer his services online and avoid all the time and expense of driving. But Mark’s wife now realized that she was no longer limited to Gary. “In the old, pre-Covid world of yoga my wife was limited to Gary or an elderly woman who creaked a lot and smelled of cheese. But with the opening of this new virtual yoga window, she now has a dizzying array of practitioners keen to work with her from all corners of the globe.” By moving his business online, Gary had unknowingly increased his competition to the entire world

      We're going to see the same issues when health professionals start offering their services online.

    1. The core problem here is that we really don’t know exactly how the brain learns information or skills. And for what we do know, we don’t have the ability to directly observe when it is happening in the brain. That would be painful and dangerous. So we have to rely on something external to the brain serving as evidence that learning happened.

      What we call assessment is really an attempt to create a proxy indicator for what we call learning.

      It seems weird to think of it that way: we don't really understand learning, so we create tasks for students to complete in the hope that those tasks somehow give us some insight into the thing we don't really understand.

    2. quizzes, tests, exams, assignments – none of those can measure learning or skill mastery. Not directly at least.
    3. exams or tests themselves are not essential
    4. If you think that most of the students in your course would probably cheat

      What would it say about you if this is what you think of your students?

    5. The main reason for all of this confusion about the research is that there is little consistency in the time frame, the definition of what counts as “cheating,” and how the frequency of cheating is measured.

      This is in keeping with the trend of poorly designed - and poorly reported - educational research.

    6. there is the claim that McFarland makes that “a separate, peer-reviewed research paper published in May of 2020 in the Journal of the National College Testing Association also confirmed the link between online classes and dishonesty.” But that is not what the paper said at all. That paper looks at the differences between proctored and unproctored exams, and makes a lot of claims about how online learning has the potential for more dishonesty. But it does not confirm a link between dishonesty and online courses, because it was not looking for that.

      I hate it when people do this: linking to marginally relevant articles in support of their dodgy claims.

    7. In fact, the research is actually all over the place. You will see numbers anywhere between 2% and 95%. As one research paper puts it: “the precipitating factors of academic misconduct vary across the literature … The research of academic integrity is often unsystematic and the reports are confusing.”

      I wonder how much of this pivots on the definition of cheating. If students talk to each other about their assignments over lunch, is that "cheating"? Obviously not. But if this is OK then why can't they collaborate in other ways?

    8. they can get feedback until they know they are going to score what they want

      This is the idea behind contract grading.

    9. This article is ostensibly a response to the use of proctoring software in higher education.

      But in order to do that properly the author has also delved into learning and assessment.

      It's a well-written piece that questions some of our taken-for-granted assumptions around assessment.

    10. How many of us sit around answering test questions (of any kind) all day long for our jobs? If you look to the areas of universal design for learning and authentic assessment, you can find better ways to assess learning. This involves thinking about ways to create real world assignments that match what students will see on the job or in life.

      And this is often far more challenging for students to do well than simply memorising the content for the test.

    1. This post articulates a lot of what I've been thinking about for the past 18 months or so, but it adds the concept of community integration.

      Interestingly, this aligns with the early, tentative ideas around what the future of In Beta might look like as a learning community, rather than a repository of content.

    2. The potential to build community-curated knowledge networks remains largely untapped. There are reasons to be optimistic; the economic feasibility of paid communities, a renewed interest in curation, a slow move away from big social, and an improved understanding of platform incentives. All combined, this will lead to communities that are more sustainable, aligned, and intentional.

      I agree with all of this.

    3. given the chat-based nature of these platforms, it’s easy to miss the best content

      It can't be sorted by topic, though. If it's not a chronological stream, then what is it?

      Mike Caulfield introduced (to me anyway) the concept of streams and gardens, which I've found to be a valuable way of thinking about my own curation practices.

      What would this "garden" look like? A place where you could serendipitously find something interesting.

    4. diagram

      I know that these kinds of diagrams can't include every tool but I'm surprised that Obsidian isn't in the list of knowledge management tools.

    5. relation

      In the table below, I'm not sure how you can say that we have poor search today; any basic search engine is pretty good even at natural language queries.

    6. The conversation around curation thus far has focused too much on reducing the amount of information

      This isn't completely true. There's also been a big emphasis on increasing the quality of the information you consume.

    7. we should be able to reference it if we’re building a company in the design tools space

      You don't need to read and process everything that's relevant now; you only need to have it on hand for when you need it.

      But why wouldn't you just search for what you need, when you need it? Maybe the value of this approach is that you've already got a small set of high-value, information-dense, useful resources that you - and your community - have curated.

    8. The architecture of digital platforms encourage us to consume information because it’s in front of us, not because it’s relevant

      But this can change with a different algorithm.
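
      As a rough, hypothetical sketch of what "a different algorithm" could mean (the items, topics, and interests below are all made up): the same pool of items reads very differently when ranked by relevance to a reader's stated interests rather than by recency.

      ```python
      # Hypothetical sketch: the same items surfaced two ways. A recency-based
      # feed shows whatever is newest; a relevance-based feed scores items
      # against the reader's stated interests. All data here is invented.
      from dataclasses import dataclass

      @dataclass
      class Item:
          title: str
          age_hours: float
          topics: frozenset

      def recency_rank(items):
          # Newest first: what most default feeds do.
          return sorted(items, key=lambda i: i.age_hours)

      def relevance_rank(items, interests):
          # Most topical overlap first: "relevant" rather than "recent".
          return sorted(items, key=lambda i: len(i.topics & interests), reverse=True)

      items = [
          Item("Hot take on today's news", 1, frozenset({"news"})),
          Item("Deep dive on spaced repetition", 72, frozenset({"learning", "memory"})),
          Item("Guide to note-taking workflows", 200, frozenset({"pkm", "learning"})),
      ]

      print([i.title for i in recency_rank(items)])
      print([i.title for i in relevance_rank(items, frozenset({"learning", "pkm"}))])
      ```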

    9. the goal is not to consume more information

      My goal is to filter from a smaller number of sources that provide "better" information.

      Better = information that is more closely aligned with achieving goals that are important to me.

    10. “how do we collect, store, and contextualize the information we consume?”

      This is the essence of the personal knowledge management movement that's been growing in the last 2-3 years.

    11. we will pay people with good taste to help us sort through the ever-growing mass of information

      I don't know if I'm quite there yet. What will these paid curators offer that I don't get right now from following selected people on social media?

    12. our brains are not  equipped to deal with this abundance. 

      We'll also see machine curation of this content, algorithmically filtered to suit our preferences. I know that this has a bad rap at the moment but it's not going to go away and it's only going to get better.

    1. It stores fragments of the text you’re working on, based on the research you completed and fed into its archive.

      My workflow is:

      1. Filter resources into Zotero.
      2. Create a single note in Zotero with all of my thoughts on the resource, based on excerpts.
      3. Use those notes as raw material to prepare a series of permanent, atomic, linked notes in Obsidian.
    2. expect your first draft to be imperfect. It will require work and polishing anyway, so it’s only reasonable to complete the first draft quickly: the sooner you finish, the sooner you’ll know what’s left to do

      See McPhee, J. Draft no. 4.

      Basically, expect to write 4 drafts of your work before it's good enough. This obviously doesn't relate to blog posts, etc.

    3. prepare research and the most of your writing before you compile your first draft

      This is connected to the idea that you should "always be writing".

    1. What do you think of the concept of the "Open Scholar"? If you are an academic, does this appeal to you? Would you dare publish your preliminary thinking, your drafts, your experiments, your data, your false starts and failures? If you are not an academic, does Open Scholarship sound like it would be a greater benefit for the advancement of knowledge or the improvement of teaching?

      I do all of these things, and I publish in journals that generate the subsidies that provide a 3rd stream income for my institution. Again, why are these activities positioned in opposition to each other?

    2. He or she is not impatient with amateurs.

      Some of the kindest and most generous colleagues I've come across have been full professors. To set up "traditional" scholars as being unresponsive, etc., is unfair.

    3. [B]ut it does of course clash with traditional ways of publishing knowledge

      Why, "of course"? Surely you can share your process and preprint in the open, and submit your final article to a journal, preferably one that's open access. Why are these processes positioned as being in conflict?

    4. there is great value to others to see the methods used in pursuing knowledge, the various attempts in pursuing solutions (failures as much as successes), the data generated (especially beyond the subset of data used for drawing conclusions in the study at hand), and the various resources used to mount the investigation (whether that is lab equipment, social resources, bibliography, theory, or protocols). Again, there is great value in others being allowed to see this whole context of inquiry, not just the final outcome for the specific study at hand.

      This is great, but it's a sideshow as far as universities are concerned, because they're subsidised by publication of the final product. A researcher who only shares their process (and even their products) in the open doesn't generate the subsidy that universities rely on.

    5. the Open Scholar is someone who makes their intellectual projects and processes digitally visible and who invites and encourages ongoing criticism of their work and secondary uses of any or all parts of it--at any stage of its development.

      Yes, I agree. But this is never going to get you tenure. You have to do this AND publish in high profile journals.

    6. Because consequential intellectual work takes place in myriad ways outside of traditional scholarly genres, that's why, and the digital realm is ready to capture, organize, value, and disseminate those other ways of generating knowledge.

      But your institution is funded - significantly - by government subsidies linked to where academics publish. To suggest that sharing scholarly work on blogs is the endgame is to deliberately obscure the point. Academics don't want to settle for anything other than the broadest reach for their work.

    7. There have been some exceptions along the way, but generally speaking, the traditional scholar truly doesn't care about reaching anyone except those peers whose judgment determines his or her reputation.

      This is just wrong. I have no idea who thinks this but in 15 years of academia I've never come across this opinion. Every scholar I know wants their work to reach as wide an audience as possible.

    8. It's as unethical as it is unnecessary, but it will continue until institutions learn to be more publicly responsible with their intellectual resources, or until scholars reject the restrictive identity they are held to through the traditional reward system.

      No, it'll only change when the funding model for research dissemination changes. Universities don't want their scholarship hidden behind paywalls; it's that the most prestigious journals have paywalls. Happily, this is changing, with more institutions (and journals) supporting open access publication.

    9. institutions of higher education are invested in keeping their scholars and those scholars' intellectual products limited and cloistered.

      I certainly don't believe that this is common, at least not in the last 10 years. My own experience (granted, it's only my experience) has been that my institution is looking to actively promote scholarship more broadly.

    1. The value of companies will diminish in the short-term, too, though they will continue to perform quite well over time.

      I'm confused. Companies will control the IP of AI, as well as the robots creating all the basic goods and offering services. How will companies not be astronomically wealthy?

    2. Poverty would be greatly reduced and many more people would have a shot at the life they want.

      But I nonetheless think that this is true.

    3. Every citizen would therefore increasingly partake of the freedoms, powers, autonomies, and opportunities that come with economic self-determination.

      This is a bit too rose-tinted in my opinion.

    4. American Equity Fund

      I guess I can see the point of making this "Americans-only", but I wonder how the rest of the world will respond. It makes me think of a dystopian world where those with the capacity to create AI and high-end robots will be OK, and everyone else will be left to survive as best they can. I don't see how this ends up as anything other than a massively stratified planet, where there are only two strata.

    5. Rising costs in government-funded industries would face real pressure as more people chose their own services in a competitive marketplace.

      The high cost of healthcare in the US isn't a problem of increasing costs in government-funded systems; it's private industry that's causing the massive costs in healthcare.

    6. All citizens over 18 would get an annual distribution, in dollars and company shares, into their accounts. People would be entrusted to use the money however they needed or wanted—for better education, healthcare, housing, starting a company, whatever.

      I don't see how this leads to lower costs for education, housing, and healthcare. We assume that the companies that have become fantastically wealthy are doing so because they control the algorithms responsible for providing these services. Who's to say that they'll lower costs?

    7. Similarly, we can imagine AI doctors that can diagnose health problems better than any human, and AI teachers that can diagnose and explain exactly what a student doesn’t understand.

      I'm not sure I follow this line of reasoning. I think the point being made is that, when we remove humans from the process, it gets cheaper. But I don't see insurance companies, who basically run healthcare in the US, giving up their profits simply because software is making more choices. Similarly, the education sector is currently seen as a profit-generating machine. Why would companies not do everything they can to make more money, simply because software is doing the teaching?

    8. The best way to increase societal wealth is to decrease the cost of goods

      See McAfee, A. (2019). More from Less: How We Finally Stopped Using up the World - And What Happens Next. Simon & Schuster, Limited.

    9. To the three great technological revolutions–the agricultural, the industrial, and the computational–we will add a fourth: the AI revolution

      For more context, see Schwab, K. (2017). The Fourth Industrial Revolution. Currency.

      Maybe also Brynjolfsson, E., & McAfee, A. (2016). The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (1st edition). W. W. Norton & Company.

    10. Compare how the world looked 15 years ago (no smartphones, really), 150 years ago (no combustion engine, no home electricity), 1,500 years ago (no industrial machines), and 15,000 years ago (no agriculture)

      Such a difficult comparison to make, especially since there are technological innovations happening all the time, and putting them into a list like this seems a bit like cherry-picking.

    1. The metadata will also contain the type of review, stating whether it is a referee report, author response, or community comment, etc. This allows accurate reporting on whether the peer review is happening within a traditional editorial process or elsewhere.

      The metadata is very granular, specifying the type of review down to individual components of the entire process.

    1. There definitely seems to be quite a bit of innovation going on in the academic publishing space, which is encouraging.

      But I'm still reluctant to outsource a journal's workflow, including review and hosting options, to a 3rd party like Scholastica or anyone else. The benefits are great, but what's to stop them from going away, along with your workflow?

    2. The overlay model presents a lean publishing approach that eliminates the need for most journal production processes like PDF formatting.

      I wonder how many authors still expect a version of the final article to be available as a PDF.

    3. Overlay journals are peer-reviewed digital Open Access publications that host all of their articles on a preprint server rather than a journal website.

      Would this also work in the other direction, i.e. having a preprint hosted on the journal website?

    4. Crossref announced a new schema for registering preprint DOIs in 2016, it also helped pave the way for more widespread preprint acceptance. The schema makes it possible to register DOIs for preprints and link them to the DOIs of published articles, setting a precedent that preprints will remain permanently available and creating a more formal means for publishers to distinguish preprints from published articles.

      The preprint and the final version are separate entities.

    5. But it has also magnified the potential for unvetted manuscripts, or worse, pseudo-science, to fuel the spread of misinformation.

      Indeed.

    6. The pandemic brought the most extreme surge in preprint posting to date, with a Nature analysis finding that “more than 30,000 of the COVID-19 articles published in 2020 were preprints — between 17% and 30%” of the total.

      For me, the most interesting point in this sentence is the sheer number of Covid-19 articles published in 2020; more than 30,000 of them were preprints alone. I wonder how many were truly useful.

    1. the sensitive nature and business value of such healthcare data also means its usage is highly regulated and not likely to be shared freely. Even if access were granted, it would still require considerable time, effort and expense to curate and maintain the kind of quality desired by fellow developers to train AI models

      Health data is time-consuming and expensive to collect, annotate/label, and maintain.

    1. The insertion of an algorithm’s predictions into the patient-physician relationship also introduces a third party, turning the relationship into one between the patient and the health care system. It also means significant changes in terms of a patient’s expectation of confidentiality. “Once machine-learning-based decision support is integrated into clinical care, withholding information from electronic records will become increasingly difficult, since patients whose data aren’t recorded can’t benefit from machine-learning analyses,” the authors wrote.

      There is some work being done on federated learning, where the ML model is brought to decentralised data that stays in place with the patient, so that their data remains private.
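
      A minimal sketch of that idea (federated averaging), assuming a toy linear model and entirely hypothetical site data; the point is that only model weights leave each site, never the patient records.

      ```python
      # Toy federated averaging (FedAvg) sketch: each site trains locally on its
      # own data and only the resulting weights are shared and averaged.
      import numpy as np

      def local_update(weights, X, y, lr=0.01, epochs=5):
          """Train a linear model locally; raw data never leaves the site."""
          w = weights.copy()
          for _ in range(epochs):
              grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
              w -= lr * grad
          return w

      def federated_round(global_weights, sites):
          """One round: sites train locally, then weights are averaged by data size."""
          local_weights = [local_update(global_weights, X, y) for X, y in sites]
          sizes = [len(y) for _, y in sites]
          return np.average(local_weights, axis=0, weights=sizes)

      # Hypothetical usage: three 'hospitals', each keeping its patient data local.
      rng = np.random.default_rng(0)
      sites = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
      w = np.zeros(3)
      for _ in range(20):
          w = federated_round(w, sites)
      ```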

    2. The one thing people can do that machines can’t do is step aside from our ideas and evaluate them critically.

      Surely not? Human beings are terrible at objectively evaluating their own thinking.

      See Sherbino, J., Kulasegaram, K., Howey, E., & Norman, G. (2014). Ineffectiveness of cognitive forcing strategies to reduce biases in diagnostic reasoning: A controlled trial. CJEM, 16(01), 34–40. https://doi.org/10.2310/8000.2013.130860

      Also, see anything by Daniel Kahneman.

    1. However, I would advise making time regularly, whatever that looks like for you, and putting it into your diary as a meeting with your writing.

      Set aside time to write every day and accept that some days it's not going to work out. But by starting with that time set aside, it's easier to actually do it.

    2. I don’t think we find writing time – I think we have to make it.

      You have to start by setting aside the time you want to write (or read, or think, or take notes) and then fit everything else around that. Begin with, "What is valuable to me?" and then work from there.

    3. Writing time is less about hours and minutes, I find, and more about space in my head

      Indeed. When you have the space to figure things out, the writing can happen quickly.

    4. I don’t have time to do the things I need to do to make it possible for me to write. I don’t have time to read, and to make notes. I don’t have time to think about all I have read and make connections and have realisations and see a paper structure emerging from that thinking, scribbling and reading. I may have physical time, but my head is so full of all these other things that I find I need more than just an hour or two here and there to get into the right headspace and create writing time.

      Maybe it helps not to think of these things as separate, isolated activities. When they're all part of a continuum of knowledge work, the reading, thinking, note-taking, and writing all merge into each other.

    1. There was so much reading and thinking that went on during my PhD and early postdoc years and not many papers, but then there were a few years of several papers and a book. But this “productivity” was only possible because I took that time before (and also to a lesser extent during) all the writing and publishing to read, scribble, think with others and for myself.

      A few years ago I had this same troubling thought. I saw the significant output that came from my PhD and worried that I wasn't hitting those same targets. But then I remembered how much of my daily work was devoted to the PhD. Without kids. Without the admin overhead of running a department. I can't put the same amount of time into a project now as I did then. Not in a sustainable way.

    2. We might be seen to not be doing anything. How many times have I spent a day reading and writing in my reading journal and thinking and then caught myself thinking that I have nothing to show for a day’s work?

      I've spent the past 18 months thinking (and not writing very much) about the process of knowledge work and how to more effectively "do" it. I don't have much to show for it other than a reasonably comprehensive set of notes. That no-one else sees. It definitely makes me feel a bit nervous.

    3. We often think with others – “real” others like supervisors, co-researchers and critical friends and “imagined” others like the authors of the texts we are reading and working with.

      See Ahrens, S. (2017). How to take smart notes: One simple technique to boost writing, learning and thinking-- for students, academics and nonfiction book writers. Createspace Independent Publishing.

    4. It is an active process of engaging with an argument, with evidence, with methodology and findings, with underlying principles that shape what counts as valid knowledge and also credible ways of sharing that knowledge with other researchers and readers. It involves connecting what we are reading to the research or the practice work we are doing and either fitting the new knowledge into an existing frame of reference, or adjusting our frame of reference if it is challenged by this new knowledge.

      Reading IS thinking, in the same way that this process of annotation is an interaction with the author.

    5. We talk and talk about writing, but we spend far less time, comparatively, talking about reading and thinking

      One of your tasks should be "spend an hour reading every day".

    6. write in a slower, quieter way

      I've tried building simple outlines for pieces in my head while walking (or running). I've enjoyed it.

    7. There is no time. There are just tasks, one after another.

      What if one of those tasks was "spend an hour walking in the mountain, not thinking of anything specific, just walking" as a way to create space for those creative connections to bubble up?

    8. Thinking is part of everything we do. But, thinking is not as visible as writing and so we often underestimate how important it is and how much time we need to make for this vital labour.

      The example I always give is that if I'm walking on the mountain on the weekend and have a novel insight that I incorporate into a paper, am I "working"? Do I bill my university for overtime? Of course not. The whole idea of a Mon-Fri, 9-5 doesn't really fit. How do we help academics create more space in their days for those creative insights?

    9. ‘How much time do we need to think, and how do we use this time effectively?’

      I don't think of writing, reading, and thinking as being different activities. I think of writing and reading as thinking.

    10. Do we also need to actively make and protect time to read, and crucially, time to think?

      I've been arguing for this in academia for a few years now. How do I carve out time to think, not only for myself but also for the staff members in my department? How do we reduce the admin load so that academics can find time to think?

    1. the more I know about a topic, the longer integrating a note properly takes.

      In my experience this is because the note I'm working on - which I know something about - is often connected to other notes. And so the process of integrating the new information sometimes requires quite far-reaching edits across other notes in order to capture the nuance of the new note.

    2. When we process all the stuff we’ve collected, we can allow ourselves to be picky.

      Be liberal with what you initially capture; you can decide whether or not it's all useful later, when analysing what you've captured.

    3. Because the step of integration is accompanied by creating connections, you have to find suitable associations both in your note archive and in your head.

      This process of creating notes not only helps create a long-term archive of information but also supports the encoding and consolidation of that information in memory.

    4. I pull out the notes and create clusters before I add anything to my digital note archive

      There is some kind of analysis process - maybe a bit like qualitative/content analysis - before creating notes.

  3. Mar 2021
    1. The idea is that by the time I finish reading a book, I've already done the majority of my thinking, reflecting, writing, and synthesising on it. And created a readable summary n+1 other people might find helpful.I also like that the system allows me to take my sweet time with a book. I can read it deeply, think about it properly, and let my notes on it mature without ending up with a huge backlog of them.

      I really like the idea of benefiting from a book while reading it, rather than waiting until the end and then having to do a massive review of my excerpts and notes.

      I don't have a good system for exporting highlights and annotations from books (I read .epub files using Moon Reader on Android), other than exporting them manually.

    2. Whenever I find an article, academic paper, podcast, or video I want to save, I'm able to hit a specific hotkey that snaps up the page title and URL, then formats it into Roam-friendly markdown link.

      I use Zotero to capture these sources because I'm still in the mindset that I want my sources (and literature notes associated with them) captured separately from my permanent notes (which I store in Obsidian).

      I want there to be a relatively significant amount of friction to getting information into my knowledge database because otherwise I'll just dump anything and everything into it.

    1. “What if the algorithm is designed around the goal of saving money? What if different treatment decisions about patients are made depending on insurance status or their ability to pay?”

      This makes no sense. Insurance companies do this all the time and we have no problem with it. Every single decision made in the American health system is based on a patient's ability to pay. Every decision made by management is about saving money.

      I'm baffled as to why we would have a problem with this when it's done by an algorithm.

    2. [I]n discussing designer intent, which is one source of bias, the authors pointed to private-sector examples of algorithms meant to ensure specific outcomes, such as Volkswagen’s algorithm that allowed vehicles to pass emissions tests by reducing their nitrogen oxide emissions during the tests.

      This was illegal and had nothing to do with machine learning at all. Unscrupulous managers at VW made a decision to use a tool to bypass the law.

      This is like saying that we should be wary of all doctors because Harold Shipman was a serial killer. Yes, he was, but that has nothing to do with all of the doctors who aren't.

    3. Machine-learning-based clinical guidance may introduce a third-party “actor” into the physician-patient relationship, challenging the dynamics of responsibility in the relationship and the expectation of confidentiality.

      It will introduce a 3rd party actor into the relationship; tell us why that's a bad thing. Tell us why the doctor alone should enjoy this privilege.

    4. Also, algorithms might be designed to skew results, depending on who’s developing them and on the motives of the programmers, companies or health care systems deploying them.

      Same thing here with managed care. There's strong evidence that private ownership of nursing homes in the United States has led to an increase in mortality, as management uses medication so that it can keep fewer staff on the payroll.

      See More from Less (2019), by Andrew McAfee.

      We want our algorithms to be perfect while we turn a blind eye to what human beings are doing in the health system.

    5. Data used to create algorithms can contain bias that is reflected in the algorithms and in the clinical recommendations they generate.

      The humans who create protocols can contain bias that is reflected in the protocols and in the clinical recommendations they generate.

      Why do we have a problem with the original text but not with this text?

    1. war is peace, freedom is slavery and ignorance is strength.

      From Orwell's 1984.

    2. In essence, the call is to dumb down mathematics so even the worst performers can earn a pass, because that will resolve the perceived problem that minorities struggle with mathematics.

      See "The unintentional racism of anti-racism advocates" (2021, March 3), a really good excerpt of a longer conversation on the same topic.

    1. So how do we build technology with the right politics?

      This is the problem, isn't it? Who decides what the "right politics" are?

    2. otherwise, we're going to zigzag and not really solve the wicked problem

      If it's a wicked problem, it can't be solved. If it's not a wicked problem, then you should at least change the title.

    3. We have not yet started thinking about how will humans react to those machines?

      You haven't, but many others have. It's called science fiction.

    4. They wouldn't say those things if they knew some machine was watching them.

      This isn't true. People say and do stupid things all the time, whether people or machines are watching them. This isn't an issue that will suddenly emerge when we have AI tutors.

    5. How do we guarantee its security over long periods of time

      You can't.

    6. We're just so fascinated by our own small things that we haven't started thinking about the possibility that we're going to fall and hurt ourselves or hurt someone else

      This simply isn't true. There are plenty of examples of institutions and organisations that are already putting resources into examining the ethical implications of AI in a variety of industries.

    7. Issues of student privacy. Do I have the right of looking at your introduction and someone else's introduction and connecting you? That might work out, that might not work out.

      It's not really an issue. Students would be required to give consent in the ToS when they log in. They probably won't read it, but they'll have given consent. If they haven't, then you can't read their introductions. This isn't a difficult problem.

    8. If you think of Facebook … I work with Facebook one-on-one, Facebook is in front of me and on the screen, I'm working on it. But truly, I use it to connect with other people. I don't care about Facebook, but I care about my family and friends and contacts. That's what I'm interested in.

      People think of FB as a place they go to interact with friends. No-one thinks of interacting with FB.

    9. where AI is not helping individual humans as much as AI is helping human-human interaction

      Which is what helps each of those humans. Individually.

    10. is the most accessible and widespread

      OK, now you're losing me. In what world is online education the "most accessible and widespread"?

    11. If you build a model, it automatically sets up the simulation for you. So you don't have to know any programming or mathematical equations. And then you can look at the results of this simulation.

      I'm not sure how this addresses the problem of not having access to physical labs, which is the problem this project is supposed to address.

      We can run simulations from anywhere; they don't need to be in a lab. So why introduce the lab problem?

    12. guarantee that quality

      The contexts in which we find wicked problems aren't amenable to guaranteeing anything. The entire point of wicked problems is that they're unpredictable because of interacting variables that can't be tracked.

      You don't get to define something as a wicked problem and then talk about it as if it isn't one.

    13. solutions to these problems?

      If you're trying to make the point that education is a wicked problem, then these problems don't have solutions.

    14. If you look at almost all of the work coming [out] on so-called cognitive tutors, or intelligent tutors, you put a child in front of a machine and you expect your child to learn.

      OK, but this is what a tutor does. You can't use tutoring as the example that shows how the technology focuses on 1:1 relationships, when the example is only about 1:1 relationships. You could just as easily cite an example of how LMS developers are working on how to connect students with each other.

    15. how does technology help with social learning also?

      I'm sure that Facebook would have something to say here.

    16. There is another tension. What makes education really wicked and then connects with the two points you are making about policy and technology is that, on one side, when learning occurs, it occurs one-to-one. So, there is a teacher, there is a student. On the other side, learning is a fundamentally social process.

      I'm not sure that this is real. We always learn "in community". Sometimes we're engaging with a text (actually, another person; the author), or a resource, or one person, or a community of people. We're always learning in relationship. And technology can mediate that relationship. I don't know if this is a real tension.

    17. So education is a wicked problem because you and I can think of multiple goals which are in conflict with each other. We want education to be accessible. We want it to be affordable by everyone. We want it to be achievable, by which I mean, if I register for a class, I should be able to achieve the goals that I want to achieve. Even if it's accessible and affordable, but I can't achieve it, it's not of much use to me. At the same time, we want learning to be very efficient. I should be able to learn what I need very quickly. And it should be very effective in the sense I can make use of it. The difficulty is we know how to make it very efficient, very effective. All we have to do is to do it individually—one-to-one tutoring—and we know that works very well. But that breaks the point about accessibility [and] affordability, because we cannot have one teacher for every student for every subject in the world. It's just not going to happen. So those two sets of goals are in conflict. Accessibility [and] affordability is in conflict with efficiency and effectiveness. That's what makes it wicked.

      I hadn't thought of education as the more or less constant tension between goals that are at odds with each other.

    18. It's the engineering of learning. And the idea is really quite revolutionary, in a way, in the sense that we have always tried to model how people learn—we already tried to build technologies that can help people learn—but we have never viewed learning, until recently, as something that you could completely engineer.

      Initially I was sceptical about this idea. Don't we need to learn more about how the mind works, before we can engineer learning?

      But then I thought, you can build a bridge that doesn't fall down without knowing why it doesn't fall down. Over time you get better at building bridges as you learn more about the mechanics of it.

      Why shouldn't the same be true of learning?

    19. every student and researcher should have access to artificially intelligent assistants that not only help them study facts and figures, but also collaborate more closely with other humans.

      We will use AI to mediate our relationships with people and ideas.

    1. And this would all happen while doing fewer hours of work than I had been doing before.

      Maybe. But maybe you also needed to do all the previous work to establish a foundation that could then be refined.

    2. although I traveled constantly, I rarely took “vacations” per se. It was more like, “hey, that beach looks like a really beautiful place to check my email for the next two hours.”

      Indeed.

    3. solving problems is like food for your mind. It makes your mind happy

      See Willingham, D. T. (2009). Why Don’t Students Like School? John Wiley & Sons, Ltd.

      We really do enjoy problem-solving.

    4. Micromanaging the hell out of your employees won’t only not make them more productive, they’ll come to hate you and be even less motivated to produce results for you in the future.

      This is true, but unlike the rest of the piece it's not really about time on task.

    5. I banged out a new draft of the book in two months flat.

      I doubt that this would have been possible had you not already had the substance to work with. In Draft No. 4, John McPhee talks about how the first draft is rubbish and it's only when he gets to the fourth draft that it starts getting somewhere.

    1. In the next 10 years, I expect at least five billion people worldwide to own smartphones, giving every individual with such a phone instant access to the full power of the Internet, every moment of every day.

      A bit simplistic, since the "full power of the internet" would depend a lot on other factors, e.g. the stability of the cell network, the availability of cheap electricity, etc. So simply owning a phone is a necessary, but not sufficient, piece of the puzzle.

    2. We believe that many of the prominent new Internet companies are building real, high-growth, high-margin, highly defensible businesses

      This in itself isn't necessarily a good thing. Maybe in terms of generating a profit, but not in terms of contributing anything useful to society.

    1. Experts who study the phenomenon say it's due, at least in part, to the widening role of technology.

      Skill-biased technological change is a shift in the production technology that favours skilled over unskilled labour by increasing its relative productivity and, therefore, its relative demand. Traditionally, technical change is viewed as factor-neutral. However, recent technological change has been skill-biased. (One standard way of formalising this is sketched after the references below.)

      See:

      • Violante, G.L. (2016). Skill-Biased Technical Change. In The New Palgrave Dictionary of Economics, 1–6. London: Palgrave Macmillan UK.
      • Siegel, D. S. (1999). Introduction to Skill-Biased Technological Change. In Skill-Biased Technological Change: Evidence from a Firm-Level Survey.
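
      A standard formalisation from this literature (the canonical CES setup; my summary, not a quote from the sources above): output is produced with skilled and unskilled labour, and the skill premium depends on their relative productivity and relative supply.

      ```latex
      % CES production with skilled (s) and unskilled (u) labour
      Y = \left[ \alpha (A_s L_s)^{\rho} + (1-\alpha)(A_u L_u)^{\rho} \right]^{1/\rho},
      \qquad \sigma = \frac{1}{1-\rho}

      % Relative wage (skill premium) from equating wages to marginal products
      \frac{w_s}{w_u}
        = \frac{\alpha}{1-\alpha}
          \left( \frac{A_s}{A_u} \right)^{\rho}
          \left( \frac{L_s}{L_u} \right)^{\rho - 1}
      ```

      A rise in the relative productivity of skilled labour, A_s/A_u, raises the relative demand for skilled labour (and the premium w_s/w_u) whenever the elasticity of substitution is greater than one.
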
    2. A version of this shift is present in just about any other industry you can name. “As more automation came in, there was more demand on these workers to display social skills. What you now needed was someone who could talk to a customer, who could articulate the problem and problem-solve,” says Raman. But rather than look for candidates with those specific qualifications, “many companies took the easy route of using the four-year college degree as a proxy: ‘I know if they have a degree, they’ll be able to use an iPad. They’ll be able to use Excel’.”

      See Wiblin, R. (n.d.). Economist Bryan Caplan thinks education is mostly pointless showing off. We test the strength of his case. Retrieved March 5, 2021, from https://80000hours.org/podcast/episodes/bryan-caplan-case-for-and-against-education/

    1. If the exam is designed in a way that accounts for collegial discourse and use of resources that problem is solved

      I agree. We change what it means "to cheat".

      I always thought it was odd that you can walk past 2 or 3 healthcare professionals talking about a difficult clinical case and you think, "What a great example of professional development and collegiality". But when we see students doing the same thing, we call it cheating.

    1. if a student has nowhere in their university assessment a chance to recognise their work as socially useful, and to see others recognise it as such, then this is a very diminished educational experience

      Again, see the work of Freire and Giroux for more insight.

    2. I extend the notion of student involvement to think in terms of the whole student: to think philosophically about the relationship between assessment and students’ self-realisation. The issue thus becomes, the extent to which assessment is involved in our students: how it contributes, or not, to their wellbeing, personal and intellectual growth and their development as constructive members of society.

      I'm increasingly drawn to the deep insights of Paulo Freire (Pedagogy of the Oppressed) and Henry Giroux (On Critical Pedagogy), among many others. There is a strong body of literature that presents learning and teaching as a practice that revolves around relationship.

    3. students must understand an assessment system if they are expected to flourish within it

      Think about professional development, where it would be obscene to tell someone that they're not allowed to know anything about the development process.

    4. considering the possibility that a different assessment method may have enabled a better engagement with knowledge.

      Maybe students do better in these assessments because we're getting out of their way and giving them space to learn.

    5. many universities have pursued technocratic solutions that reproduce the trusted orthodoxy of the time-limited, unseen exam as closely as possible.

      Nice presentation by Jesse Stommel on the rise of surveillance: Stommel, J. (2020). Against Surveillance. https://www.beautiful.ai/player/-MNUceT2mRb7ZhRJ_hL7/Against-Surveillance

    6. but fundamental principles not rethought.

      An example of what I thought was a rethinking of assessment: Killam, L. (2020, April 6). Exam Design: Promoting Integrity Through Trust and Flexibility. Insights from Nurse Killam. http://insights.nursekillam.com/reflect/exam-design/

    7. The assessment challenges induced by Covid-19 opened many possibilities to fundamentally rethink why and how we assess, but I see little evidence of this actually happening.

      I think most people are hoping for things to end so that they can just go back to "normal".

    1. we reward people who solve problems while ignoring those who prevent them in the first place

      It's hard to know when a problem has been prevented.

  4. Feb 2021
    1. first considered how we could write a piece for the public later

      This is an interesting suggestion: begin by planning how you will communicate your idea to the public.

    2. the first step in improving academic writing is to learn to reduce the jargon academics use and express concepts clearly

      See Pinker, S. (2015). The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century (Reprint Edition). Penguin Books.

      Also, Thomas, F.-N., & Turner, M. (2011). Clear and Simple as the Truth: Writing Classic Prose (2nd Edition). Princeton University Press.

    1. Greater reach leads to far greater exposure. This can take the form of comments from academics around the world, invitations to collaborate, and TV and radio interviews.

      Possible, but unlikely.

    1. Academics need to start playing a more prominent role in society instead of largely remaining observers who write about the world from within ivory towers and publish their findings in journals hidden behind expensive digital paywalls.

      This is another thing we need to be better at: publishing in open access journals.

    2. This can help develop creative non-fiction writing skills.

      We don't really think of ourselves as non-fiction writers.

    3. Academics have no choice but to go along with this system. Their careers and promotions depend almost entirely on their journal publication record, so why even consider engaging with the general public?

      Well, there are also good reasons to believe that blogging and other forms of interaction on social media can enhance an academic's formal research output.

    4. Universities also don’t do a great deal to encourage academics to step beyond lecture halls and laboratories. There are globally very few institutions that offer incentives to their academics to write in the popular media, appear on TV or radio, or share their research findings and opinions with the public via these platforms.

      This demonstrates a lack of understanding on the part of institutions. By being more public and engaged (through, for example, blogging and social media interaction), academics do add value to their institutions through their affiliation.

    5. Some academics insist that it’s not their job to write for the general public. They suggest that doing so would mean they’re “abandoning their mission as intellectuals”. They don’t want to feel like they’re “dumbing down” complex thinking and arguments.

      And this is why society is increasingly hostile to academics.

    6. many potentially world altering ideas are not getting into the public domain

      Alternatively, we should ask whether the work being done, and never being read, is of any use at all.

      When we consider the number of duplicated studies, or studies that don't contribute anything to the broader literature, we should probably acknowledge the possibility that most published research isn't very useful.

    7. an average journal article is “read completely by no more than ten people”. They write: Up to 1.5 million peer-reviewed articles are published annually. However, many are ignored even within scientific communities – 82% of articles published in humanities [journals] are not even cited once.

      When you think about the enormous amount of time and intellectual energy that goes into the process of getting the project proposal accepted, gathering data, analysing it, writing it up, and getting it through the peer review process, this seems like an awfully big waste of time.

      How is this good for anyone?

    8. Research and creative thinking can change the world. This means that academics have enormous power. But, as academics Asit Biswas and Julian Kirchherr have warned, the overwhelming majority are not shaping today’s public debates.

      I have a real concern that my "value" to my institution lies in how many other academics cite my work. It's like we all live in a bubble where we're just talking to each other.

      Surely it matters more if my work is useful to the much larger public?

    1. forced

      How about "...given the opportunity..."?

    2. By forcing students to write ‘publicly’, their writing rapidly improves

      I don't love the phrasing here: if this is "forcing", then everything we ask our students to do is "forced", in the sense that it is a curriculum requirement. It doesn't fit with building a teaching-learning relationship, or with learning being student-centred.

    3. there is no waste – what starts as a blog, ends as an academic output

      You could also take the position that the blog post is itself an academic output, albeit one that the academy doesn't (yet) formally recognise.

    4. By building blogging, Twitter, flickr, and shared libraries in Zotero, in to our research programmes – into the way we work anyway – we both get more research done, and build a community of engaged readers for the work itself.

      This is linked to the concept of an open scholar: Burton, G. (2009, August 11). The Open Scholar. Academic Evolution. https://www.academicevolution.com/2009/08/the-open-scholar.html

      The open scholar is someone who makes their intellectual projects and processes digitally visible and who invites and encourages ongoing criticism of their work and secondary uses of any or all parts of it–at any stage of its development.

    5. when journal articles proliferate beyond number because they serve the needs of big publishing, rather than academic dialogue – we need to think harder about how we do the job of the humanities

      This also serves the academy, though, which is remunerated for the publications of its employees. While citations "count" for the individual academic, it's the number of publications that "counts" for the institution.

    1. Don't bend yourself to fit the world.

      The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. - George Bernard Shaw

    1. most students did not report study strategies that correlated with their VARK assessment, and that student performance in anatomy was not correlated with their score in any VARK categories. Rather, some specific study strategies (irrespective of VARK results), such as use of the virtual microscope, were found to be positively correlated with final class grade. However, the alignment of these study strategies with VARK results had no correlation with anatomy course outcomes. Thus, this research provides further evidence that the conventional wisdom about learning styles should be rejected by educators and students alike.

      It's unusual for researchers to make such definitive claims about the outcome of a study.

    1. If we write blogs, we are told, we can communicate our research more effectively. Blogs enhance impact, they are a medium for public engagement. The advocacy goes on… Blogs (and other social media) can point readers to our (real) academic publications, particularly if they are held on open repositories. Blogging it seems is a kind of essential add-on to the usual academic writing and academic publication that we do.

      i.e. we can use blogs to point readers to our "real" academic work.

    2. Blogging helps you to get to the point The blog post is a small text, not an extended essay.  It’s simply not possible to introduce lots and lots of different ideas and make multiple points in a post of a thousand words or less.

      Branchaud, J. (2020, February 27). Write More, Write Small. DEV Community.

    3. Of course, some people do argue – and I’m in this camp – that blogging is in and of itself academic writing and academic publication. It’s not an add-on. It’s now part and parcel of the academic writing landscape.  As such, it is of no less value than any other form of writing. Even though audit regimes do not count blogs – yet – this does not lessen its value. And therefore those of us who engage in bloggery need to stop justifying it as a necessary accompaniment to the Real Work  of Serious Academic Writing. Blogs are their own worthwhile thing.

      i.e. blogs are the academic work.

    1. How do you find time to write? I’ve become fascinated by this question in recent months. Implicit within it is an understanding of ‘writing’ which I’m coming to see as deeply problematic. It treats the creative activity of writing as a matter of temporal budgeting. But how much time does writing take? It obviously depends on what we mean by ‘writing’

      See Golash-Boza, T. (2010, September 4). Ten Ways You Can Write Every Day. Get a Life, PhD, for ten examples of what "writing" might include.

    2. perhaps it’s getting into the routine of responding to ideas in this way as and when you encounter them.

      Write when the idea arrives instead of simply making a note of it.

      This happens to me all the time. An idea arrives and I'm excited by it. Maybe I'm busy cooking supper or I'm in the shower, so I can't immediately write it down. But I spend 15 minutes exploring it. It's still exciting.

      I finish what I'm doing and write a paragraph to keep track of the idea.

      But when I return to it in a few days it seems a pale replica of the original idea. Less relevant and interesting.

      And I invariably end up deleting the paragraph.

    1. By writing regularly, and for shorter periods (2-3 hours a day),

      This is my ideal; I try to write from 8-10 every morning, Mon-Fri.

    2. One study suggests that academics who write daily and set goals with someone weekly write nearly ten times as many pages as those without regular writing habits.

      See Silvia, P. J. (2018). How to Write a Lot: A Practical Guide to Productive Academic Writing (Second Edition). APA LifeTools.

    3. An important part of being an academic researcher is remembering that you are an author.

      I don't think that many academics think of themselves as authors.

    1. This left branching sentence forces the brain to ‘hold’ a lot of information about what the academic managers are doing before applying it to the action. It’s the kind of sentence that forces the reader to go back to the start after they have finished in order to really understand what is going on.

      For a more detailed discussion of these points, see Pinker, S. (2015). The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century (Reprint Edition). Penguin Books.

    1. academic colleagues find out I’m blogging?

      Then they'll be jealous.

    2. I try to keep a note of what I read, which I probably would not do if I was not writing a blog

      I've recently shifted into a frame of mind where, if I'm reading something that isn't obviously news or entertainment, I should be making notes. If I'm not making notes, then I'm probably wasting my time reading that particular piece of content.

    3. admitting and correcting mistakes does you no harm

      If anything, it's vital; journals aren't going to actively seek out corrections.

    4. But I should be doing research, or reading papers, rather than writing blogs. The main activity that blogging has displaced for me is watching TV.

      It's true that blogging shouldn't take a lot of time. It also shouldn't purely be something that you do in your own time. It's academic work, and the benefits of blogging accrue to the institution as well: directly, through mention of the institution, and indirectly, through the additional skills and networks developed by the blogging academic.

      Having said that, it does take time, which needs to fit in somewhere.

    5. jargon comes so naturally

      See Pinker, S. (2014, September 26). Why Academics Stink at Writing. Chronicle of Higher Education, 16.

    6. writing about contentious issues like austerity it is perhaps too easy to be rude

      I think this is good advice; you can get to a place where you forget that others are reading what you write. And tone, especially for people who don't know you in person, can be hard to convey in your writing.

    7. I do not fancy getting into online debating contests

      No-one does. In more than 10 years of blogging I've never had this happen. Of course, you can move into a space where it's more likely to happen, but I think that would need to be a choice you're making.

    8. I thought my posts would mainly be a useful resource for my students. Things I did not have time to say or elaborate on in lectures

      Blog posts as an addendum to your lectures.

    1. Rather than reducing scholarship to blogging

      This is a straw man; I don't believe that any academics who share their ideas on blogs have ever been concerned that their scholarship is being "reduced" to anything.

    2. Far from subjugating research to journalism

      Was anyone actually saying this? Perhaps an example would give the claim some credibility.

    3. I share many of the fundamental concerns which I hear expressed about impact and public engagement – particularly the entirely justified fear that this agenda, as well as the broader changes within higher education within which it is unavoidably implicated, threaten the autonomy of academic work. I think there’s a risk that the production of academic knowledge (in the broadest sense of the term) becomes subjugated to the contingencies of the political cycle, particularly as its mediated by funding bodies and other intermediaries.

      Is the author really saying that it's problematic for academics to share their work as a contribution to the public discourse? And that this threatens academic autonomy? Or am I misunderstanding what's being said? I can't imagine how an academic who shares their work in a more accessible format is putting their autonomy at risk.

    4. space between academic research and journalism

      Maybe it would be useful to explain what the author means by "journalism". Because if it means writing for the mainstream media (e.g. the New York Times), then academics have been journalists for a long time. Maybe I'm missing something, but I'd have liked the author to go a bit deeper into these two spaces and the continuum between them.

    5. lead many, when confronted with the advocation of academic blogging, to see ‘blogging’ as corrupting ‘academic’

      This hasn't aged well.

    6. does academic blogging dangerously blur the boundary between research and journalism?

      I have no idea why this is "dangerous", nor why this blurring is problematic.

    1. Find the Most Creative People in Your Field and Steal From Them

      Kleon, A. (2012). Steal Like an Artist: 10 Things Nobody Told You About Being Creative (Illustrated edition). Workman Publishing. See the Brainpickings article on the book.

    2. There’s almost a direct correlation between how much someone created and how original their work ended up being.

      If you want to have good ideas, start by having lots of ideas. I think that Linus Pauling said this (or something like it).

    3. Art shares a lot of similarities with undervalued stocks. At first, when people hear of a novel idea, a lot of them will laugh it off as ridiculous, outlandish, unnecessary, or just plain dumb. It’s here that the artist “buys” the idea at its low value, then finds a way to refurbish and “flip it” into something of higher value that the world understands and appreciates.

      Look for ideas that are undervalued and invest resources (time, money, energy) in them, so that when their true value is appreciated, you're in a good position to get a return on the investment.

    4. Steve Jobs didn’t invent the personal computer. He didn’t invent the mouse or graphical interfaces. He didn’t invent MP3 players or smartphones. He didn’t invent tablets or laptops or wearables. He literally invented nothing. He just did old things better

      Dixon, C. (2020, October 18). Doing Old Things Better Vs. Doing Brand New Things. Andreessen Horowitz. https://a16z.com/2020/10/18/doing-old-things-better-vs-doing-brand-new-things/

    5. Focus on Doing the Work, Not Flashes of Inspiration

      Johnson, S. (2011). Where Good Ideas Come from: The Seven Patterns of Innovation. Penguin Books.

      Syed, M. (2015). Black Box Thinking: Why Most People Never Learn from Their Mistakes--But Some Do.

      Pressfield, S., & Godin, S. (2015). Do the Work: Overcome Resistance and Get Out of Your Own Way. Black Irish Entertainment LLC.

      Some great books on creativity and doing the hard work that comes after inspiration.

    6. the second part of creative work: adding value

      Try to be useful.

    1. When I’m writing on non-housing topics is that “academic blogging”? Or just an academic blogging?

      Identity again. Does your identity influence what you're writing about? Or does what you're writing about influence your identity?

    2. Some academic bloggers leaven the mix by interspersing their ‘academic’ posts with more personal posts about family, biography or travel. I’m not at all averse to that approach, but it isn’t really my style.

      I also find it challenging to share personal information on my blog.

    3. And if the world is going to grasp what’s happening then our writing needs to be digestible.

      You need to use different language when writing on your blog, compared to writing papers. You don't need references. You should write in first person. Spell checking is optional.

    4. An academic blogger may feel constrained to topics only related to his or her academic research, whereas a blogger who is also an academic is free to explore wider fields of discussion.

      This idea of "identity" is important. Many academics don't even think of themselves as authors, let alone bloggers.

    1. To achieve a position in the top tier of wealth, power and privilege, in short, it helps enormously to start there. “American meritocracy,” the Yale law professor Daniel Markovits argues, has “become precisely what it was invented to combat: a mechanism for the dynastic transmission of wealth and privilege across generations.”

      Really good interview with Markovits and Sam Harris on the topic of meritocracy.

    1. the best solution for creative blocks isn’t to try to think in front of an empty page and simply wait for thoughts to arrive, but actually to continue to speak and write (anything), trusting this generative process

      I believe that this is something that I experience fairly regularly (but I have no control group, so it's hard to be certain); I write and read and write and read and creative ideas come. I don't go looking for them and I don't wait for them to arrive.

    2. It’s not thought that produces speech but, rather, speech is a creative process that in turn generates thought

      This is a bit like Feynman's suggestion that teaching a concept to someone else is an excellent way to learn it yourself.

    3. Speaking out loud is not only a medium of communication, but a technology of thinking: it encourages the formation and processing of thoughts.

      Talking out loud is a way to develop your thinking. You don't speak fully developed thoughts...you develop your thoughts while speaking.

      See also Matuschak, A., & Nielsen, M. (2019). How can we develop transformative tools for thought? and Victor, B. (2014, December 22). The Humane Representation of Thought.

    1. 3.) Who can cite blogs? Okay, now here comes the real hypocrisy. Although I cite blogs within academic writing, I explicitly forbid my undergraduate students from doing so. Their papers must include only peer-reviewed work unless I specifically approve of a non-peer-reviewed source. Oh, hi Privilege, nice to see you again. The key difference between my students and me (besides, of course, our taste in music and repertoire of Seinfeld quotes), is that I have a Ph.D. and they are working on Bachelor’s degrees. That is, we are differentiated by levels of education, and having a higher level of education gives me the privilege and power to determine the value of piece of writing, and denies this power and privilege to those with less formal education. To say it out loud feels like the academic equivalent of “Because I Said So.” At the same time, I have been trained in a particular field for several years. I have read the jargon-ridden journal articles, trudged through the 5-chapters-too-long books, and even contributed a few pieces of my own. Moreover, I have been a peer-reviewer, charged with making formal decisions about what is, and is not, a publishable piece of research. And so I take this training and I use it, again imperfectly, as a privilege, allowing myself to discern quality while urging others to wait until they have enough knowledge and practice to make such discernments. What “enough” is, however, remains quite nebulous.

      I agree with this general argument. Again, not all opinions are equal.

    2. one can counter the former point by noting the poor quality of some published articles, problematizing the false-security that comes along with a legitimizing label of “peer-reviewed.”

      Peer review is, in itself, not a guarantee of quality.

    3. What would such an anything-goes literature review look like?

      What indeed? Why not go a bit further and explore what, in fact, this kind of review might look like?

    4. More ambiguous, of course, is the question of using content from these blogs, in their own right, as building blocks or even a foundation for, theoretical arguments.

      Blog content can be used as "data" in an analysis, but we should be more careful when using the same content to develop theoretical arguments. I guess that's because developing theory means making knowledge claims about the world, which calls for more caution.

    5. Some of their work is published only in blog form

      Blog posts can be great for pushing the boundaries of thinking and for presenting arguments that fall outside the scope of traditional academic practice.

    6. Should writers cite blog posts in formal academic writing (i.e. journal articles and books)?

      I suppose it depends on who you cite? Not all opinions are equal.

    1. AI agents can acquire novel behaviors as they interact with the world around them and with other agents. The behaviors learned from such interactions are virtually impossible to predict, and even when solutions can be described mathematically, they can be “so lengthy and complex as to be indecipherable,” according to the paper.

      The sheer number of interacting variables that you'd need to track makes it impossible to make any accurate predictions.

    2. it might be killing three times the number of cyclists over a million rides than another model

      Fair enough. But then we should also make the counter-argument...how many motorists did the self-driving car save in the same period?

      I know that this is a tricky ethical scenario and I'm not trivialising it, but these arguments are overly simplistic and one-sided.

    3. Say, for instance, a hypothetical self-driving car is sold as being the safest on the market. One of the factors that makes it safer is that it “knows” when a big truck pulls up along its left side and automatically moves itself three inches to the right while still remaining in its own lane. But what if a cyclist or motorcycle happens to be pulling up on the right at the same time and is thus killed because of this safety feature?

      I think that an algorithm that's "smart" enough to move away from a truck is also "smart" enough to know that it cannot physically occupy the same space as the motorcycle.

    4. A co-author of the paper, Alan Mislove of Northeastern University, is among a group of academic and media plaintiffs in a lawsuit challenging the constitutionality of a provision of the Computer Fraud and Abuse Act that makes it criminal to conduct research with the goal of determining whether algorithms produce illegal discrimination in areas like housing and employment.

      So you're a criminal if you're doing research to determine if an algorithm is doing something criminal? That seems...wrong.

    1. We are all one giant human-machine system,” says Obradovich. “We need to acknowledge that and start studying it that way.

      A socio-technical system.

    2. move away from viewing AI systems as passive tools that can be assessed purely through their technical architecture, performance, and capabilities. They should instead be considered as active actors that change and influence their environments and the people and machines around them.

      Agents don't have free will but they are influenced by their surroundings, making it hard to predict how they will respond, especially in real-world contexts where interactions are complex and can't be controlled.

    3. propose to create a new academic discipline called “machine behavior.” It approaches studying AI systems in the same way we’ve always studied animals and humans: through empirical observation and experimentation

      We do this with people all the time: we observe their behaviour and then make inferences about their intentions.

    4. As algorithms have come to mediate everything from our social and cultural to economic and political interactions, computer scientists have attempted to respond to rising demands for their explainability by developing technical methods to understand their behaviors.

      It's completely bizarre that we have such a high standard for trusting the predictions of algorithms when, up until recently, we trusted human beings to "mediate everything from our social and cultural to economic and political interactions" and had absolutely zero expectation that we needed to understand the reasoning processes in those human beings.

    5. We've developed scientific methods to study black boxes for hundreds of years now, but these methods have primarily been applied to [living beings] up to this point

      It's called psychology.

    1. Koo's discovery makes it possible to peek inside the black box and identify some key features that lead to the computer's decision-making process.

      Moving towards "explainable AI".
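
      (A rough, illustrative sketch of this kind of black-box probing follows these notes.)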

    2. Neural nets learn and make decisions independently of their human programmers. Researchers refer to this hidden process as a "black box." It is hard to trust the machine's outputs if we don't know what is happening in the box.

      Counter-argument: Why do we trust a human being's decisions if we don't know what is happening inside their brain? Yes, we can question the human being but we then have to trust that what they tell us about their rationale is true.
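
      A minimal sketch of what probing a black box empirically can look like: perturb one input feature at a time and measure how much the model's output shifts. This is not Koo's method (the article doesn't spell that out), and the dataset, model, and occlusion_importance function below are hypothetical stand-ins chosen only to make the idea concrete.

      ```python
      # Illustrative only: occlusion-style probing of a black-box classifier.
      # Replace each feature with its column mean and measure how much the
      # predicted probability shifts; large shifts suggest influential features.
      import numpy as np
      from sklearn.datasets import load_breast_cancer
      from sklearn.ensemble import RandomForestClassifier

      X, y = load_breast_cancer(return_X_y=True)
      model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

      def occlusion_importance(model, X):
          """Score each feature by the mean absolute change in the positive-class
          probability when that feature is replaced by its column mean."""
          reference = model.predict_proba(X)[:, 1]
          baseline = X.mean(axis=0)
          scores = np.zeros(X.shape[1])
          for j in range(X.shape[1]):
              X_occluded = X.copy()
              X_occluded[:, j] = baseline[j]  # "occlude" feature j
              scores[j] = np.mean(np.abs(reference - model.predict_proba(X_occluded)[:, 1]))
          return scores

      scores = occlusion_importance(model, X)
      print("Most influential features:", np.argsort(scores)[::-1][:5])
      ```

      Gradient-based attributions, SHAP and similar methods are more sophisticated, but the basic move is the same: treat the model as a black box and infer which inputs its decisions depend on from its behaviour.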

    1. You see it in education. We have top-end universities, yes, but with the capacity to teach only a microscopic percentage of the 4 million new 18 year olds in the U.S. each year, or the 120 million new 18 year olds in the world each year. Why not educate every 18 year old? Isn’t that the most important thing we can possibly do? Why not build a far larger number of universities, or scale the ones we have way up?

      Higher education is still an elite institution, for the elite. Despite all the rhetoric about opening up access, the fundamental structure of universities prevents it: we can't scale learning.

    2. A government that collects money from all its citizens and businesses each year has never built a system to distribute money to us when it’s needed most.

      Implementing a universal basic income would force governments to build exactly such a system, one through which money can flow to citizens when it's needed most.