893 Matching Annotations
  1. Aug 2021
    1. Across the world, universities have become obsessed with their position in global rankings (such as the Shanghai Ranking and Times Higher Education's list), even when such lists are based on what are, in our view, inaccurate data and arbitrary indicators.

      I think there's a difference between a genuine concern for the question of how we evaluate the impact of research (after all, this has very important real-world implications), and the competition among universities to increase their ranking. These are two different things entirely. We can have evaluations of research impact without university rankings. I think we need to be careful not to conflate the two.

    2. Lately, metrics related to social usage and online comment have gained momentum — F1000Prime was established in 2002, Mendeley in 2008, and Altmetric.com (supported by Macmillan Science and Education, which owns Nature Publishing Group) in 2011.

      See altmetrics.

    3. The problem is that evaluation is now led by the data rather than by judgement.

      But what is 'judgement' based on? I feel like there's a specific definition of 'data' being used here that's not made explicit. We all rely on data, albeit in different forms and different weightings, to make judgements.

    1. Assessing staff solely on the basis of quantitative metrics is never acceptable, no matter what type of metric is being used

      See Goodhart's Law and some background on why these kinds of measurements are difficult.

    2. The two metrics used were a five-year average of research income compared with that of researchers at similar universities, and a score called field-weighted citation impact, which measures how often research papers are cited relative to the rest of the papers in their field.

      I'm concerned with the funding requirement more than the citation one. Producing high quality research that impacts society is one thing, but not all high quality research requires funding grants. By mandating that funding is tied to job security, you encourage researchers to focus on grant applications - which is very time consuming - rather than other potentially valuable endeavours e.g. postgraduate research capacity development.

    3. membership of external bodies

      How is 'membership' a factor that could help you keep your job?

    4. The university declined to specify what these metrics were.

      Indeed. On the one hand, this comes across as suspicious. On the other, there may be concerns about researchers gaming the system. My money's on the former.

    5. The debate highlights broader unease about the use of metrics in science as more data are collected to assess the quality of researchers’ work. Some say these quantitative measures of performance concentrate too much on publication records while failing to acknowledge other types of work, including teaching, committee work and peer review.

      This is true. However, researchers and institutions all work within an ecosystem that prioritises publication and funding grants. No single institution can change this. The whole system is undermined by perverse incentives.

    6. broader unease about the use of metrics in science

      I agree that we shouldn't use inappropriate and unreliable metrics to make decisions about who to fire. However, the use of metrics in science must surely be something we keep?

    1. researchers are already encouraging improved practices in research assessment

      See the UK Royal Society's Résumé for Researchers.

    2. The outputs from scientific research are many and varied, including: research articles reporting new knowledge, data, reagents, and software; intellectual property; and highly trained young scientists.

      This still seems to have a focus on the traditional academic outputs that would fit into the structure of a journal. Does this also include recognition of creative outputs e.g. from the Arts and Humanities?

  2. Jul 2021
    1. Short interview that covers the findings of a systematic review that aimed to identify the number of studies trying to replicate the outcomes of clinical decision support systems.

      Article being discussed in this piece: Coiera, E., & Tong, H. L. (2021). Replication studies in the clinical decision support literature-frequency, fidelity, and impact. Journal of the American Medical Informatics Association: JAMIA, ocab049. https://doi.org/10/gmb35n

    2. What we find as we crawl out of the hole is the challenge: Will we find that there are things that we believe that aren’t true? That’s quite possible.

      We believe that machine learning and clinical decision support is going to help us make fewer mistakes. What if we find out that's not true?

      On the other hand, what if we find out that human beings are really poor judges of what's most appropriate in complex clinical situations, and that algorithms make better decisions? Will we ask our doctors to stand aside and make way for algorithms?

    3. One of the responses to the last paper was, “Oh, my God, all you’ll do is end up flooding the literature with cheap useless replications” — well, that’s not a problem right now. If it was the case that every Ph.D. had done replications as part of training, yes, we’d have many hundreds of them. Fantastic. You could have a special open access journal called informatics replication, for goodness’ sake, to put the less major ones in. These are just non-problems.

      Are they worried that we're going to run out of paper? Not every replication study needs to be published in a high-end journal. Start an open-access archive server and include replication studies as part of postgraduate training.

    4. We don’t have a culture that recognizes or rewards replication work, so there’s no good practice around what good replication looks like. Whereas I think in psychology, for example, you’ve seen great progress since the early concerns. But that took a lot of really public failures of studies.

      I like the analogy to engineering. In the early days of engineering as a discipline there would have been many failures. Now, building bridges that don't fall down isn't an aspiration; we simply assume that we know how. But it took many fallen bridges for us to learn how to do this consistently. We're at a similar stage with machine learning in healthcare; we're going to see a lot of failures as we try to create a system for using machine learning in health.

    5. It’s a near universal challenge. The question is whether those differences are a foundational barrier to science, or whether they’re manageable. And I think most people who look at this area would say, look, they’re manageable.

      If we can't test interventions across multiple sites then we're in a lot of trouble when it comes to this technology.

    6. The first reaction that’s really strong is, “Well, you can’t do replication in digital health because it’s all so special.” To which I say, I’m sorry, this is not the case. It’s a good excuse, but it won’t hold. And also, if it was the case, my God, what a disaster it would be, because that would mean there was no science to what we’re doing.

      It's tempting to say that each institution is so contextually different that any system you use has to be modified to fit that context. The argument, then, is that you can't test that system across multiple sites.

    7. the field has yet to build a culture that values scientific best practices

      It's all about being first to publish.

    8. Over six months, he and colleague Huong Ly Tong dredged up all the journal-published papers they could find analyzing the outcomes of clinical decision support systems. They found 4,063 — and of those, only 12 were replications.

      There are more and more studies that take a critical position with respect to the methods used to report outcomes of machine learning in healthcare.

      See also the article below, that found serious methodological flaws in reporting the outcomes of medical imaging studies.

      Aggarwal, R., Sounderajah, V., Martin, G., Ting, D. S. W., Karthikesalingam, A., King, D., Ashrafian, H., & Darzi, A. (2021). Diagnostic accuracy of deep learning in medical imaging: A systematic review and meta-analysis. Npj Digital Medicine, 4(1), 1–23. https://doi.org/10/gkqxjg

    9. Most recently, work pointed to flaws in an algorithm to predict the risk of sepsis, integrated into Epic’s electronic health record platform. A recent STAT investigation found those shortcomings extend to other Epic algorithms, including those used by hospitals to predict how long patients will be hospitalized or who will miss appointments.

      The STAT news article linked to here is behind a paywall, but this short piece by The Verge covers some of what's going on.

    1. Despite the problems with the algorithms, Epic incentivizes hospitals to adopt them.

      To be honest though, this seems a lot like pharmaceutical companies 'encouraging' physicians to promote certain medications that have limited utility.

    2. Like many other groups that build health algorithms, Epic doesn’t publicly share details of how the algorithms are built. Researchers at hospitals that use Epic are able to scrutinize the tools, but any investigations are challenging: they can’t disclose proprietary information

      Imagine if this approach were taken with cardiac or oxygen saturation monitors, i.e. if we weren't allowed to see how they work. Why do these companies get a pass?

    1. one could imagine a world in which failure to produce a machine-readable structured procedure report precluded being paid at all

      This is probably unrealistic in most places in the world.

    2. If we had truly robust standards for electronic data interchange and less anxiety about privacy, these kinds of data could be moved around more freely in a structured format. Of course, there are regional exchanges where they do. The data could also be created in structured format to begin with.

      This does exist. Fast Healthcare Interoperability Resources (FHIR; pronounced 'fire') is an open standard that describes data formats and elements (the 'resources' in the name), as well as an application programming interface (API) for exchanging electronic health records.

      See more here: https://hl7.org/fhir/
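
      As a rough illustration of how the standard works in practice, a FHIR resource (here a Patient record) can be retrieved over a plain REST API. This is a minimal sketch; the server base URL and patient ID below are hypothetical placeholders, not a real deployment.

      ```python
      # Minimal sketch: retrieving a FHIR Patient resource over the standard REST API.
      # The base URL and patient ID are hypothetical placeholders.
      import requests

      FHIR_BASE = "https://fhir.example.org/baseR4"  # hypothetical FHIR server

      def get_patient(patient_id: str) -> dict:
          """Fetch a single Patient resource as JSON."""
          response = requests.get(
              f"{FHIR_BASE}/Patient/{patient_id}",
              headers={"Accept": "application/fhir+json"},
          )
          response.raise_for_status()
          return response.json()

      patient = get_patient("example-123")
      # Every resource carries its type plus a structured set of elements.
      print(patient.get("resourceType"), patient.get("name"))
      ```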

    1. Supply chains—starting with the factories upstream, running through the ports and rail yards and warehouses, and ending with retail—are large and complex systems. These systems need to be adaptive, and yet the news shows us they are not. 

      We need supply chains to route around problems in the same way that packets on the internet route around bottlenecks and broken connections.

    2. Most decision-making systems have trouble with unexpected shifts in data. They are trained to make decisions in some contexts, and they break when something unforeseen happens. They are brittle.

      Rule-based systems are brittle and cannot handle changes in context.

    3. If we say someone is smart, we rarely mean that they can recognize faces. We very often mean that they know what to do to reach their goal.

      We have certain intuitive notions of what it means to 'be intelligent'.

    4. Most machine learning algorithms are good at perceptive tasks that would take a person under a second to perform, such as recognizing a voice or a face. But deep reinforcement learning can learn tactical sequences of actions, things like winning a board game or delivering a package.

      'Simple' recognition vs sequences or patterns of behaviour.
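
      One way to see the difference: in reinforcement learning, the value of an action is learned from the reward plus the estimated value of the state it leads to, so the agent is credited for multi-step tactics rather than a single act of recognition. The sketch below is a deliberately simple tabular version, with an entirely hypothetical `env` interface and invented hyperparameters.

      ```python
      # Minimal tabular Q-learning sketch: an action's value comes from the reward
      # *and* the value of the next state, which is what lets the agent learn
      # multi-step tactics rather than one-shot classifications.
      # `env` is a hypothetical environment with reset() and step(action) methods.
      import random
      from collections import defaultdict

      ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
      ACTIONS = [0, 1, 2, 3]
      Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term value

      def choose_action(state):
          if random.random() < EPSILON:                      # explore
              return random.choice(ACTIONS)
          return max(ACTIONS, key=lambda a: Q[(state, a)])   # exploit

      def learn_episode(env):
          state, done = env.reset(), False
          while not done:
              action = choose_action(state)
              next_state, reward, done = env.step(action)
              best_next = max(Q[(next_state, a)] for a in ACTIONS)
              # Credit this action with the reward plus the value of where it leads.
              Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
              state = next_state
      ```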

    1. the STARD guidelines were updated in 2015 and are a set of 30 essential items that should be part of any diagnostic accuracy study; from sample size calculations to cross tabulation of results. The meta-analysis found that only 24/273 studies mentioned adherence to guidelines (interestingly the authors don't say if they actually were adherent or not) or contained a STARD flow diagram.

      Note that this doesn't mean that the studies were inaccurate, or that authors are deceiving readers. It only means that we can't be super confident in the findings and conclusions.

    2. Unfortunately the findings from the second point almost completely undermine the first, and so that's what I'll be focusing on.

      If authors don't report their methods accurately, we can't have confidence in the findings and conclusions.

    1. A deep learning model was used to render a prediction 24 hours after a patient was admitted to the hospital. The timeline (top of figure) contains months of historical data and the most recent data is shown enlarged in the middle. The model "attended" to information highlighted in red that was in the patient's chart to "explain" its prediction. In this case-study, the model highlighted pieces of information that make sense clinically.

      This kind of articulation of "reasoning" is likely to help develop trusting relationships between clinicians and AI.

    2. An “attention map” of each prediction shows the important data points considered by the models as they make that prediction.

      This gets us closer to explainable AI, in that the model is showing the clinician which variables were important in informing the prediction.
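
      A toy illustration of the idea (the chart items and scores below are invented, and a real model learns the scores): attention weights are just normalised scores per input, and the highest-weighted inputs are what gets surfaced to the clinician as the "explanation".

      ```python
      # Toy sketch of an attention-style explanation: score each input, normalise
      # with softmax, and surface the highest-weighted items. Feature names and
      # raw scores are invented for illustration.
      import math

      def softmax(scores):
          exps = [math.exp(s) for s in scores]
          total = sum(exps)
          return [e / total for e in exps]

      chart_items = ["lactate 4.1 mmol/L", "temp 36.8 C", "note: 'possible sepsis'", "age 54"]
      raw_scores = [2.3, 0.1, 1.8, 0.4]          # produced by the model in practice

      weights = softmax(raw_scores)
      ranked = sorted(zip(chart_items, weights), key=lambda x: -x[1])
      for item, w in ranked[:2]:                  # the items the model "attended" to most
          print(f"{w:.2f}  {item}")
      ```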

    3. We emphasize that the model is not diagnosing patients — it picks up signals about the patient, their treatments and notes written by their clinicians, so the model is more like a good listener than a master diagnostician.

      This sounds a lot like a diagnosis to me. In what way is this not a diagnosis?

    4. Before we could even apply machine learning, we needed a consistent way to represent patient records, which we built on top of the open Fast Healthcare Interoperability Resources (FHIR) standard as described in an earlier blog post.

      FHIR is what enabled scalability.

    1. No matter an AI's final Turing test score, a script built to imitate human conversation or recognize patterns isn't something we'd ever describe as being truly intelligent. And that goes for other major AI milestones: IBM's Deep Blue is better at chess than any human and Watson proved it could outsmart Jeopardy world champions, but they don't have any consciousness of their own.

      The writer is conflating intelligence (in the context of AI) with consciousness. No-one is suggesting that algorithms are conscious or sentient. And as for the throwaway phrase "truly intelligent": what does that even mean? What does it mean for something to be "truly intelligent"? To display human-level intelligence? There's no reason to think that there is anything special about human-level intelligence, and in fact, we're already far behind machines in many areas (e.g. recall, storage, calculation, pattern recognition, etc.).

    2. With Ex Machina, the directorial debut of 28 Days Later and Sunshine writer Alex Garland, we can finally put the Turing test to rest. You've likely heard of it -- developed by legendary computer scientist Alan Turing (recently featured in The Imitation Game), it's a test meant to prove artificial intelligence in machines. But, given just how easy it is to trick, as well as the existence of more rigorous alternatives for proving consciousness, passing a test developed in the '50s isn't much of a feat to AI researchers today.

      This is not true. Turing never said anything about "consciousness". He actually asked, “Can machines communicate in natural language in a manner indistinguishable from that of a human being?” The Turing test is not a test of artificial intelligence. And it's definitely not a test aimed at "proving" consciousness.

    3. As originally conceived, the Turing test involves a natural language conversation between a machine and human conducted through typed messages from separate rooms. A machine is deemed sentient if it manages to convince the human that it's also a person; that it can "think."

      This is not even wrong.

    1. That's why we will always stay smarter than AI.

      This is confused writing that does a disservice to the reader: it uses terms and phrases inconsistently, leans on straw-man arguments, and cherry-picks examples.

    2. People will always be faster to adjust than computers, because that's what humans are optimized to do

      This is another different context. Now you're talking about being able to "adjust" to different contexts; your title talks about being "smarter" than computers. This is sloppy writing that's all over the place.

    3. For Booking.com, those new categories could be defined in advance, but a more general-purpose AI would have to be capable of defining its own categories. That's a goal Hofstadter has spent six decades working towards, and is still not even close.

      This is true. AGI is a long way away. But that's not the point. AI and machine learning are nonetheless making significant advances in narrowly constrained domains.

    4. perception is far more than the recognition of members of already-established categories — it involves the spontaneous manufacture of new categories at arbitrary levels of abstraction

      This is true. Hofstadter is talking about perception while the developer in the previous example is simply talking about recognition or identification. It's painful having to keep track of how often you're doing this bait-and-switch.
    5. This concept of context is one that is central to Hofstadter's lifetime of work to figure out AI

      This is about creating artificial general intelligence i.e. a general intelligence that's analogous to human intelligence. But that's not what machine learning, Google Translate, or image processing is about.

      This is another straw man; swap out narrowly constrained machine learning tasks for generalised intelligence, and then explain why we're not even close to machine general intelligence.

    6. They may identify attributes such as 'ocean', 'nature', 'apartment', but Booking.com needs to know whether there's a sea view, is there a balcony and does it have a seating area, is there a bed in the room, what size is it, and so on. Dua and his colleagues have had to train the machines to work with a more detailed set of tags that matches their specific context.

      Context is important. But the model trained on that newly labelled dataset can now automate the processing of every new image it comes across. This is why machine learning is so impressive. The algorithm the developer was talking about needed a lot of work to do what they wanted it to do. But now it can. And it will never get worse at identifying what's in a picture.

    7. A friend asked me whether Google Translate’s level of skill isn’t merely a function of the program’s database. He figured that if you multiplied the database by a factor of, say, a million or a billion, eventually it would be able to translate anything thrown at it, and essentially perfectly. I don’t think so. Having ever more 'big data' won’t bring you any closer to understanding, since understanding involves having ideas, and lack of ideas is the root of all the problems for machine translation today.

      This is exactly what I just said. You've swapped out "translate" and made it "understand", then argued for why Translate will never "understand". This is terrible writing.

      The fact is, Translate will get to the point where its translations are essentially perfect for 99% of the use cases thrown at it. And that all depends on having more data. And while it's true that "more data" on its own may not get us to machines understanding human language, that's simply not what anyone is suggesting Translate actually does.

    8. The bailingual engine isn’t reading anything — not in the normal human sense of the verb 'to read'. It’s processing text. The symbols it’s processing are disconnected from experiences in the world. It has no memories on which to draw, no imagery, no understanding, no meaning residing behind the words it so rapidly flings around.

      This is true. Machines don't "understand" us. Who cares? Google isn't making the claim that Translate is capable of being your friend. Google is saying that Translate can help you move between languages. This is a bullshit straw man argument. You're swapping out what the system does for something else, and then attacking the "something else".

    9. Clearly Google Translate didn’t catch my meaning; it merely came out with a heap of bull. 'Il sortait simplement avec un tas de taureau.' 'He just went out with a pile of bulls.' 'Il vient de sortir avec un tas de taureaux.' Please pardon my French — or rather, Google Translate’s pseudo-French.

      This is a bit like making fun of a 5-year-old for how poorly she speaks. But 5 years ago Translate was much worse. In 5 years' time these mistakes will be solved. And Translate will be helping millions of people every day. Why would we make fun of that?

    10. Humans are optimized for learning unlimited patterns, and then selecting the patterns we need to apply to deal with whatever situation we find ourselves in

      I don't think that this is true. I think it's more likely the case that we have the capacity to learn lots of patterns (not "unlimited") that we can generalise to many scenarios. We can extrapolate what we've learned in one context to many others. Algorithms will get there too.

    11. Computers are much better than us at only one thing — at matching known patterns.

      This is weird because there's a ton of what we do that's just pattern matching. In fact, you could probably make a decent argument that most of what we do that we call "intelligence" is "just" pattern matching. If this is the only thing that computers are better at, then I'd say that's pretty close to saying that the game is over.

    12. underestimate our own performance because we rarely stop to think how much we already know

      This, at least, is true. But you seem to be making an argument about what ought to be, based on what is, which doesn't work. Yes, we're very good at some things (walking around a room and not bumping into anything, for example) that we don't even think about, and that machines find very difficult to do. Things that are easy for us are hard for machines.

    13. apparent successes

      How is the fact that I can talk to my phone ("OK Google, take me to my appointment") and it responds by giving me turn-by-turn directions to the place, an "apparent success"?

    14. Machine intelligence is still pretty dumb, most of the time. It's far too early for the human race to throw in the towel.

      This is very different from the "always" claim in the title. Disingenuous writing.

    15. apparently

      No, it's just "impressive". We've gone from machines being unable to do things like translate human language, to being able to do it "reasonably well". That's like going from not being able to fly, to being able to fly poorly. Pretty impressive.

    16. Not for the first time in its history, artificial intelligence is rising on a tide of hype.

      Not so; it's rising in importance because it's accomplishing real things in the real world. Yes, there's some hype around what it will be able to do, but the attention given to what it can already do isn't hype; it's simply stating the facts.

      It's not often that a writer establishes their bias in the first sentence of a piece.

    1. The researchers started with 140,000 hours of YouTube videos of people talking in diverse situations. Then, they designed a program that created clips a few seconds long with the mouth movement for each phoneme, or word sound, annotated. The program filtered out non-English speech, nonspeaking faces, low-quality video, and video that wasn’t shot straight ahead. Then, they cropped the videos around the mouth. That yielded nearly 4000 hours of footage, including more than 127,000 English words.

      The time and effort required to put together this dataset is significant in itself. So much of the data we need to train algorithms simply doesn't exist in a useful format. However, the more we need to manipulate the raw information, the more likely we are to insert our own biases.

    2. They fed their system thousands of hours of videos along with transcripts, and had the computer solve the task for itself.

      Seriously, this is going to be how we move forward. We don't need to understand how it works; only that it really does work. Yes, it'll make mistakes but apparently it'll make fewer mistakes than the best human interpreters. Why would you be against this?

    1. Easy reading only makes you informed; hard reading makes you competent. Easy reading prevents you from being ignorant; hard reading makes you smarter.

      Anecdotally, I'd agree with this. There are a few books where simply reading a few paragraphs has changed my worldview. Those books took a long time to work my way through.

    2. reading easy texts exclusively is prone to confirmation bias and exacerbates our blind spots. After all, if you’re ignorant (partially or entirely) of an opposing view, surely you wouldn’t think you’re “objective”?

      Mortimer Adler's concept of syntopical reading goes some way towards addressing this, in that it actively encourages you to engage with multiple perspectives on the topic of interest.

    3. Pocket is also free, but I never really got into using it. (Doesn’t fit my workflow)

      I use Pocket to save almost all of what comes across my feed. I read it first in Pocket and if it deserves more attention, I share it to an Inbox in Zotero for later processing. Granted, this isn't as frictionless as capturing and processing in Notion, for example, but I actually want some friction for my workflow because it slows me down enough to ask if something really is worth keeping.

    4. Because that keeps us paying attention. That’s right, we’re still paying for “free” information.

      Nothing is free. Even using open source software isn't truly free because there's an opportunity cost of not using something else.

    5. The third rule is you should always make what you read your own.

      This is linked to elaboration i.e. rewriting concepts in your own words without reference to the source material.

    6. to solve information overload — or more appropriately, attention overload — we need to create a reading workflow

      We may just be using different words, but I imagine this to be broader than simply reading. Since it also includes note-taking and elaboration (see a few paragraphs later), I would put this under something like "managing information" rather than "reading".

    7. we should do three things: Manage what we pay attention toManage how we pay attentionProcess them deeply

      I usually think of filtering incoming information, capturing information that matters to me, processing that information as part of creating new value, and then sharing the resulting output.

    1. Ultimately, my dream—similar to that of Bush’s—is for individual commonplace books to be able to communicate not only with their users in the Luhmann-esqe sense, but also communicate with each other.

      What does "communicate" mean here? Does it mean that I pull in pieces of other texts (similar to transclusion), or is it more like an API that my PLE interacts with and manipulates? What advantages would we each get from this that I don't have now?

    2. IndieWeb friendly building blocks like Webmention, feeds (RSS, JSON Feed, h-feed), Micropub, and Microsub integrations may come the closest to this ideal.

      I've experimented with some aspects of the IndieWeb, trying to incorporate it into my blog but I still find it too complicated. Maybe that's just me though.
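
      For what it's worth, the core of Webmention is simpler than the surrounding tooling suggests: after linking to someone's post, you POST two form fields (source and target) to the endpoint their page advertises. A rough sketch follows, assuming the endpoint has already been discovered; all URLs are placeholders.

      ```python
      # Rough sketch of sending a Webmention: a form-encoded POST with the URL of
      # my post (source) and the URL I linked to (target). All URLs below are
      # placeholders; in practice the endpoint is discovered from the target
      # page's Link header or its <link rel="webmention"> element.
      import requests

      def send_webmention(endpoint: str, source: str, target: str) -> int:
          response = requests.post(endpoint, data={"source": source, "target": target})
          return response.status_code  # 201/202 typically means accepted for processing

      status = send_webmention(
          endpoint="https://example.org/webmention",          # placeholder endpoint
          source="https://myblog.example/2021/07/response",   # my post that links to theirs
          target="https://example.org/their-post",            # the post I'm responding to
      )
      print(status)
      ```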

    3. The idea of planting a knowledge “seed” (a note), tending it gradually over time with regular watering and feeding in a progression of Seedlings → Budding → Evergreen is a common feature.

      Just the idea of managing the tags and icons of this process feels exhausting.

    4. Mike Caulfield’s essays including The Garden and the Stream: A Technopastoral

      Such a great read.

    5. Second brain is a marketing term

      Indeed. After having spent some time going through posts and videos produced by this crowd, I realised that none of them use their 'systems' for anything other than telling people about their systems; Forte's Second brain is a product.

    6. one might consider some of the ephemeral social media stream platforms like Twitter to be a digital version of a waste book

      I like the idea of your Tweets being 'captured' in a space that you control, but not of them becoming a fixed part of it. Maybe an archive of your short notes and bookmarks of things you've shared. Would also be interesting to analyse over time.

    7. They have generally been physical books written by hand that contain notes which are categorized by headings (or in a modern context categories or tags. Often they’re created with an index to help their creators find and organize their notes.

      Describes the kind of physical notebooks I kept when I was younger; quotes, pictures, passages of text, etc. Anything that caught my attention.

    1. Some of the examples you describe – the extraordinary variance seen in sentencing for the same crimes (even influenced by such external matters as the weather, or the weekend football results), say, or the massive discrepancies in insurance underwriting or medical diagnosis or job interviews based on the same baseline information – are shocking. The driver of that noise often seems to lie with the protected status of the “experts” doing the choosing. No judge, I imagine, wants to acknowledge that an algorithm would be fairer at delivering justice?The judicial system, I think, is special in a way, because it’s some “wise” person who is deciding. You have a lot of noise in medicine, but in medicine, there is an objective criterion of truth.

      Sometimes. But in many cases everyone can do exactly the right thing and the outcome is still bad. In other cases, the entire team can be on the wrong track and the patient can improve, despite their interventions. Trying to establish cause and effect relationships in clinical practice is hard.

    1. If we look at the arc of the 20th century, heavier than air flight transformed our world in major ways.

      In other words, deep learning techniques, while insufficient to achieve human level AI, will nonetheless have a massive impact on society.

    1. Don’t be afraid to take courses first.

      This is a really good idea if you have the time, especially since some courses will include the relevant readings and main concepts. However, if the course rolls out over 3 months on a schedule, and you have a 2 month deadline, it may not be useful.

    2. the goal is not to get every fact and detail inside your head, but to have a good map of the area so you know where to look when you need to find it again

      And IMO, this is exactly what a zettelkasten gives you.

    3. Read everything you can, including making highlights of sections you think you may need to revisit later. If I finish a book or longer paper, I’ll often make a new document where I’ll pull notes and quotes from my original reading, as well as do my best to summarize what I read from memory. The goal here is partly to practice retrieval and understanding, but also partly to give yourself breadcrumbs so you can find things more easily later.

      I think that this is the issue right here. If you're reading "just-in-case" i.e. reading everything you can, it may not make sense to spend the extra effort in converting the highlights to permanent notes, since you may never come back to them. However, once you've decided that the highlights have value, you'll return to the source and review them as part of working on the project.

    4. I don’t find those methods very helpful, but it’s possible that I’m simply inept at them.

      I've spent about a year developing a zettelkasten, and now that I'm approaching 2000 individual notes on discrete concepts, I can say that I'm only starting to see some of the benefits. My point is, it might take a long time with a lot of effort, before the system starts paying off.

    5. it’s better to follow a breadth-first rather than depth-first search, since you can easily spend too much time going down one rabbit hole and miss alternate perspectives

      It's better to get an overview first so that you can identify promising concepts that need more attention.

    6. When following citations, I look for two factors: frequency and relevance. Works that are cited frequently are more central to a field.

      Google Scholar will provide a reasonably accurate citation count for works, although it means searching for each source separately.

    7. After reading about two dozen Kindle previews for the most relevant seeming ones

      Use book reviews and summaries to get a sense of what books are worth reading. A book-related interview with the author is another way to get some good insights before deciding on whether or not to read the whole book. Sometimes the answer you're looking for might be in the interview.

    8. Literature Review, Meta-Analysis and Textbooks

      These usually provide a broad overview of a topic, although a meta-analysis might only be relevant for certain kinds of research e.g. randomised controlled trials or other experimental designs. Scoping reviews are increasingly popular for broad overviews that don't necessarily drill down into the details.

    9. Wikipedia is usually a good starting point, because it tends to bridge the ordinary language way of talking about phenomena and expert concepts and hypotheses. Type your idea into Wikipedia in plain English, and then note the words and concepts used by experts

      The key is that Wikipedia provides structure on multiple levels, from a short article summary, to sub-sections of more fine-grained information, to key concepts, to reference lists of canonical works.

    10. Open-ended activity often languishes from a lack of completeness

      Unless it's something like learning in general, which is never complete.

    11. Setting Up Scope and Topic

      You need to establish boundaries with respect to what you want to learn, otherwise you'll keep going towards whatever catches your attention in the moment.

  3. Jun 2021
    1. A brief overview of predictive processing.

    2. If your predictions don’t fit the actual data, you get a high prediction error that updates your internal model—to reduce further discrepancies between expectation and evidence, between model and reality. Your brain hates unfulfilled expectations, so it structures its model of the world and motivates action in such a way that more of its predictions come truer.

      Does the high prediction error manifest as surprise? How do we perceive this prediction error?

    3. Your brain runs an internal model of the causal order of world that continually creates predictions about what you expect to perceive. These predictions are then matched with what you actually perceive, and the divergence between predicted sensory data and actual sensory data yields a prediction error.

      Why does it do this? Does this reduce cognitive workload or something?
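
      One way to make the mechanism concrete is a deliberately simplified sketch (not the actual neuroscience): treat the internal model as a single estimate that is nudged towards each observation in proportion to the prediction error, so the discrepancy shrinks over time. The observations and learning rate below are invented.

      ```python
      # Deliberately simplified sketch of prediction-error-driven updating: the
      # estimate is nudged towards each observation in proportion to the error,
      # so the gap between prediction and sensory data shrinks over time.
      # Observations and learning rate are invented for illustration.
      LEARNING_RATE = 0.3

      estimate = 0.0                       # the model's current prediction
      observations = [1.0, 1.2, 0.9, 1.1]  # incoming "sensory data"

      for obs in observations:
          prediction_error = obs - estimate
          estimate += LEARNING_RATE * prediction_error
          print(f"observed {obs:.1f}, prediction error {prediction_error:+.2f}, new estimate {estimate:.2f}")
      ```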

    4. If your brain is Bayesian, however, it doesn’t process sensory data like that. Instead, it uses predictive processing (also known as predictive coding)2 to predict what your eyes will see before you get the actual data from the retina.

      Mental.

    5. Your brain is a prediction machine.

      See also Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.

    1. There are obvious benefits to AI systems that are able to automatically learn better ways of representing data and, in so doing, develop models that correspond to humans’ values. When humans can’t determine how to map, and subsequently model, values, AI systems could identify patterns and create appropriate models by themselves. However, the opposite could also happen — an AI agent could construct something that seems like an accurate model of human associations and values but is, in reality, dangerously misaligned.

      We don't tell AI systems about our values; we let them observe our behaviour and make inferences about our values. The author goes on to explain why this probably wouldn't work (e.g. the system makes us happy by stimulating the pleasure centres of our brains), but surely a comprehensive set of observations would inform the system that humans also value choice and freedom, and that these might compete with other preferences? We might also value short-term pain for long-term benefits (e.g. exercising to increase cardiorespiratory fitness).

    2. Sometimes humans even value things that may, in some respects, cause harm. Consider an adult who values privacy but whose doctor or therapist may need access to intimate and deeply personal information — information that may be lifesaving. Should the AI agent reveal the private information or not?

      This doesn't seem like a good example. How is saving a life potentially harmful?

      Maybe a better example would be someone who wants to smoke?

    3. A thermostat, for example, is a type of reflex agent. It knows when to start heating a house because of a set, predetermined temperature — the thermostat turns the heating system on when it falls below a certain temperature and turns it off when it goes above a certain temperature. Goal-based agents, on the other hand, make decisions based on achieving specific goals. For example, an agent whose goal is to buy everything on a shopping list will continue its search until it has found every item. Utility-based agents are a step above goal-based agents. They can deal with tradeoffs like the following: Getting milk is more important than getting new shoes today. However, I’m closer to the shoe store than the grocery store, and both stores are about to close. I’m more likely to get the shoes in time than the milk.” At each decision point, goal-based agents are presented with a number of options that they must choose from. Every option is associated with a specific “utility” or reward. To reach their goal, the agents follow the decision path that will maximize the total rewards.

      Types of agents.
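
      A minimal sketch of the distinction described in the passage (all thresholds, goals, and utility numbers are invented): the reflex agent reacts only to the current reading, the goal-based agent keeps going until the goal is met, and the utility-based agent scores each option and picks the best trade-off.

      ```python
      # Minimal sketch of the three agent types described above. All thresholds,
      # goals, and utility numbers are invented for illustration.

      # Reflex agent: acts on the current percept alone (the thermostat).
      def reflex_thermostat(temperature_c: float) -> str:
          return "heat on" if temperature_c < 19.0 else "heat off"

      # Goal-based agent: keeps acting until the goal state is reached.
      def goal_based_shopper(shopping_list: set, basket: set) -> set:
          return shopping_list - basket  # items still to find; done when empty

      # Utility-based agent: scores options and chooses the best trade-off.
      def utility_based_choice(options: dict) -> str:
          # options maps an action to (importance, probability of success)
          return max(options, key=lambda a: options[a][0] * options[a][1])

      print(reflex_thermostat(17.5))                              # heat on
      print(goal_based_shopper({"milk", "bread"}, {"bread"}))     # {'milk'}
      print(utility_based_choice({
          "get milk":  (0.9, 0.4),   # more important, but less likely before closing
          "get shoes": (0.5, 0.8),   # less important, more likely to succeed
      }))                                                         # get shoes
      ```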

    4. As data-driven learning systems continue to advance, it would be easy enough to define “success” according to technical improvements, such as increasing the amount of data algorithms can synthesize and, thereby, improving the efficacy of their pattern identifications. However, for ML systems to truly be successful, they need to understand human values. More to the point, they need to be able to weigh our competing desires and demands, understand what outcomes we value most, and act accordingly.

      Are we good at this? Maybe on a personal level this might be true (e.g. I may prefer speed over safety but only up to a certain point, after which my preference would switch to safety). But at a social level? How do you weigh the competing interests and values of cultures or religions?

    1. objective function that tries to describe your ethics

      We can't define ethics and human values in objective terms.

    2. The problem is, algorithms were never designed to handle such tough choices. They are built to pursue a single mathematical goal, such as maximizing the number of soldiers’ lives saved or minimizing the number of civilian deaths. When you start dealing with multiple, often competing, objectives or try to account for intangibles like “freedom” and “well-being,” a satisfactory mathematical solution doesn’t always exist.

      We do better with algorithms where the utility function can be expressed mathematically. When we try to design for utility/goals that include human values, it's much more difficult.
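
      A toy illustration of why this is hard (all scores and weights are invented): with a single objective the "best" action is well defined, but as soon as objectives compete, someone has to choose the weights, and the "optimal" action changes with them.

      ```python
      # Toy illustration: with one objective the best action is well defined; with
      # competing objectives the answer depends entirely on who sets the weights.
      # All scores and weights are invented.
      actions = {
          "option A": {"lives_saved": 8, "freedom": 0.2},
          "option B": {"lives_saved": 5, "freedom": 0.9},
      }

      def best_action(weights: dict) -> str:
          score = lambda a: sum(weights[k] * actions[a][k] for k in weights)
          return max(actions, key=score)

      print(best_action({"lives_saved": 1.0, "freedom": 0.0}))   # option A
      print(best_action({"lives_saved": 0.1, "freedom": 1.0}))   # option B
      ```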

    3. many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs

      And the problem is that, even human beings are not very sensitive to how this can be done well. Because there is such diversity in human cultures, preferences, and norms, deciding whose values to prioritise is problematic.

    1. focusing on more conventional issues, since they’ll be what you’re most likely to come across. But these are unlikely to be your highest-impact options

      Optimise your decision-making to privilege high impact.

    1. Another much-debated question has been, ‘How does the agent’s choice of macro-block placements survive subsequent steps in the chip-design process?’ As mentioned earlier, human engineers must iteratively adjust their floorplans as the logic-circuit design evolves. The trained agent’s macro-block placements somehow evade such landmines in the design process, achieving superhuman outcomes for timing (ensuring that signals produced in the chip arrive at their destinations on time) and for the feasibility and efficiency with which wiring can be routed between components.

      You'd expect that the placement needs to be adjusted later on, as the design process unfolds and other blocks are added. It seems as if the algorithm is looking into the future and predicting what will need to go where, which enables it to place blocks now that won't need to be adjusted later.

    2. Mirhoseini et al. estimate that the number of possible configurations (the state space) of macro blocks in the floorplanning problems solved in their study is about 10^2,500. By comparison, the state space of the black and white stones used in the board game Go is just 10^360.

      Again, just crazy complexity.

    3. Modern chips are a miracle of technology and economics, with billions of transistors laid out and interconnected on a piece of silicon the size of a fingernail. Each chip can contain tens of millions of logic gates, called standard cells, along with thousands of memory blocks, known as macro blocks, or macros. The cells and macro blocks are interconnected by tens of kilometres of wiring to achieve the designed functionality.

      Insane. I had no idea there was this much going on in a modern chip.

    1. Persistent Identifiers (PIDs) beyond the DOI

      Research other persistent identifiers besides DOI.

    2. To get closer to attaining coveted “rich” metadata and ultimately contribute to a “richer” scholarly communication ecosystem, journals first need to have machine-readable metadata that is clean, consistent, and as interoperable as possible.

      Which WordPress plugins provide structured metadata functionality for OpenPhysio?
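
      One common approach, regardless of plugin, is schema.org metadata embedded as JSON-LD in the article page so that machines can parse it. The sketch below is illustrative only; the title, author, ORCID, DOI, and dates are placeholders, not real OpenPhysio records.

      ```python
      # Illustrative sketch of machine-readable article metadata as JSON-LD
      # (schema.org ScholarlyArticle). All values are placeholders.
      import json

      article_metadata = {
          "@context": "https://schema.org",
          "@type": "ScholarlyArticle",
          "headline": "Example article title",
          "author": [{"@type": "Person", "name": "A. Author",
                      "identifier": "https://orcid.org/0000-0000-0000-0000"}],
          "datePublished": "2021-08-01",
          "isPartOf": {"@type": "Periodical", "name": "OpenPhysio"},
          "sameAs": "https://doi.org/10.0000/example",
      }

      # The JSON-LD block a plugin would inject into the page's <head>.
      print('<script type="application/ld+json">')
      print(json.dumps(article_metadata, indent=2))
      print("</script>")
      ```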

    1. To see this in action, visit somebody’s Open Ledger and recognize them for a scholarly contribution. For example, you could visit my Open Ledger: https://rescognito.com/0000-0002-9217-0407 and click the “Recognize” button at the top of the page to recognize me for a scholarly activity such as “Positive public impact”.

      Is Rescognito a service that authors will use independently of journals? If so, how does it reduce the cost of publishing? I can see some value in using the service, but it appears to be free, and even though the assertions being captured on the website are interesting, they're not obviously linked to journals. That means the systems aren't connected, and journals will keep having to create the assertions (which, as I've made clear, I don't believe they actually do). Is the idea that Rescognito will create a database of assertions that publishers will subscribe to, thus relieving them of yet another role that we supposedly pay for?

    2. We also make it possible to generate and store assertions about activities that have no proximate physical or digital corollary, such as “mentoring” and “committee work”

      Now this looks interesting.

    3. No data transformations, no XML manipulation, no data synchronization, no DTD updates, no transfer between vendors, no coordination, no training, no management time! In addition, the assertion outputs are superior in granularity, provenance, specificity, display, and usability.

      No disagreement here. I'm just not convinced that this updated workflow justifies the expense of publishing. If anything, it should be further argument for the fact that publishing should be free.

    4. Another benefit of structured assertions is that they can be accessed via APIs

      True, but none of the assertions is actually generated by the publisher/journal, so why wouldn't authors simply be able to do this themselves?

    5. Because contributors are verifiably and unambiguously identified by their ORCID iD

      Again, a process that has nothing to do with the journal, other than asking authors for their ORCID links.

    6. The entire process has to be coordinated, managed, and synchronized — often over multiple continents

      Sure, but this coordination is almost always done by unpaid editorial staff and reviewers. Where is the expense for the journal?

    7. have the assertions be made by trusted, branded gatekeepers to guarantee their provenance

      I imagine that the increase in data embedded in various workflows will soon eliminate the need for even this very tenuous claim. Soon, when I publish something there will be embedded metadata (linked data) that verifies who I am. My institutional affiliation will be the same. Or something like blockchain could also take over this role. In addition, changes to the content will all be tracked over time and cryptographically signed to confirm that what was originally published has not been changed.

    8. Think of it this way: the value is not in the content, it is in the assertions — scholarly publishers don’t publish content, they publish assertions. This provides a coherent model to explain why publishers add value even though the source content and peer review are provided for free. Journals incur legitimate costs generating and curating important and valuable assertions.

      This is just bullshit. Most of the "assertions" from above are generated by unpaid peer reviewers, or include processes coordinated by unpaid editorial staff. What exactly are we paying publishers for?

    9. The authors have these conflicts of interest

      Again, simply a statement made by the person submitting the article. The journal does no verification.

    10. The findings are supported by the data”, “The work is novel”, “The work builds-on (i.e., cites) other work”

      This is all the work done by (unpaid) peer reviewers.

    11. The document was peer reviewed by anonymous reviewers

      The journal editorial staff do send the work out for review, but in the case of most journals the editorial staff are academics who aren't paid either.

    12. It was not plagiarized

      Peer reviewers again.

    13. The statistics are sound

      The peer reviewers do this and as you've already said, they're not paid.

    14. Who funded the work

      Again, this is a statement made by the person submitting. What verification does the journal do? Nothing.

    15. When was it released

      OK fine, the journal adds a publication date to the article.

    16. Where was the work done

      Same thing; the submitter basically tells the journal where they're from. The journal adds nothing here.

    17. Who created the document

      Publishers don't do any kind of identity verification, so "who created the document" is whoever the submitter says they are.

    18. Publisher costs usually include copyediting/formatting and organizing peer review. While these content transformations are fundamental and beneficial, they alone cannot justify the typical APC (Article Publication Charge), especially since peer reviewers are not paid.

      But peer reviewers are largely responsible for generating the assertions you talk about in the next paragraph, and which apparently, justify the cost of publishing.

    1. Journals like Science and Nature are financially viable and they create a kind of club. However, this is not a knowledge community in any meaningful sense. The authors of an article on the genome of an organism are not producing knowledge in concert with those of an article on the formation of stars. In these cases the “good” being produced is prestige, or brand value. Rather than being knowledge clubs, they are closer to “social network markets”, in which the choices that individuals make, such as where to seek to publish, are driven by the actions of those with higher prestige in the network. Such markets are effective means for extracting resources out of communities.

      I wonder if the profit margin of a journal ("community") could be used as a proxy indicator of the value that it creates for the community. Too much and its focus is on making money. Is the ideal that the journal/community breaks even?

    2. we propose that the value of a well-run journal does not lie simply in providing publication technologies, but in the user community itself. Journals should be seen as a technology of social production and not as a communication technology.

      Such a powerful shift.

    3. social life of journals and the knowledge communities they sustain

      Moves the emphasis from the article/PDF to the people themselves.

    1. You’ll be unlikely to stick with and excel in any path in the long term if you don’t enjoy it and it doesn’t fit with the rest of your life.

      You probably shouldn't strive to have an impactful career if it means derailing everything else you care about. Burning out is also something that you want to avoid.

    2. the main components of a satisfying job are:A sense of meaning or helping othersA sense of achievementEngaging work with autonomySupportive colleaguesSufficient ‘basic conditions’ such as fair pay and non-crazy working hours

      I'm fairly lucky in that I've found all of these in academia. Although I know that I'm in the minority here.

    3. your long-term goal is to maximise the product of these three factors over the remainder of your career

      So, increase your focus on the problem, take advantage of the opportunities you can find, and ensure that the domain is a good personal fit.

    4. we mean increasing wellbeing in the long term (the ‘positive impact’ part), and treating everyone’s interests as equal, no matter their gender, background, where they live, or even when they live (that’s the ‘impartial’ part).

      Bringing "time" into the discussion is especially impactful for me; it helps me shift my thinking into the long term, so rather than planning for what might be good today, or this year, or the next 10 years, I'm trying to think further out than that.

    5. it’s hard to agree on what the terms ‘help people’, ‘positive impact’, and ‘personally fulfilling’ actually mean — let alone craft goals that will help you achieve these things

      It has to be about what these phrases mean to you.

    1. We think that very often, much of what matters about people’s actions is the difference they make over the long term — basically because we think that the welfare of those who live in the future matters; indeed, we think it matters no less than our own.

      If we consider that our species may exist for many hundreds of thousands of years into the future (if we don't completely stuff it up now), then future people may matter more than us because there will be so much more potential for well-being.

    2. What do we mean by “considered impartially”? In short, we mean that we strive to treat equal effects on different beings’ welfare as equally morally important, no matter who they are — including people who live far away or in the future, and including non-humans.

      I'm drawn to this aspect as it really broadens the scope of how we might think about this. I'm especially interested in the idea of future people and non-humans.

    3. most of the ideas for what welfare might consist in — for instance happiness or getting to live a life of your choosing– are tightly correlated with one another

      The specifics of the definition aren't very important for decision-making as the details are closely related anyway.

    4. we think it makes sense to use the following working definition of social impact: “Social impact” or “making a difference” is about promoting welfare, considered impartially, over the long term — without sacrificing anything that might be of comparable moral importance.

      I wonder if it makes sense to try and limit the scope of whose welfare we should consider. For example, I can potentially have a bigger impact on those in my community (and "community" can be quite broadly defined) than if I try to have an impact beyond that.

    1. When people value their attention and energy, they become valuable

      Related to the idea of career capital, which is the set of knowledge and skills that makes you hard to replace.

    1. the advertising-driven ones

      We want everything to be free but someone has to pay. Can we convince each other that good journalism is worth paying for? That social networks are worth paying for? That search is worth paying for? Someone has to pay and it feels like we've decided that we're OK with advertisers paying.

    2. Our world is shaped by humans who make decisions, and technology companies are no different…. So the assertion that technology companies can’t possibly be shaped or restrained with the public’s interest in mind is to argue that they are fundamentally different from any other industry

      We are part of sociotechnical systems.

  4. May 2021
    1. Career decision making involves so much uncertainty that it’s easy to feel paralysed. Instead, make some hypotheses about which option is best, then identify key uncertainties: what information would most change your best guess?

      We tend to think that uncertainties can't be weighted in our decision-making, but we bet on uncertainties all the time. Rather than throw your hands up and say, "I don't have enough information to make a call", how can we think deliberately about what information would reduce the uncertainty?

    2. One of the most useful steps is often to simply apply to lots of interesting jobs.

      Our fear of rejection may limit this path. One way to get over the fear of rejection may be to put yourself into the position where you're getting rejected a lot.

    3. by looking at how others made it

      Who is currently doing the job that I want to be doing?

    4. think of your career as a series of experiments designed to help you learn about yourself and test out potentially great longer-term paths

      I wonder if there's a connection here to Duke, A. (2019). Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts. Portfolio.

      I haven't read the book but it's on my list.

    5. The returns of aiming high are usually bigger than the costs of switching to something else if it doesn’t work out, so it’s worth thinking broadly and ambitiously

      You’ve got to think about big things while you’re doing small things, so that all the small things go in the right direction. - Alvin Toffler

    6. ask what the world needs most

      Focuses attention on the fact that this isn't fundamentally about you. This is an act of service.

    7. Being opportunistic can be useful, but having a big positive impact often requires doing something unusual and on developing strong skills, which can take 10+ years.

      Academics (and other knowledge workers) tend not to focus too much attention on getting better. Skills development happens in an ad hoc way rather than a structured and focused approach to improvement.

    8. career capital

      You must first generate this capital by becoming good at something rare and valuable. It is something that makes you hard to replace and is therefore the result of putting effort into developing skills that differentiate you from others.

      Newport, C. (2016). Deep Work: Rules for Focused Success in a Distracted World (1 edition). Grand Central Publishing.

    9. what helps the most people live better lives in the long term, treating everyone’s interests as equal

      I wonder if it's problematic to focus your attention on the community closest to you. For example, I'm a lecturer in physiotherapy at a university. Should I be trying to make a near-insignificant difference to "the most people", or should I be trying to make a bigger positive difference to my little community?

    10. We’d encourage you to make your own definition of each.

      This needs to be a personally meaningful plan, so asking participants to create their own definitions is useful.

    1. Why educational technologies haven't transformed the college experience

      Interesting that Larry Cuban was saying something similar in 1992.

    1. ABSTRACT

      From the author on Twitter: "The main promises of data-driven education (real-time feedback, individualised nudges, self-regulated learning) remain incompatible with the entrenched bureaucratic & professional logics of mass schooling... at the moment we have 'schoolified' data rather than 'datafied' schools."

    2. promises of digital “dataism” are thwarted by the entrenched temporal organisation of schooling, and teacher-centred understandings of students as coerced subjects

      The structures of schools, and beliefs of teachers, undermine attempts to use technology in ways outside of this frame.

    3. Using sociological theorisation of institutional logics

      The authors view the logic of institutions through a social theory lens.

    1. Perhaps for everyone, a moment or occasion of leadership will emerge, reveal itself, and call to us with the painful, necessary task of speaking up, patiently asking for alternatives, insistently rocking the boat

      Leaders - and teachers - must recognise those moments when we're called to do something courageous.

      And we must find or create opportunities for our students to do the same.

    2. Ivan Illich, no fan of schooling or authoritarian structures of any kind, writes movingly about the role of the true, deep teacher. So does George Steiner, using language of “master” and “disciple” that would make many open-web folks cringe–or worse. Yet even the great and greatly democratic poet Walt Whitman salutes his “eleves” at one point. And I have experienced and been very grateful for the wisdom of those teacher-leaders who brought me into a fuller experience and understanding of my own responsibilities as a leader.
    3. leading is risky business

      As is teaching.

    1. but is about writerly choices

      We don't often realise that writing is a creative act, and that all creative acts are about making choices.

    2. Getting to grips with structure means keeping your reader in mind.

      Always write with the reader in mind. Good writing isn't a vanity project, i.e. it's not about you. If you can't get your message across clearly, then you're letting your reader down.

    3. it’s more accurate to say that readers notice the absence of structures, and/or when we shift the logics of one structure to another mid-stream, without saying anything.

      I often see this in my undergraduate and postgraduate students; they make a conceptual move without signalling it to the reader, which leaves the reader feeling discombobulated.

    1. The more I used Roam, the more valuable my notes became. This increase in value made me realize that I needed a little more trust in the developers' approach to release management. It wasn't always clear how changes would affect my workflow.

      I have a principle when it comes to choosing software: the more time I spend using a tool, the lower the switching costs need to be.

    1. I also related old notes on similar topics to the Kanban concepts. In some cases, I saw a detail from Getting Things Done in a new light and took note about that

      This is why some people avoid the term "permanent note"; it creates the impression that the notes are somehow fixed, whereas they are constantly undergoing refinement and connection.

    2. What was your reading intent and how can you capture it best?

      You need to know why you're reading.

    1. Judgments made by different people are even more likely to diverge. Research has confirmed that in many tasks, experts’ decisions are highly variable: valuing stocks, appraising real estate, sentencing criminals, evaluating job performance, auditing financial statements, and more. The unavoidable conclusion is that professionals often make decisions that deviate significantly from those of their peers, from their own prior decisions, and from rules that they themselves claim to follow.

      As educators (and disciplinary "experts") we like to think that our judgements of student performance are objective, as if our decisions are free from noise. I often point out to my students that their grades on clinical placements may be more directly influenced by their assessor's relationship with their spouse than by their actual clinical performance. A toy simulation of this kind of noise follows below.
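
      A minimal simulation of that point in Python; the assessor count, biases and marks below are entirely hypothetical:

      import random

      random.seed(1)
      true_performance = 70   # the mark this single performance "deserves" (hypothetical)

      grades = []
      for assessor in range(20):
          personal_bias = random.gauss(0, 5)   # stable leniency/severity of each assessor
          daily_noise = random.gauss(0, 4)     # mood, fatigue, what happened at home
          grades.append(true_performance + personal_bias + daily_noise)

      print(f"min={min(grades):.0f}, max={max(grades):.0f}, spread={max(grades) - min(grades):.0f}")
      # The identical performance can land across a wide range of marks before any
      # difference in the student's actual work is involved.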

    1. rationale for the decision taken

      Wait, I thought this was "decision support" and not "decision making"?

    2. particular benefit to authors for whom English is not a first language

      Indeed.

    3. an AI tool which screens papers prior to peer review could be used to advise authors to rework their paper before it is sent on for peer review.

      This seems reasonable; rather than using the AI to make a decision, it's being used to make a suggestion to authors, highlighting areas of potential weakness, and giving them an opportunity to rework those areas.

    4. more inclined to reject papers based on this negative first impression derived from what are arguably relatively superficial problems.

      When you train machine learning systems on humans, you're definitely building in our biases.

    5. One possible explanation for the success of this rather simplistic model is that if a paper is presented and reads badly, it is likely to be of lower quality in other, more substantial, ways, making these more superficial features proxy useful metrics for quality.

      This seems to assume that authors have English as a first language. If you're using "reads badly" as a proxy indicator of quality, aren't you potentially missing out on good ideas?
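
      As a minimal sketch of the kind of "rather simplistic model" being described (not the actual tool), imagine a classifier trained on past desk-reject decisions using only superficial presentation features; the feature names and data below are hypothetical:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Each row: [avg_sentence_length, spelling_errors_per_100_words, missing_sections]
      X = np.array([
          [22, 0.5, 0],   # well-presented manuscripts...
          [25, 1.0, 0],
          [48, 6.0, 2],   # ...and poorly-presented ones
          [40, 4.5, 1],
      ])
      y = np.array([1, 1, 0, 0])  # historical human decisions: 1 = sent to review, 0 = desk-rejected

      model = LogisticRegression().fit(X, y)

      # The model reproduces whatever was encoded in those human decisions, including
      # any tendency to reject sound ideas written in imperfect English.
      print(model.predict([[45, 5.0, 1]]))   # surface features alone drive the prediction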

    1. most journals still insist on submissions in .docx format.

      We work within an ecosystem and it's hard to change your own behaviour when so much is determined by other nodes in the network.

    1. The most common way to stage an argument in the thesis goes something like this:
       Here is a puzzle/problem/question worth asking. If we know more about this puzzle/problem/question then something significant (policy, practice, more research) can happen.
       Here is what we already know about the puzzle/problem/question. I’ve used this existing knowledge (literatures) to help: my thinking and approach; my research design; make sense of my results; and establish where my scholarly contribution will be.
       Here is how I designed and did the research in order to come up with an “answer”.
       Here’s the one/two/three clusters of results.
       Missing step
       Now here’s my (summarised) “answer” to the puzzle/problem/question I posed at the start. On the back of this answer, here’s what I claim as my contribution(s) to the field. Yes I didn’t do everything, but I did do something important. Because we now know my answer, and we didn’t before I did the research, then here are some possible actions that might arise in policy/practice/research/scholarship.
    1. we must shed our outdated concept of a document. We need to think in terms of flexible jumping and viewing options. The objects assembled into a document should be dealt with explicitly as representations of kernel concepts in the authors' minds, and explicit structuring options have to be utilized to provide a much enhanced mapping of the source concept structures.

      This seems like the original concept behind Microsoft's Fluid document framework, and Apple's earlier OpenDoc project.
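
      A rough sketch of my own reading of that idea (not Engelbart's proposal, nor Microsoft's or Apple's actual designs): the "document" becomes just one view generated from a set of explicitly linked concept objects.

      # Hypothetical toy model: kernel concepts as objects; views are generated from them.
      concepts = {
          "claim": {"text": "A central claim.", "links": ["evidence"]},
          "evidence": {"text": "Support for the claim.", "links": []},
      }

      def linear_view(concepts):
          """Render the concepts as a conventional, linear document."""
          return "\n\n".join(node["text"] for node in concepts.values())

      def outline_view(concepts, root="claim", depth=0):
          """Render the same concepts as a jumpable outline that follows the links."""
          node = concepts[root]
          lines = ["  " * depth + "- " + node["text"]]
          for child in node["links"]:
              lines.extend(outline_view(concepts, child, depth + 1))
          return lines

      print(linear_view(concepts))
      print("\n".join(outline_view(concepts)))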

    2. It really gets hard when you start believing in your dreams.

      It's hard because of the emotional investment and subsequent pain when you see your dreams not being realised.

    3. Draft notes, E-mail, plans, source code, to-do lists, what have you

      The personal nature of this information means that users need control over their information. Tim Berners-Lee's Solid (Social Linked Data) project looks like it could do some of this stuff.

    4. editor-browser tool sets

      This hasn't happened yet, and is unlikely to happen anytime soon. We seem to be moving away from a read/write web, with authors only being able to edit content they've created on domains that they control. The closest I've seen to this is the Beaker Browser.

    5. Many years ago, I dreamed that digital technology could greatly augment our collective human capabilities for dealing with complex, urgent problems. Computers, high-speed communications, displays, interfaces — it's as if suddenly, in an evolutionary sense, we're getting a super new nervous system to upgrade our collective social organisms. I dreamed that people were talking seriously about the potential of harnessing that technological and social nervous system to improve the collective IQ of our various organizations.

      And yet here we are, with the smartest computer scientists in the world spending all their time trying to figure out how to make us watch more videos so that we can be shown more ads.

    1. we miss the deep understanding that comes from dialogue and exploration.

      Knowledge emerges from interaction.

    1. The medium should allow people to think with their bodies, because we are more than fingers and hands.

      Embodied cognition.

    2. “With no one telling me what to work on, I had to decide for myself what was meaningful in this life. Because of how seriously I took my work, this process was very difficult for me,”

      A blank canvas can feel overwhelming. Some structure is better than no structure. Scaffolding is important for novice learners.

    3. most professional programmers today spend their days editing text files inside an 80-column-wide command line interface first designed in the mid-1960s.

      For a longer discussion of this concept, see Somers (2017, September 26). The Coming Software Apocalypse. The Atlantic.

    4. Commercial apps force us into ways of working with media that are tightly prescribed by a handful of people who design them

      We're constrained by the limits of the designers.

    5. “Every Representation of Everything (in progress),” showcasing examples of different musical notation systems, sign languages, mathematical representations, chemistry notations.

      Sounds a bit like Mathematica. See here for a basic overview of Mathematica in the context of academic publishing.

    6. Bret Victor, the engineer-designer who runs the lab, loves these information-rich posters because they break us out of the tyranny of our glassy rectangular screens.

      Seems odd. Glassy rectangular screens can also be "information rich". I also like posters but not because of the information density.

    1. “When we had electromechanical systems, we used to be able to test them exhaustively,” says Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years. She became known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error. “We used to be able to think through all the things it could do, all the states it could get into.” The electromechanical interlockings that controlled train movements at railroad crossings, for instance, only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.

      The flexibility of software, relative to hardware, adds many layers of complexity, pushing its behaviour beyond our capacity to understand or test exhaustively.
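
      A minimal Python illustration, with made-up input counts: a relay-based interlocking with a handful of on/off inputs can be enumerated completely, but every added input doubles the state space of a software controller.

      from itertools import product

      def count_states(n_binary_inputs):
          """Enumerate every on/off configuration and return how many there are."""
          return sum(1 for _ in product([0, 1], repeat=n_binary_inputs))

      # A small electromechanical interlocking: fully checkable by hand.
      print(count_states(4))    # 16 configurations -> fits on a sheet of paper

      # A modest software controller with dozens of boolean flags (never mind integers,
      # timers, and interleaved events) is already far beyond exhaustive testing.
      print(2 ** 40)            # ~1.1 trillion configurations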

    1. Leibniz’s notation, by making it easier to do calculus, expanded the space of what it was possible to think.

      See Bret Victor's presentation on Media for thinking the unthinkable, which expands on this idea.
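
      A small worked instance of the point (standard calculus, written here in LaTeX): in Leibniz's notation the chain rule reads like a cancellation you can carry out mechanically, so the notation itself does part of the thinking.

      \[
      \frac{dy}{dx} = \frac{dy}{du}\cdot\frac{du}{dx}
      \qquad\text{e.g. } y = u^{3},\; u = x^{2}+1
      \;\Rightarrow\;
      \frac{dy}{dx} = 3u^{2}\cdot 2x = 6x\,(x^{2}+1)^{2}
      \]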

    2. As science becomes more about computation, the skills required to be a good scientist become increasingly attractive in industry. Universities lose their best people to start-ups, to Google and Microsoft. “I have seen many talented colleagues leave academia in frustration over the last decade,” he wrote, “and I can’t think of a single one who wasn’t happier years later.”

      Well, this sucks to read (I'm an academic).

    3. Basically, the essay is about the difference between Wolfram Alpha / Mathematica, and Jupyter notebooks. One is a commercial product that's really complex and centrally designed; the other is open source, chaotic, and cobbled together from bits and pieces. But the scientific community seems to be moving towards open (i.e. Jupyter notebooks).

    4. maybe computational notebooks will only take root if they’re backed by a single super-language, or by a company with deep pockets and a vested interest in making them work. But it seems just as likely that the opposite is true. A federated effort, while more chaotic, might also be more robust—and the only way to win the trust of the scientific community.

      It's hard to argue that scientific publishing should move under the ultimate control of an individual.

    5. “Frankly, when you do something that is a nice clean Wolfram-language thing in a notebook, there’s no bullshit there. It is what it is, it does what it does. You don’t get to fudge your data,” Wolfram says.

      Although this clearly only works with a specific type of data.

    6. “The place where it really gets exciting,” he says, “is where you have the same transition that happened in the 1600s when people started to be able to read math notation. It becomes a form of communication which has the incredibly important extra piece that you can actually run it, too.”

      You can see Bret Victor describing this idea in more detail here.

    7. What I’m studying is something dynamic. So the representation should be dynamic.”

      Related to Victor's Dynamicland project, as well as his thoughts on a "dynamic medium" (see "A note about 'The Humane Representation of Thought'", Worrydream: http://worrydream.com/TheHumaneRepresentationOfThought/note.html).

    8. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

      In a similar way we started using the internet (and HTML) to mimic the user interfaces of CDs and DVDs. We still use HTML to create faux book interfaces for magazines, complete with page flip animations (although thankfully these are less common than they used to be).

    9. Papers may be posted online, but they’re still text and pictures on a page.

      We still call them "papers".

    10. There was no public forum for incremental advances.

      I've never thought of the academic paper as a format that enabled the documentation of incremental progress.

    1. The pandemic has forced everyone to become video editors and generally not very good ones.

      Some teachers, like Michael Wesch, have made video content production their side-gig, though. We constantly expect our students to expand their skill-set, so is it unreasonable to expect teachers to do the same?

    2. providing context rather than content

      What if you use video that you've created as part of creating this context?

    3. focus on creating communities

      OK, so teachers shouldn't be expected to create video content, but it's reasonable to expect them to create communities? That seems weird.

    4. Teachers feel bound by tradition to deliver content and the students expect the teacher to deliver content and it's very hard to escape from this mindset.

      We're trapped in our traditions.

    1. My assertion is based on the observation that a great deal of learning does take place in connective environments on the world wide web, that these have scaled to large numbers, and that often they do not require any institutional or instructional support.
    1. community

      A community is not the same thing as a collection.

    2. Offense, insult, and hurt feelings are not particularly important

      Not only is it not important, you do not have the right to be offended.

      See here (Salman Rushdie), here (John Cleese), here (Jordan Peterson), here (Stephen Fry), and...well, you get the point.

    3. life contains more suffering than happiness

      An argument for anti-natalism.

    4. Some things get better and some things get worse.

      Exactly. Both can be true at the same time.

    5. generally up to you

      Although you can probably behave in ways that influence what others say about you.