10,000 Matching Annotations
  1. Sep 2022
    1. Many know from their own experience how uncontrollable and irretrievable the often valuable notes and chains of thought are in note books and in the cabinets they are stored in

      Heyde indicates how "valuable notes and chains of thought are" but also points out "how uncontrollable and irretrievable" they are.

      This statement, along with others in this chapter, is strong evidence that this work may have inspired Niklas Luhmann to invent his iteration of the zettelkasten method of excerpting and making notes.

      (link to: Clemens /Heyde and Luhmann timeline: https://hypothes.is/a/4wxHdDqeEe2OKGMHXDKezA)

      Presumably he had heard of or seen others talking about or using these general methods during his undergraduate or law school years. Even with only scant experience, this line may have struck him as naming a significant organizational barrier of earlier methods.

      Why have notes strewn about in a box or notebook, as Heyde describes? Why spend the time indexing everything and then needing to search for it later? Why not take the time to actively place new ideas into one's box as close as possible to the ideas they directly relate to?

      But how do we manage this in a findable way? Since we can't index ideas based on tabs in a notebook or even notebook page numbers, we need some sort of handle on where ideas sit on slips within our box. The development of European card catalog systems had started in the late 1700s, and further refinements by Melvil Dewey, along with standardization, had come about by the early to mid-1900s. One could have used the Dewey Decimal System to index one's notes, using ever smaller decimals to intersperse cards indefinitely as the collection grows.

      But Niklas Luhmann had gone to law school and spent time in civil administration. He would have been aware of Aktenzeichen file numbers used in German law/court settings and public administration. He seems to have used a simplified version of this sort of filing system as the base of his numbering system. And why not? He would likely have been intimately familiar with its use and application, so why not adopt it, or a simplified version of it, for his own use? Because it's extensible in a branching tree fashion, one can add an infinite number of cards or files into the midst of a preexisting collection. And isn't this just the function Aktenzeichen file numbers served within the German court system? Incidentally these file numbers came into use around 1932, but were likely heavily influenced by the Austrian conscription numbers and house numbers of the late 1770s, which also influenced library card cataloging numbers, so the whole system comes right back around. (Ref Krajewski here.)

      (Cross reference/see: https://hypothes.is/a/CqGhGvchEey6heekrEJ9WA)
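
      To make the "extensible in a branching tree fashion" idea concrete, here is a minimal sketch (my own illustration in Java, with made-up helper names, not Heyde's or Luhmann's actual rules in full) of how alternating digits and letters let one insert new cards anywhere without renumbering existing ones:

      // Branching card identifiers: a child of "21" is "21a", a child of
      // "21a" is "21a1", and so on; siblings increment the final character.
      public class ZettelId {

          static boolean endsWithDigit(String id) {
              return Character.isDigit(id.charAt(id.length() - 1));
          }

          // Descend one level: append a letter after a digit, a digit after a letter.
          static String firstChild(String id) {
              return id + (endsWithDigit(id) ? "a" : "1");
          }

          // Next card at the same level (simplified: no carrying past '9' or 'z').
          static String nextSibling(String id) {
              int n = id.length() - 1;
              return id.substring(0, n) + (char) (id.charAt(n) + 1);
          }

          public static void main(String[] args) {
              System.out.println(firstChild("21"));               // 21a
              System.out.println(nextSibling(firstChild("21")));  // 21b
              System.out.println(firstChild(firstChild("21")));   // 21a1
          }
      }

      Any number of cards can thus be wedged between "21a" and "21b" as "21a1", "21a2", and so on, which is exactly what makes the numbering infinitely interleavable.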

      Other problems he may have been attempting to get around include the excessive work of additional copying described in this piece, as well as much of the additional work of indexing.

      One will note that Luhmann's index was much sparser than it would have needed to be without his methods. Often in books, a reader will find a reference or two in an index, go right to the spot they need, and read around it. Luhmann did exactly this in his sequence of cards. An index entry or two would send him to the general locale, and sifting through a handful of cards would place him in the correct vicinity. This results in a slight increase in time for some searches, but it pays off in massive time savings from not needing to cross-index everything onto cards as one goes, and it also dramatically increases the probability that one will serendipitously review related cards and potentially generate new insights and links for new ideas going into one's slip box.

    2. Oftentimes they even refered to one another.

      An explicit reference in 1931, in a section on note taking, to cross-links between entries in accounting ledgers. This linking process is a precursor to larger database processes seen in digital computing.

      Were there other earlier references that are this explicit within either note making or accounting contexts? Surely... (See also: Beatrice Webb's scientific note taking)


      The very phrase "digital" computing implies that there must have been an "analog" computing which preceded it. However, we think of digital computing in much broader terms than we may have thought of the analog process.

      Human thinking is heavily influenced by associative links, so it's only natural that we should want to link our notes together on paper as we've done for tens of thousands of years (at least).

    1. So it’s not just that screen time changes well-being, it’s also the other way around, that well-being might change screen time.

      Truth is, when I am in a good mood I do not use my phone...

    1. [Highlighted passage unrecoverable: the page text was captured as garbled OCR of what appears to be an interview excerpt.]

      It's so easy to return to your normal life after experiencing such horrific things because they're so distant, even though they're happening every day to people just like us. It's saying that after this show is over, you still won't do anything to make a difference. It's a challenge to check yourself.

    1. accessibility is a skill set it's not one of those things where you can learn it and then just say hey i'm done i'm now an accessible science communicator my job here is done it's a lifelong process where you need 00:46:29 to continuously learn and seek out material to make sure that the practices still are accessible and that social media and other things are not changing you can start small so maybe just by adding alt text challenge yourself to 00:46:42 say i'm going to add alt text to all of my images i post from now on this is just starting small and as you build one accessible practice it leads the way to having more confidence to include other accessible practices in the future

      "Accessibility is a skillset" is a very strong point. It is very easy to do something once and believe that it's is added to your identity, but it's not that simple. Going back in the Kearns reading I see parallels in the practice of inclusivity and accessibility because it will continuously change and will need to be altered. Just like how one interviewee stated she hates the word "inclusivity" because after they add oe POC into the mix they have reached their quota to be considered "inclusive". Accessibility, like inclusivity must be practice consistently and thoroughly.

    1. This one is for students. Did you know you can annotate articles on Snapchat? Yes, it’s true! All you need to do is take a screenshot of the article you want to annotate. Then, open up the image editor and paste the article. Once you've done th

      What a watermelon - I just made that up to say.

      Why did I not know this before... So I used to literally call out my students for Snap-chatting in class.

      If I had known about this educational use of Snap, I would have used it already.

    1. Here are some of the major benefits associated with headless:

      - Speed. Rolling out any new changes, new features, UI changes, business logic changes, promotions, cosmetic changes, is faster with headless. With a traditional architecture, small adjustments and minor tweaks to anything require testing major parts of the back-end to make sure everything is working properly.

      - More customization and personalization. Headless grants maximum customization freedom across the board, which allows the freedom to create industry-leading, personalized brand experiences.

      - The front-end is more accessible. Since front-end updates don’t have to be optimized for the back-end, they take less time to build and are cheaper to implement. Accessing, using, and updating the front-end no longer requires any advanced IT skills, so it’s easier to find people to do the job. You also no longer have to write JSPs to make cosmetic edits. You still can, but you can also use React and several other systems that are not usable under a traditional eCommerce model.

      - Integration of non-web channels. Through the focused use of APIs, brands can create a coordinated, seamless, and personalized brand experience across all channels. Future platform changes are easy as well. If Google Glass takes off, or Tik Tok comes out with a shopping feature, a shopping experience can be quickly created and implemented without changing the back end. Just plug it into an API and start selling.

      - Saved time and money across IT. Front-end changes no longer require significant IT support, so you save developers time on cosmetic adjustments. Commerce apps can be created and implemented faster than on monolithic eCommerce platforms. Quick changes can be made to the front or back-end without disturbing or taking resources away from the other side.

      - Room for experimentation. A headless structure allows your system to become much more open to experimentation. Marketers can test new designs without affecting the back-end. Developers can make changes and tests while customers are still making purchases. Brands can phase in innovations and prevent front-end errors in production environments.

      - Performance. When you control the front-end, you control performance. Shopify not loading fast enough? Tough cookies. They own the front-end and the servers. Your headless React code not loading fast enough? Just write better code and boost server performance.

      - Time to market. Businesses can swiftly introduce any and all front-end experiences with no back-end development required. Whether it’s reacting to a new trend, entering into a new device or channel, or adjusting to events like COVID-19, headless makes it as easy as possible.
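
      The "just plug it into an API" point can be made concrete with a minimal sketch (hypothetical endpoint and payload, not any particular vendor's API). Any channel, whether web, app, voice, or something like Google Glass, renders the same product data fetched from the same back-end commerce API:

      // A headless "channel" is just an API client: the back-end serves JSON
      // and does not know or care which front-end is asking.
      import java.net.URI;
      import java.net.http.HttpClient;
      import java.net.http.HttpRequest;
      import java.net.http.HttpResponse;

      public class HeadlessStorefront {
          public static void main(String[] args) throws Exception {
              HttpClient client = HttpClient.newHttpClient();
              HttpRequest request = HttpRequest.newBuilder()
                      .uri(URI.create("https://api.example-shop.com/v1/products/42"))
                      .header("Accept", "application/json")
                      .build();
              HttpResponse<String> response =
                      client.send(request, HttpResponse.BodyHandlers.ofString());
              // Each front-end renders this payload however it likes:
              // React on the web, a native view in an app, a card in a chat UI...
              System.out.println(response.body());
          }
      }
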
    1. Moving away from the front of the room also helps prevent behavioral issues from building

      I think there's something to be said for re-thinking how classrooms are laid out entirely. The entire concept of a front and back of the classroom assumes that everything important is happening only at one end. If there were a way to keep everyone equidistant from the teacher/presentation materials, that would be ideal. Not saying I know how to do that, just that it's something I've been thinking about.

    2. Leaving some space open leaves flexibility to respond to ideas and curriculum needs that emerge after the year is underway.

      While I agree that open spaces make a room more inviting as well as more productive - it's also important to think about how not all teachers/schools/classes have the luxury of open spaces. Many classes are packed so tight that it's hard to just walk between desks.

    1. Bob works for TechCorp and discovered a few years ago that using a tool installed from Homebrew results in a 90% speedup on an otherwise boring, manual task he has to perform regularly. This tool is increasingly integrated into tooling, documentation and process at TechCorp and everyone is happy, particularly Bob. Bob receives a good performance review

      Directly related to a question I posed a few years ago about who should really be funding open source. My conclusion: professional developers who are most directly involved with how the source is put to work—and who benefit from this (in the form of increased stature, high salaries and bonuses, etc., in comparison to the case where the FOSS solution hadn't been available). This runs counter to the popular narrative that frames the employer as a "leech" while remaining silent on the social and moral obligations of the employee who successfully captured value for personal gain.

      It's like this: the company has some goal (or "roadmap") that involves moving forward from point A to point B. The company really only cares about arriving at the desired destination B. They negotiate with a developer, possibly one who has already signed an employment contract, but someone who is made aware of the task at hand nonetheless. The developer agrees to do the work meant to advance the company towards its goals, which potentially involves doing everything manually—that is, handling all the work themselves. They notice, though, that there is some open source software that exists and that can be used as a shortcut, which means they won't have to do all the work. So they use that shortcut, and in the end their company is happy with them, and they're rewarded as agreed (not necessarily at the end, but rewarded nonetheless with e.g. regular paychecks, but also possibly receiving a bonus), and they advance in their career. Who's extracted value from the work of the open source creator/maintainer here? Is it really just the company?

      McQuaid seems to agree with my view, going by the way he (later) identifies both Bob and TechCorp as benefitting from Reem's work; cf https://hypothes.is/a/MBN0aDnuEe2aF8s2kWTPrg

    1. Anders Hejlsberg: Let's start with versioning, because the issues are pretty easy to see there. Let's say I create a method foo that declares it throws exceptions A, B, and C. In version two of foo, I want to add a bunch of features, and now foo might throw exception D. It is a breaking change for me to add D to the throws clause of that method, because existing caller of that method will almost certainly not handle that exception. Adding a new exception to a throws clause in a new version breaks client code. It's like adding a method to an interface. After you publish an interface, it is for all practical purposes immutable, because any implementation of it might have the methods that you want to add in the next version. So you've got to create a new interface instead. Similarly with exceptions, you would either have to create a whole new method called foo2 that throws more exceptions, or you would have to catch exception D in the new foo, and transform the D into an A, B, or C. Bill Venners: But aren't you breaking their code in that case anyway, even in a language without checked exceptions? If the new version of foo is going to throw a new exception that clients should think about handling, isn't their code broken just by the fact that they didn't expect that exception when they wrote the code? Anders Hejlsberg: No, because in a lot of cases, people don't care. They're not going to handle any of these exceptions. There's a bottom level exception handler around their message loop. That handler is just going to bring up a dialog that says what went wrong and continue. The programmers protect their code by writing try finally's everywhere, so they'll back out correctly if an exception occurs, but they're not actually interested in handling the exceptions. The throws clause, at least the way it's implemented in Java, doesn't necessarily force you to handle the exceptions, but if you don't handle them, it forces you to acknowledge precisely which exceptions might pass through. It requires you to either catch declared exceptions or put them in your own throws clause. To work around this requirement, people do ridiculous things. For example, they decorate every method with, "throws Exception." That just completely defeats the feature, and you just made the programmer write more gobbledy gunk. That doesn't help anybody.

      The issue here seems to be one of transitivity. If method A calls B, which in turn calls C, then if C adds a new checked exception, B needs to declare it even if it is just proxying it and A is already handling it via try/finally. This seems like an inference problem to me: if method B could dynamically infer its checked exceptions, this wouldn't be as big of an issue.

      You also probably want effect polymorphism for the exceptions so you can handle them for higher-order functions.
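
      To make the versioning break concrete, here is a small sketch (hypothetical exception types A through D; my own illustration, not from the interview) of the workaround Hejlsberg describes: version 2 of foo gains a new failure mode D, but since adding "throws D" would break every existing caller, it catches D internally and transforms it into an already-declared type:

      class A extends Exception { A() {} A(Throwable cause) { super(cause); } }
      class B extends Exception {}
      class C extends Exception {}
      class D extends Exception {}

      class Library {
          // The published contract must stay "throws A, B, C" forever.
          static void foo() throws A, B, C {
              try {
                  newFeature();
              } catch (D d) {
                  throw new A(d);  // transform the new D into a declared A
              }
          }

          private static void newFeature() throws D {
              throw new D();
          }
      }

      class Client {
          public static void main(String[] args) {
              try {
                  Library.foo();
              } catch (A | B | C e) {
                  // callers written against version 1 still compile and run
                  System.out.println("caught: " + e + ", caused by: " + e.getCause());
              }
          }
      }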

    1. figures, and many contain shapes like waves. The annotations are written in an ink that changes now that it’s exposed to air and sun: the inscriptions—signs, lines, circles, arrows, figures—boil on the surface

      Just as the words are not translatable (they change based on environment, based on context), the ocean is also unknown, untranslatable.

    1. Even if you don’t remember all those details, just know that metacognition is understanding your thought processes and emotions and the patterns behind them. It’s the highest level of mentalisation — an ability that is part of what makes us human.

      Before throwing yourself "kamikaze"-style into studying something, put together a plan that, considering all the aspects (internal and external to you), guides you toward the best way to do the learning.

    1. Thirteenth Amendment: Abolition of Slavery. Section 1: Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.

      where are you? did they ever have the perception of a right to free communication or privacy?

      i am sure that it's not really barred, though we've seen with our own eyes that someone "had to remind me" that they told me on purpose that it was. i have taken steps to ensure that this will come as no surprise to anyone;

      i think we'd be smarter-looking if i was happier with this world. i'm not totally unhappy with it. i just miss thinking i was surrounded by people, and i was until that guy beat all the tennis scores. i mean golf

      jack

  2. mail-attachment.googleusercontent.com
    1. to understand the relationship between feminism and revolution is it enough to only get the women's side of the story? Why did the author choose not to interview any men, or at least a few, to understand the way they viewed the relationship between revolution and feminism?

      Okay, look, I sort of get the sentiment, but at the same time... are you kidding me??? There are so many interviews of men at war and male revolutionaries (hell, they were allowed to publish books on the matter when women couldn't even go to school in many places) that the question sounds a little ridiculous. And if no good changes were being made, do you really think men were thinking about this? It's the whole act of pretending two sides deserve the same weight in a story when one group has far more at stake than the other, and that other group has gotten so many chances to talk and write and act. Maybe she just wanted to talk to women; people don't question it as much when you only talk to men. It's one book where she didn't interview them.

    1. Almost all good writing begins with terrible first efforts. You need to start somewhere. Start by getting something -- anything -- down on paper. A friend of mine says that the first draft is the down draft -- you just get it down. The second draft is the up draft -- you fix it up. You try to say what you have to say more accurately. And the third draft is the dental draft, where you check every tooth, to see if it's loose or cramped or decayed, or even, God help

      No matter your level of experience, according to Lamott, when you start something new, you may feel as though you are yanking teeth.

    1. The reason for this is that a major way humans lose heat is by the evaporation of sweat. It takes energy to evaporate sweat, just like it takes energy to boil water. When sweat evaporates from their skin, humans have to give up energy and this cools them off. When the surrounding air is dry, it’s easy to evaporate water from sweat into it and it’s easy for humans to cool off

      Wet-bulb temperature is related to the fact that when the surrounding air is dry, it is easier to evaporate water from sweat, so the body can shed heat with less strain.
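
      For a rough sense of scale (standard physics, not from the article itself): the heat carried away by evaporating a mass $m$ of sweat is set by the latent heat of vaporization,

      $$Q = m \, L_v, \qquad L_v \approx 2.4\ \mathrm{MJ/kg}\ \text{at skin temperature},$$

      so evaporating even one gram of sweat removes roughly $2.4\ \mathrm{kJ}$. Humid air slows that evaporation, which is exactly what a high wet-bulb temperature measures.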


    1. To find out, I spoke with Maryellen MacDonald, a cognitive psychologist at the University of Wisconsin-Madison, who studies how the brain processes language. She said that even though your brain knows the grammar rules, other forces override that knowledge. The brain doesn’t just store words like a dictionary does for easy retrieval, it’s more of a network. You start with a concept you want to express and then unconsciously consider several options from its associative grouping

      It's really interesting to find out how our brain truly processes these words and why we make those annoying mistakes.

    2. I am a writer, which is why it’s particularly embarrassing that I sometimes type the word “right” when I mean to type “write.” Shouldn’t I know better?

      My brain constantly gets those mixed up, just like I get "to" and "too" mixed up as well.

    1. Thirteenth Amendment | Browse | Constitution Annotated | Congress.gov | Library of Congress

      I say so many "gibberish like things" that remind me of "VENCD in Savannah" that OECD it's hard to "quid pro quo" that I feel as if I am aflight and lighted from on high, "in the Spirit" is what I read it as ..

      I hope my strange way of communicating hasn't contributed or caused the lack of response to it in this place. This is a very important task we have here; and I have to make sure we understand that even if there is nobody else "just here" I will need to see it "here too"

      on the constitution speaking, it does. so does "the doll of our heart" and i have heard that the israeli's are coming around to "gnoshing" the coptic seal of enlightenment

      ... they gnashed ... said that book--that they would curse the name of god in this day. fuck i hope it's not that bad. :) Reaffirming that I believe the Catholic Church and the Gutenberg Bible show us that Christianity has gone far and away out of its way to help us understand "Alexa Glasses" and the need for "interpretation in the vernacular and the lingua franca" ...

      I mean bringing literacy to the galaxy is a big deal. It's like "bringing instant oligarch status" to the kids who are staring at me wondering if I think they too "must be Julian"

    1. This was my mom’s shirt. She got it in Paris in the late ‘70s, I think from some discount designer shop where they cut the labels out. I wore it in high school until I busted out the side seam, when I got too big for it. I chose this because it’s a reminder that clothes mean so much just as artifacts: They’re these souvenirs that get to function, you don’t have to put them in drawers, you can keep using them, but with all these memories attached. That’s my highest hope for the clothing I make.

      clothes retain so much history. are there any other objects that can contain as much personal history as them? by necessity they are “intimate”

    2. Blackbird Spyplane: I like that you use the word ‘weight,’ because there’s a false sense of weightlessness to how we tend to encounter clothes. Especially online, where you can tap on some .jpgs and a garment arrives at yr door like magic. So much of the consumer culture is about making us treat clothes like they just materialized into existence. Evan Kinori: “The shift, just in our generation, has been so extreme, even from what was considered ‘fast fashion’ in the ‘90s — that was wholesome compared to what happened when H&M and Zara blew up. Things got so much faster, so much more about making us clueless about how things came to be in front of us, focused entirely on the consumption end of the experience. In the ‘50s there were, like, Home Ec classes telling you how to darn socks or sew on a button. It’s sad that no one knows how to do things like that anymore — a crazy success of capitalism.”

      Digitization, capitalization and fast fashion remove a sense of “weight” when thinking/feeling about clothes (metaphorically, literally).

    1. Essentially, this is the great trick of pragmatic naturalism. And like many such tricks it unravels quickly if you simply ask the right questions. Since the vast majority of scientists don’t know what inferentialism is, we have to assume this inventing is implicit, that we play ‘the game of giving and asking for reasons’ without knowing. But why don’t we know? And if we don’t know, who’s to say that we’re ‘playing’ any sort of ‘game’ at all, let alone the one posited by Sellars and refined and utilized by the likes of Ray? Perhaps we’re doing something radically different that only resembles a ‘game’ for want of any substantive information. This has certainly been the case with the vast majority of our nonscientific theoretical claims.

      Ray Brassier: norms are not real, but they are necessary for doing science, because only agents that can give reasons, and be moved by reasons, can do science. They must play the game of giving and receiving reasons. To play the game, they must follow norms as if they were real.

      This is "inferentialism".

      Bakker: Fancy, but scientists don't know what "inferentialism" is. They simply take many courses and practice their craft for years, and they end up doing science. You, a philosopher, look at what they do and describe it as a game of giving and receiving reasons, but maybe that's just a superficial theory.

    1. 4) Why does this reification come to dominate again and again? Because of PIA, once again. Absent any information regarding the informatic distortion pertaining to all reflection on conscious experience, symptosis must remain invisible, and that reflection must seem sufficient.

      Heidegger: Something about Aristotle, probably.

      Bakker: Because it's an unknown unknown. Blind-anosognosic brains are not only blind, they don't have the mental representation of their blindness, and thus default to assuming that they are not blind.

      Brains in general are blind to their blindness. It takes a very long time and external shocks to even get the brains to see their blindness. Just think, brains have been introspecting for thousands of years, but it took until 1977 for introspective blindness to be conclusively experimentally demonstrated!

    2. 3) Why is being ‘initially’ ‘conceived’ in terms of what is objectively present, and not in terms of things at hand that do, after all, lie still nearer to us? Because conceptualizing being requires reflection, and reflection necessitates symptosis.

      Why did people usually think of Being as a thing like tables and chairs?

      Heidegger: It's all Aristotle's fault. He introduced the wrong ideas, like treating Being as a mere thing ("reifying Being"). We should go back to the Pre-Socratics, like Heraclitus.

      Bakker: Because when brains do philosophy-of-mind and philosophy-of-existence (other philosophies, like philosophy-of-physics, don't need to involve reflection), they do reflective thinking, and when brains reflect, they are really simulating their own behavior with a simplistic model, the same way they simulate other things like tables and chairs. This means that Being is modelled just like tables and chairs, and it's no surprise that philosophers assumed that Being is a thing, too.

      It is like how introspection "what am I feeling?" is really just a special kind of guessing at what other people are thinking and feeling -- introspection is just extro-spection turned to yourself.

    1. Posted by u/jackbaty, 4 hours ago. Card sizes: I've been on-again/off-again with paper for PKM, but one thing remains consistent each time: I don't enjoy using 4x6 index cards. I much prefer 3x5-inch cards. I realize that it's irrational, but there it is. My question is if I dive into building an antinet, will I regret using 3x5 cards? I already have hundreds of them. I have dividers, holders, and storage boxes for them. I just prefer how they _feel_, as weird as that sounds. I'd like to hear if people are using 3x5 cards successfully or if you've come to regret it.

      While it may be slightly more difficult to find larger metal/wood cases for the 4x6 or 5x8 cards, it's a minor nuisance and anyone who wants them will eventually find the right thing for them. Beyond this, choose the card size that feels right to you.

      If you don't have an idea of what you need or like, try things out for 10-20 cards and see how it works for you, your handwriting size, and general needs. People have been using 3x5, 4x6, and even larger for hundreds of years without complaining about any major issues. If Carl Linnaeus managed to be okay with 3x5, which he hand cut by the way, I suspect you'll manage too.

      Of course I won't mention to the Americans the cleverness of the A6, A5, A4 paper standards, which allow you to fold a larger size in half to get exactly the next smaller size down. Then you might get the benefit of the smaller size as well as the larger, which could be folded into your collection of smaller cards; you just have to watch out for accidentally wrapping ("taco-ing") a smaller card inside of a larger one and losing it. I suppose you could hand cut your own 5" x 6" larger cards to do this if you found that you occasionally needed them.
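
      The arithmetic behind that folding trick (standard ISO 216 facts, added here for illustration): every A-size sheet has aspect ratio $\sqrt{2}$, the unique ratio preserved when the long side is halved, since requiring $h/w = w/(h/2)$ forces $h^2 = 2w^2$. Concretely,

      $$\text{A4} = 210 \times 297\ \mathrm{mm} \;\rightarrow\; \text{A5} = 148 \times 210\ \mathrm{mm} \;\rightarrow\; \text{A6} = 105 \times 148\ \mathrm{mm},$$

      which is why a folded A5 slots exactly among A6 cards.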

      For the pocketbook conscious, 3x5 does have the benefit of lower cost as well as many more options and flexibility than larger sizes.

      At least commercial card sizes are now largely standardized, so you don't have to deal with changing sizes the way Roland Barthes did over his lifetime.

      My personal experience, and a long history of manuals on the topic saying "cards of the same size," indicate that you assuredly won't have fun mixing different-sized slips together. I personally use 3x5" cards in a waste book sense, but my main/permanent collection is in 4x6" format. Sometimes I think I should have done 3x5, but it's more like jealousy than regret, particularly when it comes to the potential of a restored fine furniture card catalog. But then again...

    1. It’s the same with these hypothetical mass extinctions, as if that’s anything to do with climate change. It’s just opportunistic cherry picking by these cynical and manipulative scientists. Harsh fact is, a lot of species go extinct, but that’s just nature. Dinosaurs went extinct millions of years before we humans ever appeared, are we supposed to take the blame for that too? Doesn’t mean I’m happy that we lost the elephants, tigers, pandas, salamanders, cows and poodles, but I don’t feel guilty about it either.

      The author is mainly blaming scientists.

    2. Climate change is a myth. We all know this, deep down. Some of you reading this may have been taken in by the fear-mongering governments or corrupt scientists so have been brainwashed into thinking climate change is a real thing that “threatens all of humanity” or some other nonsense, but it’s just that: nonsense. When you look closely at it, the so-called evidence for climate change, or “global warming” or “warmageddon” or “planetary death spiral” or whatever they’re calling it these days, it doesn’t stand up to scrutiny.

      Overall in this paragraph it seems that the person who published this is assuming a lot, which in my opinion I do not like.

    3. What’s more likely; that human industrial activity actually does lead to climate change, or that it’s all a massive meticulous centuries-long ruse to convince people that leaving Earth is a good idea?

      God, this is so real, though. The depths of conspiracy that people will go to in order to confirm their pre-held beliefs. Climate change deniers are really just the tip of the iceberg (ha), but once you get into flat earthers and QAnon stuff it's really quite appalling what people are willing to believe, and how wrapped up they can get in these warped models of reality.

    4. Climate change is a myth. We all know this, deep down. Some of you reading this may have been taken in by the fear-mongering governments or corrupt scientists so have been brainwashed into thinking climate change is a real thing that “threatens all of humanity” or some other nonsense, but it’s just that: nonsense. When you look closely at it, the so-called evidence for climate change, or “global warming” or “warmageddon” or “planetary death spiral” or whatever they’re calling it these days, it doesn’t stand up to scrutiny.

      I assume that because this piece is in The Guardian this is meant to be some kind of satire. It seems to be mocking the ridiculousness of climate denial by presenting it firsthand in a stupid light.

    1. eighth grader Joslyn Diffenbaugh formed a banned book club last fall that began with a reading of George Orwell’s “Animal Farm.”

      Yeah, this bugs me. I loved reading Animal Farm.

      I would like to play devil's advocate for a moment, however. There are some good examples of stupid banning attempts in this article. However, I think it's important to also consider that 1) there are stupid people everywhere, and it stands to reason that somewhere, someone is going to attempt to ban a book that shouldn't be banned, and 2) since anyone can find a handful of examples to support any stance, it is worth having more data to discern just how widespread issues like these book bannings are. For example, I would like to know a) how many public schools there are in the USA, and b) of that number, what percentage of those schools are having book banning issues similar to the cases brought up in this article. I think having this data could help us gauge just how widespread this issue is.

      I'm not saying book banning isn't an issue we need to be concerned about (this Animal Farm case clearly suggests we should be concerned), but I would like to see the data in order to better understand the situation.

    1. u can be certain of in this world, it’s that your hand is your hand,” says Ehrsson. Yet Ehrsson’s illusions have shown that such certainties, built on a lifetime of experience, can be disrupted with just ten seconds of visual and tactile deception. This surprising malleability suggests that the brain continuously constructs its feeling of body ownership using information from the senses — a finding that has earned Ehrsson publications in Science and other top journals, along with the attention of other neuroscientists.

      official concept that he is toying with

    1. To put it another way, gamification, echo chambers, and moral outrage porn go together like junk food. Different kinds of junk food are unhealthy in different ways—some are too high in salt, some too high in fat, some too high in sugar. But the reason they are often consumed together is that they are all likely to be consumed by somebody who is willing to trade off health and nutrition in return for a certain kind of quick pleasure. The same is true of gamification, moral outrage porn, and echo chambers. They are all readily available sources of a certain quick and easy pleasure, available to anybody willing to relax with their moral and epistemic standards.

      This is bad analysis re: junk food, and it is bad analysis re: instrumentalization. Oh, yes, it's all just down to individuals' backbones!

    1. Electric cars take lithium for batteries—but there’s enough lithium just in the known resources for three billion cars, and at the moment we only have 800 million.”

      Although this gives the sense that we have an abundance of resources, it's hard to access them. Many metals we need reside on the seabed (at the bottom of the Pacific Ocean), and it would be environmentally damaging to mine there. Knowing the resources exist is one thing; retrieving them is another.

    1. Coincidentally, it's quite possible that Google's LaMDA, the "sentient" chatbot, utilizes the same approach. Google holds a patent titled "Forming chatbot output based on user state" which feeds on the "digital exhaust" of your IRL activity to train the bot's neural network

      LaMDA is just like Nina Tucker !?!?!!?!?

    1. most intriguing to me was the discovery which even today some 00:23:13 archaeologists deny but the evidence is actually overwhelming that oceans were no barriers to erectus they sailed across oceans so this is a quote from a 00:23:24 very good book on Paleolithic Stone Age seafarers Paleolithic books our ancestors have often been painted as unintelligent brutes however this simply is not the case evidence suggests that at least homo erectus and perhaps even 00:23:37 pre erectus hominids were early seafarers based on this evidence it seems that our early ancestors were successful seafarers biological studies suggest that considerable numbers of founder populations so when we find 00:23:50 evidence of erectus tools on an island there had to have been 2250 erectus arrived they're more or less the same time it's not just that one erectus got there we also know and I'll go into this 00:24:03 that they didn't just wash ashore it would have been almost impossible some archaeologists suggest that they got there by tsunamis but when I talked to friends of mine who are earth scientists they say that's not how 00:24:17 tsunamis work you know the tsunamis are pushing water to land and it is possible that afterwards some things flow out but most of the energy is towards the land and it is true that a few animals have 00:24:30 made it but we don't find regular systematic colonization by humans waiting to ride tsunamis most people don't try to do that

      !- homo erectus : was a seafarer

    1. one of the i guess stranger or slightly scarier conclusions proposed by the scientists in the study is that a lot of the signs we see in the geological history of the planet kind of resemble what we are now observing in 00:08:49 the anthropocene the period when humans started to dominate the globe or basically how the modern climate change to some extent actually resembles a lot of sudden changes that did happen in the last few millions of years with one 00:09:03 specific type of an event currently still unexplained by scientists often referred to as hyperthermals the sudden increase in temperature that usually lasts for a few thousand years but happens in a very short period of time 00:09:15 and currently doesn't have a very definitive explanation but one very well known example we explored in the past in one of the videos should be in the description known as p-e-t-m paleocene using thermal maximum the event that 00:09:28 happened approximately 55 million years ago when the global temperature suddenly increased by 5 to 8 degrees celsius or 9 to 14 degrees fahrenheit just to then drop suddenly within a few thousand years and there's been quite a lot of 00:09:41 explanations for what might have happened maybe an asteroid collision that released huge amounts of co2 gas or a lot of other gases that usually warm up the planet maybe volcanic eruptions doing the same but at the moment there's 00:09:53 just not enough clear evidence specifically craters or any volcanoes that were produced during this time to suggest any specific explanation on the other hand all of these observations kind of resemble what's happening to the planet right now as well and so trying 00:10:06 to figure out exactly what caused these warming conditions is one of the potential ways we could start assessing the hypothesis and try to answer these some of the difficult questions also likewise any kind of alloying industrial 00:10:17 civilization should maybe produce very similar effects on their planet as well at least that's what the modern science expects but what are some of the questions we can ask ourselves in regards to the history of our planet in 00:10:29 order to see how viable the hypothesis is well first of all generally speaking the geological record of our planet is usually very incomplete and it becomes even more difficult to study it as you go back in time for example today we 00:10:42 know that only about three percent of the entire surface of the planet has any kind of urban activity on the surface or basically anything that would potentially resemble modern technology and so the chance for a lot of these cities to survive for thousands or even 00:10:55 millions of years is exceptionally low which also means that within just a few thousand years the chance for discovering these techno fossils for some future humans living here is also going to be pretty low as well you can 00:11:08 kind of see that there are some bottles here and a lot of other leftovers but all this is going to disappear in time turning into nothing but almost completely indistinguishable sediment and for any major city to leave any mark 00:11:20 inside the sediment it really depends on where it's located if it's located on the subsiding plate it might eventually become sediment and get locked inside rock leaving behind certain marks but if 00:11:31 it's on the rising plate or if it's somewhere in the middle everything here might eventually be eroded with time by different types of rain and wind especially as the rain becomes more 
acidic leaving pretty much nothing 00:11:43 behind on the other hand when it comes to things like for example dinosaur fossils we usually discover one fossil for every 10 000 years with the footprints of dinosaurs being even more rare but even though humans have been on 00:11:56 the planet for at least 300 000 years the civilization that we're used to has only been around for just over a few hundred years and technically even less than that and so the chance for something from our modern civilization 00:12:09 to turn into a fossil that can be discovered in the future is actually super low and so right now there's a really big chance that after a few million years everything we take for granted is going to look like this and 00:12:21 so the natural question here is how do you then tell if any intelligent species ever existed on the planet like we do right now well we might be able to distinguish certain sedimentary anomalies even present in the sediment 00:12:34 today and then combine this with observations of various hyperthermals or any other major changes in the temperature that don't seem to have any other explanation for example all the technofossils are going to leave behind 00:12:46 a very specific isotopic ratio that's extremely difficult to find in nature including of course residues of carbon that doesn't exist in nature such as various microplastics these should linger for quite a while there should 00:13:00 also be geological record of a major extinction event that doesn't really have any good explanations also signs of unusual chemicals that are generally not produced in nature either for example things like cfcs or more 00:13:13 specifically various types of transuranic isotopes from nuclear fission which obviously all the chemical signs we're currently living on the planet by doing a regular stuff 00:13:24 by being ourselves and so we kind of expect something similar could have happened in the past there could have been another species that was basically exploiting the planet and as a result left some signs of this in the 00:13:37 geological record but that's of course assuming that any civilization is going to have a lot of very specific needs in terms of energy and a lot of civilizations are going to eventually result in similar types of pollution 00:13:49 naturally a pretty big assumption but it's really the only assumption we have right now in order to figure out if this hypothesis has any merit but even here there's still a major problem if this type of a civilization has not existed 00:14:02 for longer than let's just say a few hundred years the chance for it to even leave any kind of a mark and that includes the fossil mark is still pretty low and even things like microplastics or things like transuranium elements 00:14:15 might have already mixed with a lot of other stuff or disappeared completely especially if this happened a long time ago and so if these ancient civilizations ever existed and if they managed to somehow change the planet in the past the signs of their existence 00:14:28 would still be extremely difficult to discover with the only signs left after millions of years really just being various types of isotopes that could still be out there in the sediments on the planet

      !- in other words : natural geological earth processes could render the artefacts of previous advanced civilizations undetectable

    2. we can kind of make an assumption that 00:04:22 complex brains and by extension complex intelligence should also be somewhat common in terms of evolutionary success and assuming that it's evolutionary preferential or basically that evolves many times throughout the history of the 00:04:35 planet we can then make a conjecture that it should exist somewhere out there where life exists on other planets okay just to rephrase this if we truly believe that extraterrestrial intelligence exists out there and that 00:04:48 it kind of evolved in the same way that it evolved here on planet earth it's pretty safe to assume that it might have evolved several times on the planet because we're making an assumption here that this is an evolutionary advantage 00:05:00 that all planets that potentially have life on them are going to end up with some kind of a species that's going to become super intelligent and that's going to be self-aware able to use technology and essentially kind of communicate in the same way that we 00:05:13 communicate using for example radio waves

      !- in other words : there should be signs of complex intelligence like ours in the paleontological records

    1. Author Response

      Reviewer #1 (Public Review):

      GCaMP indicators have become common, almost ubiquitous tools used by many neuroscientists. As calcium buffers, calcium indicators have the potential to perturb calcium dynamics and thereby alter neuronal physiology. With so many labs using GCaMPs across a variety of applications and brain regions, it's remarkable how few have documented GCaMP-related perturbations of physiology, but there are two main contexts in which perturbations have been observed: after prolonged expression of a high GCaMP concentration (common several weeks after infection with a virus using a strong promoter); and when cytoplasmic GCaMP is present during neuronal development. As a result, GCaMP studies are often designed to avoid these two conditions.

      Here, Xiaodong Liu and colleagues ask whether GCaMP-X series indicators are less toxic than GCaMPs. GCaMP-X indicators are modified GCaMPs with an additional N-terminal calmodulin binding domain that reduces interactions of the calmodulin moiety of GCaMP with other cellular proteins. Xiaodong Liu and colleagues document effects of GCaMP expression on neuronal morphology in vitro, calcium oscillations in vitro, and sensory responses in vivo, in each case showing that GCaMP-X indicators are less toxic. Their results are compelling.

      Unfortunately, the paper suffers two main weaknesses. Firstly, the results demonstrate that GCaMP is toxic during development, after prolonged expression via viruses in vivo, and in cell culture where maturation of the culture likely recapitulates key steps in development. GCaMPs are known to be toxic in these circumstances, such toxicity is readily circumvented by driving expression in the adult, and there are countless examples of studies in which adequate GCaMP expression was achieved without toxicity. These new results are of little relevance to the majority of GCaMP experiments. That GCaMP-X indicators are less toxic during development is a new result and may be of interest to those who wish to deploy calcium indicators during development, but this is a relatively small number of neuroscientists.

      We thank the reviewer for providing valuable opinions on these critical matters. Here, we would like to clarify:

      1. In our work, the status of neurites (length, branching, etc.) is indeed one main aspect to monitor, and neuritogenesis during the early stages of development is known to have temporal trajectories with ample dynamic range, which is helpful for quantitatively comparing GCaMP-X versus GCaMP. However, the key factor is the actual time and level of probe expression in neurons, and the starting timepoint of expression could vary. We have conducted additional experiments using virus-infected neurons (Figure 5—figure supplement 1) and transgenic neurons with inducible expression (Figure 7—figure supplement 3), both starting to express the probes at the mature stage. Thus, GCaMP-X imaging is not necessarily limited to developing neurons. As in the original reports of GCaMP probes with toxicity, virus injection was performed for both immature (2-3 weeks, Tian 2009 PMID: 19898485) and mature mice (~2 months, Chen 2013 PMID: 23868258). According to the protocol (Huber 2012 PMID: 22538608), GCaMP virus injection was done for adult mice (>2 months), which exhibited functional and morphological deficits in nucleus-filled neurons beyond the OTW (Figure 2, Figure 5 and Figure 6). Collectively, the central principles of GCaMP-X versus GCaMP are applicable to both immature and mature neurons.

      2. Chronic GCaMP-X imaging has a broad spectrum of potential applications, not limited to neural development (Resendez 2016 PMID: 26914316). As mentioned, GCaMP-X resolves the problem of longitudinal expression thus making chronic imaging more feasible. We agree with the reviewer that a large body of our data in the original version focused on the characteristics of calcium signals during the early stage of neuronal development, which served as an exemplary scenario to compare GCaMP-X with GCaMP. Indeed, the importance of Ca2+ oscillation in neural development is commonly accepted (Kamijo 2018 PMID: 29773754; Gomez 2006 PMID: 16429121). In vivo Ca2+ imaging (Figure 2 and Figure 5) and morphological analyses (e.g., Figure 6) have extended the major conclusions onto mature neurons where dysregulations of Ca2+ oscillations are also tightly coupled with neuronal health or death/damage. Importantly, GCaMP-X paves the way to unexplored directions previously impeded or discouraged due to GCaMP perturbations, e.g., chronic imaging of cultured neurons to concurrently monitor Ca2+ activities and cell morphology as in this study.

      3. To circumvent the toxicity of GCaMP is not a trivial procedure for viral infection. The expression levels need to be carefully adjusted experimentally, e.g., by dilution studies (Resendez 2016 PMID: 26914316). A delicate balance of GCaMP expression is critical: a low level (or short time) of expression would result in weak signals and poor SNR, whereas a high level (or long time) of expression would cause nuclear filling and neural toxicity. Even under the work-around conditions of time window and dilution dosage, nucleus-filled neurons are not uncommon, judging by the expression/fluorescence patterns, e.g., in the original reports of GCaMP6 (Supplementary Figure 7, Chen 2013 PMID: 23868258) and GCaMP3 (Supplementary Figure 11, Tian 2009 PMID: 19898485). Under particular conditions (subtypes of neurons, time window of imaging, dosage of virus injection, etc.), many neurons could be found without apparent perturbation/nuclear filling to proceed with calcium imaging. Using GCaMP-X, dosage is less restricted (10-fold higher concentration for GCaMP-X with improved SNR and overall performance in Figure 2, Figure 5 and Figure 6). Practically, GCaMP-X is a simple solution for the issues related to excessive/prolonged expression. Also, GCaMP-X is expected to help maintain the total number of healthy neurons and thus the general health of the brain. Reportedly, some GCaMP lines of transgenic mice exhibit epileptic activities (Steinmetz 2017 PMID: 28932809), awaiting future studies to explore whether GCaMP-X could help.

      4. As the reviewer pointed out, the key of GCaMP-X is to resolve the unwanted (apo)GCaMP binding to endogenous proteins in neurons. We agree with the reviewer that according to the empirical observations the following factors appear to increase the severity of GCaMP perturbations: prolonged time, high concentration and nuclear accumulation. GCaMP-X is able to protect GCaMP from unwanted binding and the consequent damage to neurons, validated by various tests thus far (in vitro and in vivo). In this context, the prolonged time would result in higher GCaMP concentration, meanwhile accumulating the effects due to GCaMP interactions; higher GCaMP concentration would interfere with more binding events and targets of endogenous CaM; and enhanced/prolonged expression of GCaMP is directly correlated with nuclear accumulation, a hallmark of neuronal damage.

      Secondly, the authors extend their claims to conclude that GCaMP indicators are toxic under other circumstances, claims supported by neither their results nor the literature. To provide one example, at the end of the introduction is the statement, 'chronic GCaMP-X imaging has been successfully implemented in vitro and in vivo, featured with long-term overexpression (free of CaM-interference), high spatiotemporal contents (multiple weeks and intact neuronal network) and subcellular resolution (cytosolic versus nuclear), all of which are nearly infeasible if using conventional GCaMP.' The statement's inaccurate: there are many chronic imaging studies in vitro and in vivo using GCaMP indicators without nuclear accumulation of GCaMP or perturbed sensory responses. There are more examples throughout the paper where the conclusions overreach the results and are inaccurate. The results are simply insufficient to support many of the strong statements in the paper.

      Overall, the criticisms and suggestions of the reviewer have been well taken and we have revised the text accordingly. For this particular paragraph mentioned by the reviewer, we want to clarify that it was the summary of our results in the whole manuscript, where each claim referred to the data and analyses shown in corresponding figures. In detail, these figures were: "free of CaM-interference" (Figure 1), "multiple weeks and intact neuronal network" (in vitro: Figure 3 and Figure 4; in vivo: Figure 2, Figure 5 and Figure 6; transgenic neurons: Figure 7) and "cytosolic versus nuclear" (Figure 1 and the previous Figure 8). The last sentence, "all of which are nearly infeasible if using conventional GCaMP," was meant to summarize the results comparing GCaMP versus GCaMP-X in our experimental settings of chronic imaging with prolonged/excessive probe expression. Again, we agree that for particular experimental settings and purposes the toxicity of GCaMP can be circumvented empirically. To avoid miscommunications, we have revised this paragraph by moving it to the Discussion (after all the data), also ensuring that the statements on GCaMP are backed up with data or literature. Please also see Essential Revisions, Item 3.

      Reviewer #2 (Public Review):

      Geng and colleagues provide further evidence for the lower neuronal toxicity of their improved GECI, GCaMP-X, which allows improved recordings of Ca2+ signals in neurons. As reported previously and studied in more detail here, the improved properties are primarily due to a lower tendency of GCaMP-Xc (reporting cytosolic Ca2+) to enter the nucleus. They present a systematic comparison of their cytosolic or nucleus-targeted GCaMP-Xc (and -Xn) with the corresponding "conventional" GCaMPs (jGCaMP7b, GCaMP6m). They again confirm the absence of apoGCaMP-X binding to the CaM binding domain of Cav1.3 L-type Ca2+ channels, suggesting that this is the main or one of several GCaMP interactions leading to altered intracellular signaling affecting neuronal survival, development and architecture. Evidence for more (likely) physiological Ca2+ responses was obtained from a battery of experiments, including in vivo recordings of acute sensory responses after viral expression of GCaMPs, monitoring of long-term calcium oscillations in cultured neurons, correlations of measured Ca2+ oscillations with hallmarks of neuronal development (soma size, neurite outgrowth/arborizations), and long-term recordings of spontaneous Ca2+ activities in vivo in the S1 primary somatosensory cortex. The latter experiments also showed that much higher doses of AAV-GCaMP6m-Xc could be administered than of GCaMP6m. They also show unfavorable effects of GCaMPs on neurons of adult GCaMP-expressing transgenic mice, both in slices and in cultured neurons. While most experiments aim at demonstrating improved performance of GCaMP-X, one finding also provides potential novel insight into the role of neuronal activity patterns during neuronal development in culture. Assuming more undisturbed physiological Ca2+ signaling even through longer time periods, they can follow different Ca2+ activity patterns during neuronal development. Oscillation amplitudes and the level of synchrony correlated with neurite length, and frequency inversely correlated with neurite outgrowth.

      They provide convincing experimental evidence for the improvements claimed for their novel GCaMP-X constructs. Some aspects should be clarified.

      A key finding explaining the construct differences is the nuclear localization. The authors should also provide numbers for the N/C ratio for Ca2+ imaging of sensory-evoked responses in vivo (Fig. 2; pg 6: nuclear accumulation was barely noticeable from GCaMP6m-Xc even beyond OTW). Also, for chronic experiments in brain slices they state for GCaMP6m-Xc in the text that (pg 12) "meanwhile the N/C ratio remained ultra-low", yet Fig. 6 shows an N/C ratio of 0.2. This does not appear to be "ultra-low".

      We appreciate the reviewer for bringing up the matter of the N/C ratio (indicative of nuclear accumulation). We have appended the values of the N/C ratio for in vivo experiments (revised Figure 2). Following the previous report, the criterion for the N/C ratio was set to 0.8 to regroup the neurons into two subpopulations. A significant fraction of GCaMP neurons were nucleus-filled (N/C ratio > 0.8); meanwhile, nearly no neuron expressing GCaMP-XC was found with an N/C ratio greater than 0.8 when examined 8-13 weeks post injection. Generally, owing to its higher imaging resolution, confocal microscopy provided a more precise evaluation of the N/C ratio than two-photon in vivo images. In Figure 6, an even clearer difference in nuclear distribution was observed between GCaMP and GCaMP-X, which was described as "ultralow" (GCaMP-X). Of note, the N/C ratio of YFP itself was ~1.3. The N/C ratio for GCaMP-XC was not close to zero, consistent with the measurements from other NES-tagged peptides (Yang 2022 PMID: 35589958). GCaMP-XC was not completely excluded from cell nuclei, thus producing some fluorescence there. In light of this comment, we have revised the relevant text including the phrase "ultralow" (Page 14, Line 393). In addition, Figure 5 was also revised accordingly.
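      To make the regrouping criterion concrete, a minimal sketch of the thresholding step might look like the following (Python/NumPy; the values and variable names are hypothetical illustrations, not the authors' analysis code):

          import numpy as np

          # Hypothetical mean fluorescence per neuron (arbitrary units).
          nuclear_f = np.array([120.0, 340.0, 95.0, 410.0])
          cytosolic_f = np.array([300.0, 310.0, 290.0, 305.0])

          # N/C ratio: nuclear over cytosolic fluorescence, per neuron.
          nc_ratio = nuclear_f / cytosolic_f        # [0.40, 1.10, 0.33, 1.34]

          # Criterion described above: N/C ratio > 0.8 regroups a neuron as
          # "nucleus-filled"; the rest count as nucleus-excluded.
          nucleus_filled = nc_ratio > 0.8           # [False, True, False, True]
          fraction_filled = nucleus_filled.mean()   # 0.5 in this toy example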

      Along these lines, since nucleus-filled neurons were observed in their experiments with GCaMP-Xc, the authors should comment on whether altered Ca2+ signals were also seen for the few neurons expressing GCaMP-Xc in the nucleus.

      During two-photon imaging experiments in vivo, GCaMP-XC neurons occasionally appeared to have some level of nuclear expression, especially in blurred images of low quality. Judged by the N/C ratio criterion (0.8), these neurons rarely fell into the nucleus-filled group (Figure 2B and Figure 5C; also see confocal imaging, Figure 1B). On the other hand, a small fraction of GCaMP-XC could "leak" into the nucleus. GCaMP-XN also eliminated toxic (apo)GCaMP interactions in neurons, sharing the same design principle with GCaMP-XC (Figure 1). Therefore, nuclear GCaMP-XC is expected to resemble GCaMP-XN. Experimentally, with GCaMP-XC or GCaMP-XN present in the nucleus, no significant change in neuronal Ca2+ or neurite morphology has been observed. Meanwhile, this comment has pointed out one important direction for future research, i.e., to more precisely confine GCaMP-X within the targeted organelles, e.g., by improving or replacing localization tags.

      Since they performed a systematic comparison of two constructs to demonstrate an (expected) superiority of one of them, the experiments, or at least the analysis, should ideally be performed in a blinded way. The authors should clarify how they avoided experimental bias.

      For in vitro experiments, multiple independent trials with analyses were performed by two (or more) researchers to ensure reproducibility and to minimize any bias, and the results and conclusions have been highly consistent (among different trials/researchers). Following the suggestion, we have ensured that in vivo experiments and data analyses were separately conducted by researchers from two different labs. For long-term expression/imaging, the differences between GCaMP-X and GCaMP were often discernible directly in the images even without further calculations or statistics (e.g., Figure 3B). Related information can be found in the Methods (Page 32, Line 799).

      In their chronic Ca2+ fluorescence imaging of autonomous Ca2+ oscillations in cultured cortical neurons, ultralong-lasting signals (Fig. 3B, DIV 17, GCaMP6m) could be observed. It would be helpful to further describe the nature of these transients, ideally by adding them to their video collection.

      As suggested by the reviewer, the video for Figure 3B (DIV 17, GCaMP6m) has been included in this revision (Figure 3—video supplement 2). In contrast to the oscillatory signals normally observed from healthy neurons, pronounced and sustained Ca2+ signals are associated with apoptosis and other pathological conditions in neurons (Khan 2020 PMID: 32989314; Nicotera 1998 PMID: 9601613; Harr 2010 PMID: 20826549). The Ca2+ wave with broadened width (FWHM) was indicative of neurons damaged by GCaMP (Figure 3F), rather than of (altered) sensing characteristics of GCaMP. We agree that this observation is a notable and interesting phenomenon, worth following up in future studies.
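      For readers unfamiliar with the metric: FWHM is simply how long a transient stays at or above half of its peak amplitude, so sustained pathology-like waves score higher than brief oscillatory events. A toy sketch, assuming a single roughly baseline-flat event (the helper below is hypothetical, not the authors' pipeline):

          import numpy as np

          def fwhm_frames(trace):
              """Full width at half maximum of one Ca2+ transient, in frames.

              Toy version: subtract the minimum as a crude baseline, then span
              the first and last samples at or above half-peak. Real traces
              would first need smoothing and per-event segmentation.
              """
              t = np.asarray(trace, dtype=float)
              t -= t.min()
              above = np.flatnonzero(t >= t.max() / 2.0)
              return int(above[-1] - above[0])

          sharp = [0, 5, 8, 5, 0, 0, 0, 0]   # brief, oscillation-like event
          broad = [0, 5, 8, 8, 8, 8, 5, 0]   # sustained, pathology-like event
          print(fwhm_frames(sharp), fwhm_frames(broad))   # 2 5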

      The discussion is very long. In my opinion it would benefit from shortening, avoiding redundancies and focusing only on the key findings in this paper. This includes the chapter on design and application guidelines for CaM-based GECIs. The main message about the advantages of their GCaMP-X modifications has already been made earlier in the discussion. A more detailed discussion of this appears more suitable for a review article.

      In response to this suggestion, we have made the Discussion as concise as possible, by simplifying or removing several topics including the design and application guidelines for CaM-based GECIs.

      It may be worthwhile to include another aspect in the discussion: does the improved GCaMP-Xc cause no change in neuronal function or morphology, or is it just less damaging than other GCaMPs? How could this issue be addressed experimentally?

      We have revised the discussion accordingly (Page 21, Line 588). We agree that additional experiments would help evaluate how close GCaMP-X data are to reality, considering the Ca2+-buffering effect intrinsic to Ca2+ probes and also other factors. In light of this suggestion and also those from Reviewer #1, we have incorporated more experimental controls, including Ai140 mice (GFP, Figure 7—figure supplement 2) and Fluo-4 AM (Ca2+ dye, Figure 3—figure supplement 4). The results have been encouraging in that GCaMP-X neurons were nearly indistinguishable in morphological and functional aspects from GFP or Fluo-4 AM controls. Incoming feedback from GCaMP-X users should continue to help clarify this matter, which we would like to follow up.

    1. Selection of Zettelkasten method types: Software-based Zettelkasten: It’s certainly super handy having all your notes in digital form. Instead of adjusting and renaming your folder structure on your computer, you could consider using a knowledge management software (psst, Hypernotes!) that uses the Zettelkasten method. Software-based Zettelkasten tools already have integrated features to make smart note-taking so much easier, such as auto-connecting related notes and syncing to multiple devices. Paper-based Zettel: You may enjoy the manual practice of writing down information and keeping index cards in a folder or designated filing cabinet in your home. Just because it isn’t digital doesn’t mean you’re not going to be productive (Niklas Luhmann is proof of this!). Archive / DokuWiki: If you’re not picky about the design or format and value text-based information, using a DokuWiki as a Zettelkasten might be right for you. DokuWikis store plain text filled with simple markup locally in a folder on your computer and use the renaming function to create folders as document categories, just like drawers in a filing cabinet.

      Zettelkasten method types

    1. “relating to,” more than just “knowing about” the user

      This makes me think about gossip. It's such a large part of our lives. When we gossip, we mostly try to "know about" people - or embellish things that we know with our own ideas. Very rarely in life do we attempt to relate to the other people we meet. Instead, we come to our own conclusions about them based on hearsay or a couple of interactions.

      I think if people in general made it a goal to relate to people instead of just "know about" them, the human race would be kinder and people would gossip a whole lot less.

    1. Cyberpunk realized that the old SF stricture of "alter only one thing and see what happens" was hopelessly outdated, a doctrine rendered irrelevant by the furious pace of late 20th century technological change. The future isn't "just one damn thing after another," it's every damn thing all at the same time. Cyberpunk not only realized this truth, but embraced it.

      I wonder if general SF historians would agree with this

    1. first glance, modal editing controls may appear unintuitive or just weird. Modal editors make use of mnemonics to try to make it intuitive for us to learn the commands. Sounds doubtful, but believe me when I say that it’s just about understanding the logic behind it

      True!

    1. m.

      I have said this multiple times before, but this truly proves just how racist his motivations are. None of this is about the language, it's about getting rid of the people he doesn't want. When Justin Trudeau made it a law that some companies have to hire minorities, François Legault wasn't pleased, so he decided to fight back with this awful law that is discriminatory, useless and unfair. If you're racist in 2022, just say it; if you don't even have the nerve to admit it in front of anyone, then why are you Premier?

    1. he would not compromise on any levels of scholarship in order to get the work done faster

      It seems Busa had a view of his work similar to the one a scientist might have: that there is a right way to do it and that is the only way. Not to say humanities scholars have no method; it just reminded me of the statement made earlier by the text about computing being unambiguous in its approach.

    1. For example, it explains why Web3 – notionally a project to remake the web without Big Tech choke­points – is so closely associated with cryptocurrency. It’s not just the ideological notion that if we paid for things, companies would abandon surveillance and sensationalism (a dubious proposition!); it’s the idea that the internet could be remade as something that can only be used by people who have cryptocur­rency tokens. The internet is not a luxury. It’s a necessity, as the pandemic and the lockdown proved. Without the internet, you are cut off from family life, healthcare, employment, leisure, access to government services, political discourse, civic life, and romance. Those are all things you need, not just things you want. If you need cryptocurrency to access these services on a replacement, transactional internet built on the blockchain, then you will do work and sell goods in exchange for cryptocurrency tokens. They will become the new hut-tax, and the fact that everyone who wants the things the internet provides has to trade work or goods for cryptos will make cryptos very moneylike.

      Web3 creates a need for cryptocurrencies

      If cryptocurrencies become required to do any transaction on the internet, then "everyone who wants the things the internet provides has to trade work or goods for cryptos".

    1. you can think about the invention of powerful representations and the invention of powerful media to host powerful 00:11:27 representations as being one of the big drivers of last 2,000 years of the intellectual progress of humanity because each representation allows us to think thoughts that we couldn't think before we kind of 00:11:39 continuously expand our thinkable territory so you can think of this as tied to you know the grand meta-narrative of the ascent of humanity moving away from myth and superstition 00:11:51 and ignorance and towards a deeper understanding of ourselves in the world around us I bring this up explicitly because I think it's good for people to acknowledge the motivation for their 00:12:02 work and this is this story of the intellectual progress of humanity is something that I find very motivating inspiring and is something that I feel like I want to contribute to but I think 00:12:16 that if this if you take this as your motivation you kind of have to be honest with yourself that that there definitely has been ascent we have improved in many 00:12:27 ways but there are also other ways in which our history has not been ascent so we invent technology we invent media technology 00:12:39 to kind of help us make this this climb but every technology is a double-edged sword every technology potentially enables us in certain ways while debilitating us in other ways and 00:12:51 that's especially true for representations because the way the representations work is they draw on certain capabilities that we have so if we go all in in a particular medium like we 00:13:03 did with print so the capabilities that are not well supported in that medium they get neglected and they atrophy and we atrophy I wish I knew who drew the picture 00:13:20 because it's it's a wonderful depiction of what I'm trying to express here and even a little misleading because the person in the last stage they're kind of hunched over a tiny rectangle we reach 00:13:31 that stage accomplish that stage with the printing press and cheap paper book based knowledge the invention of paper-based bureaucracy paper-based 00:13:44 working we invented this lifestyle this way of working where to do knowledge work meant to sit at a desk and stare at your little tiny rectangle make a little motions of your hand you know started 00:13:56 out as sitting at a desk staring at papers or books and making little motions with a pen and now it's sitting at a desk staring at a computer screen making little motions with your hands on a keyboard but it's basically the same 00:14:08 thing we've this is what it means to do knowledge work nowadays this is what it means to be a thinker it means to be sitting and working with symbols on a little tiny rectangle to the extent that 00:14:20 again it almost seems inseparable you can't separate the representation from what it actually is and and this is basically just an accident of history this is just the way that our media 00:14:32 technology happened to evolve and then we kind of designed a way of knowledge work for that media that we happen to have and I think so I'm going to make the claim that this style of knowledge work 00:14:47 this lifestyle is inhumane

      !- for : symbolic representation - language - the representation is closely tied to the media - a knowledge worker sits at a desk and plays with symbols in a small area all day! - This is actually inhumane when you think about it

    1. just realizing at a first level that annotation just seems like an important part of education it's kind of unrealized
      • annotation important part of education
    1. Modern technologies are amplifying these biases in harmful ways, however. Search engines direct Andy to sites that inflame his suspicions, and social media connects him with like-minded people, feeding his fears.

      I can agree that algorithms confirm our biases. I can confidently say this because I’ve seen my friends’ feeds before and the content that is showing up on theirs is very different from mine. On top of that, I can only imagine the amount of political news content that shows up on the feed of those who are very active in sharing their political opinions on their social media. I created an Instagram page for my bunnies and followed other bunny pages as well as pages that promote veganism. I’m not a vegan myself but these are the types of pages I don’t normally follow on my personal page. I noticed that a lot of content that is showing up on my bunnies explore page are gruesome videos/photos showing the practices of the farm industry. Although I am aware of these unethical practices, I found myself getting angry and started sharing those videos on bunnies’ Instagram stories. It’s interesting because the vegan pages that I follow don’t really post gruesome photos/videos, they just post educational content about veganism and recipes.

    1. McConnell said it’s up to the Republican candidates in various Senate battleground races to explain how they view the hot-button issue. “I think every Republican senator running this year in these contested races has an answer as to how they feel about the issue and it may be different in different states. So I leave it up to our candidates who are quite capable of handling this issue to determine for them what their response is,” he said.

      Context: Lindsey Graham had just proposed a bill for a nationwide abortion ban after 15 weeks of pregnancy.

      McConnell's position seems to be one that choice about abortion is an option, but one which is reserved for white men of power over others. This is painful because that choice is being left to people without any of the information and nuance about specific circumstances versus the pregnant women themselves, potentially in consultation with their doctors, who have broad specific training and experience in the topics and issues at hand. Why are these leaders attempting to make decisions based on possibilities rather than realities, particularly when they've not properly studied, nor are they generally aware of, any of the realities?

      If this is McConnell's true position, then why not punt the decision and choices down to the people directly impacted? And isn't this a long running tenet of the Republican Party to allow greater individual freedoms? Isn't their broad philosophy: individual > state government > national government? (At least with respect to internal, domestic matters; in international matters the opposite relationships seem to dominate.)

      tl;dr:<br /> Mitch McConnell believes in choice, just not in your choice.

      Here's the actual audio from a similar NPR story:<br /> https://ondemand.npr.org/anon.npr-mp3/npr/me/2022/09/20220914_me_gop_sen_lindsey_graham_introduces_15-week_abortion_ban_in_the_senate.mp3#t=206


      McConnell is also practicing the Republican party game of "do as I say and not as I do" on Graham directly. He's practicing this sort of hypocrisy because as leadership, he's desperately worried that this move will decimate the Republican Party in the midterm elections.

      There's also another reading of McConnell's statement. Viewed as a statement from leadership, there's a form of omerta or silent threat being communicated here to the general Republican Party membership: you better fall in line on the party line here because otherwise we run the risk of losing power. He's saying he's leaving it up to them individually, but in reality, as the owner of the purse strings, he's not.


      Thesis:<br /> The broadest distinction between American political parties right now seems to be that the Republican Party wants to practice fascistic forms of "power over" while the Democratic Party wants to practice more democratic forms of "power with".

    1. Today, the telephone industry encourages such calls;

      I think this is probably one of the most important things to understand. While it is in our nature as humans to interact with one another face-to-face, it is also important to communicate with each other through other mediums in case time is short and people are busy with other things.

    1. And the best of men has perished, Sarpedon, son of Zeus; who will not stand by his children.

      A minor detail, but it's interesting how Glaukos mentions Zeus is a man who will not stand by his children mere pages after Zeus "wept tears of blood" for his fallen son. Although he wanted to mobilize and save his son, his hands were tied as the head of all the gods and out of mercy for his son, and yet none of that backstory is known to the soldiers down below who continue to see Zeus as a ruthless, uncaring god and father. If the soldiers had more insight into the thoughts and motivations of not just Zeus, but all the gods in regard to the war, would they still be fighting it? As in, if all of the petty grudges and the thought processes of each god as they meddle in the war were publicly known to all of the soldiers, how early in the war would the breaking point, if any, have been?

    1. Maybe texting is a sticking point. If your partner asks you not to text a certain person, that might be a red flag. If it's a whole gender, there could be serious control issues at work.

      Phew, they're not. Don't care who or what, just that we're honest and open about who.

    1. There has been much discussion about “atomic notes”, which represent the key ideas from a person’s research on some topic or source (sources one and two). These are not the kind of thing I am interested in creating/collecting, or at least not what I have been doing. A far more typical thing for me is something I did at work today. I was trying to figure out how to convert the output of a program into another format. I did some searching, installed a tool, found a script, played with the script in the tool, figured out how to use it, then wrote down a summary of my steps and added links to what I found in my search. Since I am not doing research for a book or for writing academic papers, the idea of an atomic note does not fit into my information world. However, capturing the steps of a discovery, or how I worked out a problem, is very real and concrete to me. I used to know a fellow engineer who wrote “technical notes” to capture work he was doing (like a journal entry). Maybe that is how I should consider this type of knowledge creation.

      Andy Sylvester says his engineering type of notes don't fit with the concept of atomic note. A 'how to solve x' type of note would fit my working def of 'atomic' as being a self-contained whole, focused on a single thing (here how to solve x). If the summary can be its title I'd say it's pretty atomic. Interestingly in [[Technik des wissenschaftlichen Arbeitens by Johannes Erich Heyde]] 1970, Heyde on p18 explicitly mentions the ZK being useful for engineers, not just to process new scientific insights from e.g. articles, but to index specific experiences, experiments and results. And on p19 suggests using 1 ZK system for all of your material of various types. Luhmann's might have been geared to writing articles, but it can be something else. Solving problems is also output. I have these types of notes in my 'ZK' if not in the conceptual section of it.

      Cf. [[Ambachtelijke engineering 20190715143342]] (artisanal engineering); Lovelock, Novacene 2019, plays a role here too. Keeping a know-how notes collection in an environment where your intuition can also play a creative role is useful. I've done this for coding things, as I saw experienced coders do it, just as Andy describes, and it helped me create most of my personal IndieWeb scripts, because they were integrated with the rest of my stuff about my work and notions. Cf. [[Vastklik notes als ratchet zonder terugval 20220302102702]]

    1. This hasn't yet been scheduled, but we're tracking it on our backlog as something we want to do this year. A few months ago, we arranged for additional capacity to address items like this that have waited for so long. Now that additional capacity is available, it's just a matter of scheduling based on relative priority. We're anxious to get this one done, and I hope to soon have a clearer date to post here.
    1. So really this three-handed clock is a relic of that brief moment in time between the old and new, when there was an acceptance that standard time was kind of required in some ways but local time was still preferred and you were just in this weird interregnum where both of those things were equally dominant.

      Reminds me of how some places have daylight saving time and some don't. Time is always such a trippy concept for me to explore because it's so subjective, especially for me as an ND person who can struggle with time blindness.

    1. anthropology

      Considering that my MacLeish project focused on themes of politics and philosophy in culture that directly influenced British and American authors, I think it's reasonable that Eliot, too, was a creation of the society that shaped him. He himself writes in this note that "The Golden Bough influenced [his] generation profoundly," and I believe that The Waste Land as a title is a description of the bleak society he sees around him. While he is influenced by the physical realities of Europe post WW1, I think that Eliot is digging deeper into the metaphysical perception of human life. At the heart of our perception of society, he argues, is culture, which is composed of both religion and science. Clearly, culture is conflicting between societies and also ever-changing, which is why Eliot brings in so many allusions to figments of cultures that collectively paint the picture of society as The Waste Land. I believe what the culmination of these clashing allusions may suggest is that Eliot understands mankind's adaptation of pseudoscience in religion and myth to explain the world, but believes they all paint an overly harmonious picture. In truth, the world is The Waste Land, nothing but fallen ideas from past history.

      As an example - the idea of an immortality elixir or Holy Grail is found in all areas of society - these are sometimes dismissed as a figment of alchemy or religion, but at their heart they show the human yearning for some sort of enlightenment or final goal to life. Le Morte D’Arthur is just one example of this journey, and although Galahad does find the Grail and get his happy ending, the title The Waste Land suggests that this success is not as happy as it seems. In From Ritual to Romance, the idea of TWL is directly referenced as "land becomes Waste" or "left the land Waste." Failure and the creation of a Waste Land are built into the myth of the Grail, and Eliot may believe that the failure that creates all disaster is inevitable.

      Plus, now that science, centuries later, has largely dismissed the possibility of any of these magical solutions, Eliot is referencing this impossibility as what causes pessimism and bleakness in society. What used to be a beacon of human hope and enlightenment is now dismissed as fiction, and Eliot reads into the loss of old ideals. This is similar to the religions and myths referenced in The Golden Bough.

    1. Reviewer #1 (Public Review):

      This paper represents the first spatio-temporal functional parcellation derived from infant multimodal imaging data. The parcellations are generated from the longitudinally collected Baby Connectome Project, and clearly benefit from incorporating repeat samples from individuals. Analyses demonstrate that parcellations estimated for different age groups (3, 6, 9, 12, 18 and 24 months) are fairly consistent and that repeat generation of the parcellations, using shuffled 'generating' and 'repeating' groups, is robust.

      In general, I think the paper does an extremely good job of robustly testing its claims and therefore I have relatively few suggestions for improvement. However, I do have some concerns that the differences in network clustering reported in Fig 6 may be due to noise and I think the comparisons against the HCP parcellation could be more robust.

      Specifically, with regard to the network clustering in Fig 6: the authors use a clustering algorithm (which is not explained) to cluster the parcels into different functional networks. They achieve this by estimating the mean time series for each parcel in each individual, which they then correlate between the n regions to generate an n×n connectivity matrix. This they then binarise, before averaging across individuals within an age group. It strikes me that binarising before averaging will artificially reduce connections for which only a subset of individuals are set to zero. Therefore averaging should really occur before binarising. Then I think the stability of these clusters should be explored by creating random repeat and generation groups (as done for the original parcels) or just by bootstrapping the process. I would be interested to see whether, after all this, the observation that the posterior frontoparietal network expands to include the parahippocampal gyrus from 3-6 months and then disappears at 9 months remains.
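      To make the order-of-operations point concrete, here is a minimal synthetic sketch of the two pipelines (NumPy; illustrative only, not the authors' code):

          import numpy as np

          rng = np.random.default_rng(0)
          n_subjects, n_parcels = 20, 6

          # Synthetic per-subject parcel-to-parcel correlation matrices.
          conn = rng.uniform(-0.2, 0.8, size=(n_subjects, n_parcels, n_parcels))
          threshold = 0.3

          # Order used in the paper: binarise each subject first, then average.
          # A connection that falls below threshold in even a few subjects is
          # pulled toward zero, however strong it is in the remaining subjects.
          group_binarise_first = (conn > threshold).mean(axis=0)

          # Suggested order: average across subjects first, then binarise, so
          # the group matrix reflects the mean connection strength.
          group_average_first = (conn.mean(axis=0) > threshold).astype(float)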

      Then with regard to the comparison against the HCP parcellation, this is only qualitative. The authors should see whether the comparison is quantitatively better relative to the null clusterings that they produce.

      While it's clear from the results that the template achieves a good degree of spatio-temporal coherence, owing to the considerable benefit of the longitudinal imaging, not all individuals appear (from Fig 8) to have been acquired exactly at the desired timepoints, so maybe the authors might comment on why they decided not to apply any kernel weighting or smoothing to their averaging? Pg. 8: 'and parcel numbers show slight changes that follow a multi-peak fluctuation, with inflection ages of 9 and 18 months' - please explain. The parcels per age group vary with age, with peaks at 9 and 18 months - could this be due to differences in the subject numbers, or the subjects that were scanned at that point?

      I also have some residual concerns over the number of parcels reported, specifically as to whether all of this represents fine-grained functional organisation, or whether some of it represents noise. The number of parcels reported is very high. While Glasser et al 2016 reports 360 as a lower bound, it seems unlikely that the number of parcels estimated by that method would greatly exceed 400. This would align with the previous work of Van Essen et al (which the authors cite as 53), which suggests a high bound of 400 regions. While accepting Eickhoff's argument that a more modular view of parcellation might be appropriate, these are infants with underdeveloped brain function. Further, comparisons across different subjects based on small parcels increase the chances of downstream analyses incorporating image registration noise, since, as Glasser et al 2016 noted, there are many examples of topographic variation which diffeomorphic registration cannot match. Therefore averaging across individuals would likely lose this granularity. I'm not sure how to test this beyond showing that the networks work well for downstream analyses, but I think these issues should be discussed.

      Finally, I feel the methods lack clarity in some areas and that many key references are missing. In general I don't think that key methods should be described only through references to other papers. And there are many references, particularly to FSL papers, that are missing.

    1. Author Response

      Reviewer #3 (Public Review):

      Lillvis et al present a new method for quick targeted analysis of neural circuits through a combination of tissue expansion and (lattice) light sheet microscopy. Three-color labeling is available, which allows one to label neurons of a molecularly specific type, presynaptic sites, and/or post-synaptic sites.

      Strengths:

      • The experimental technique can provide much higher throughput than EM

      • All source code has been made available

      • Manual correction of automatic segmentations has been implemented, allowing for an efficient semi-automatic workflow

      • Very different kinds of analyses have been demonstrated

      • Inclusion of electrical connections is really exciting, what a great complement to the existing EM volumes!

      Weaknesses:

      • Limitations of the method are not really discussed. While the approach is simpler and cheaper than EM, it's still important to give the readers a clear picture of the use cases where it's not expected to work before they embark on the journey of acquiring tens of terabytes of data. Here are just a few examples of the questions I would have if I wanted to implement the method myself - I am a computational person and can easily imagine my "wet lab" colleagues would have even more to ask about the experimental side:

      Please see our response to the Essential Revisions (for the authors) section above in addition to the responses to each point below.

      • It is not very clear to me if the resolution of the method is sufficient to disentangle individual neurons of the same type. It has been demonstrated for a few examples in the paper, but is it generally the case? Are there examples of brain regions/neuron types where it wouldn't be possible? If another column was added to the table in Figure 1, e.g. "individual neuron connectivity", EM would be "+", LM "-", what would ExLLSM be?

      Individual neuron connectivity is possible using this current version of ExLLSM either by labeling individual neurons genetically or by manually segmenting neurons in sparsely labeled samples. Of course, the exact answer to this question depends on labeling density and sample quality, and we have added a statement to address this.

      Lines 585-591: The difficulty of such manual segmentation can vary substantially depending on labeling density and signal quality. For instance, manually segmenting individual L2 outputs (Fig. 3) took ~10 minutes/neuron whereas segmenting a pair of SAG neurons from off-target neurons (Fig. 4) took 1-5 hours depending on the sample. Of course, more densely labeled samples will take more time. Finally, while it is possible to segment individual neurons from entangled bundles as shown here and elsewhere (Gao et al., 2019), the expansion factor will need to be increased by an order of magnitude or more and neuron labels must be continuous to approach EM levels of reconstruction density.

      • Similarly, the procedures for filling gaps in the signal could result in falsely merged neurons. Does it ever happen in practice?

      Because the gap filling process is not utilized until after semi-automatic segmentation, this was not a concern (the gaps were filled on manually inspected neuron masks that should only include signals from the neuron(s) of interest). This would certainly be a concern if we were using this gap filling step – or the fully automated neuron segmentation approach – to segment individual neurons from samples in which off-target neurons are also labeled, but that was not the case here.

      • How long does semi-manual analysis take in person-hours/days for a new biological question similar in scope to the ones demonstrated in the paper?

      The statement discussed above (lines 585-591) and an additional statement (lines 581-583) aim to address this.

      Lines 580-582: As such, analyzing the DA1-IPN data, for example, required relatively little human time. The semi-automatic neuron segmentation steps required a maximum of one hour per sample and all other steps are automated.

      • How robust are the networks for synaptic "blob" detection? The authors have shown they work for different reporters, but when are they expected to break? Would you recommend retraining for every new dataset? How would you recommend validating the results if no EM data is available?

      We expect that the network for blob detection is quite robust as it essentially acts as a high-signal detector for punctate signals, as opposed to classifying a high-level shape or structure. We have modified the text to suggest that the synapse and neuron segmentation models we include be attempted first, before retraining for every new dataset.

      Lines 368-372: Furthermore, the convolutional neural network models for synapse and neuron segmentation are classifiers of high signal punctate and continuous structures, respectively. As such, the models may already work well for segmenting similar structures from other species or microscopes. If not, these models can be retrained with a suitable ground truth data set and the entire computational pipeline can be applied to these new systems.

    1. In combination with SCA, CERIC offers freedom from the transmission model of learning, where the professor lectures and the students regurgitate. SCA can help build learning communities that increase students’ agency and power in constructing knowledge, realizing something closer to a constructivist learning ideal. Thus, SCA generates a unique opportunity to make classrooms more equitable by subverting the historically marginalizing higher education practices centered on the professor.

      Here's some justification for the prior statement on equity, but it comes after instead of before. (see: https://hypothes.is/a/SHEFJjM6Ee2Gru-y0d_1lg)

      While there is some foundation to the claim given, it would need more support. The sage on the stage may be becoming outmoded in favor of other potential models, but removing it altogether does remove some pieces which may help to support neurodiverse learners who work better via oral transmission rather than literate modes (e.g., dyslexia).

      Who is to say that it's "just" sage on the stage lecturing and regurgitation? Why couldn't these same analytical practices be aimed at lectures, interviews, or other oral modes of presentation which will occur during thesis research? (Think anthropology and sociology research which may have much more significant oral aspects.)

      Certainly some of these methods can create new levels of agency on the part of the learner/researcher. Has anyone designed experiments to measure this sort of agency growth?

    1. “For some reason, that simple act and belief changed my entire perception of schooling, and life really,” he said. “She was the first person who saw something good in me.”

      I know that it's cliche to mention this, but I've always been a serious believer in this. It's very easy to belittle your own abilities, let alone view yourself in a better light. As teachers, it's really easy to get caught up in just making sure everyone's going along through the courses, and as long as you don't see regression, it seems good. Speaking life into students and peers can really make a vast difference in the way they perceive themselves.

    1. mambocab · 2 days ago: What a refreshing question! So many people (understandably, but annoyingly) think that a ZK is only for those kinds of notes. I manage my slip-box as markdown files in Obsidian. I organize my notes into folders named durable and commonplace. My durable folder contains my ZK-like repository. commonplace is whatever else it'd be helpful to write. If helpful/interesting/atomic observations come out of writing in commonplace, then I extract them into durable. It's not a super-firm division; it's just a rough guide.

      https://www.reddit.com/r/Zettelkasten/comments/xaky94/so_what_do_you_do_for_topics_that_dont_fit_in_a/

      Other than my own practice, this may be the first place I've seen someone mentioning that they maintain dual practices of both commonplacing and zettelkasten simultaneously.


      I do want to look more closely at Niklas Luhmann's ZKI and ZKII practices. I suspect that ZKI was a hybrid practice of the two and the second was more refined.

    1. it moves beyond the voices of old white men talking about even older white men.

      It is about time that we get to hear the perspectives and thoughts of people other than old white men talking about even older white men; I feel there may be bias there. It's exciting how many doors have opened to relaying information to one another in an honest and diverse fashion.

    1. my children gone, my relations and friends gone, our house and home and all our comforts—within door and without—all was gone

      While these feelings are very valid, it's interesting reading the perspective of the English, especially the people that just came along for the ride with their husbands and families - knowing, as we do now, that the presence of the English did in fact do everything she is describing to the Native American people.

    1. it's so true. Being eco-conscious is not just about caring for the environment, it's making sure that the supply can continue for a long time/forever = more money

    1. #_I_have_ "other ideas" that are related to our concentration here; and I really think someone in a position like yours would benefit greatly from working on the branch of crypto related to "free communioation." I want to build an open social network ["protocol"] that combines "what email, facebook, reddit and ... wikipedia to enable "commenting on anything" the light of ... # "hey ma, where did all the online newspaper comments disappear to?" I know what has to go into it, I'm looking at things like hypothes.is, tableland.xyz and ... https://lnkd.in/gNbBAewt ... and I think it would be simple to put something together that will intrigue people; i know the software and infrastructure can offer us a bulletproof check on censorship that we need now more than ever before in history; and I'm having trouble figuring out why more people aren't interested in helping me ensure that we have a safe happy future free from "un america n th i ngz" like "no newspaper" and no recourse against insurance and credit fraud/problems; which is what I'm staring at in full blown disbelief.

      #I_have "other ideas" that are related to our concentration here; and I really think

      someone in a position like yours would benefit greatly from working on the branch of crypto related to "free communioation." I want to build an open social network ["protocol"] that combines "what email, facebook, reddit and ... wikipedia to enable " commenting on anything" the light of ...

      "hey ma, where did all the online newspaper comments disappear to?"

      I know what has to go into it, I'm looking at things like hypothes.is, tableland.xyz and ... https://lnkd.in/gNbBAewt ... and I think it would be simple to put something together that will intrigue people; i know the software and infrastructure can offer us a bulletproof check on censorship that we need now more than ever before in history; and I'm having trouble figuring out why more people aren't interested in helping me ensure that we have a safe happy future free from "un america n th i ngz" like "no newspaper" and no recourse against insurance and credit fraud/problems; which is what I'm staring at in full blown disbelief. -- The importance of seeing that it's an open "DeFi-inspir[ing/edu]" protocol that will work with existing service and interfaces like LinkedIn and Facebook and Mastodon and diasp.org is ... without question a necessary part of understanding the vision. I think we will see great leaps and bounds in interface design that make the "second small step" Dissenter/Unity and #hypothes eze ... have almost brought to the forefront of the "right venue." https://web.hypothes.is/sponsors/ Seeing #hypothesisontableland and having it work is the first "glaringly bright flash" that will ensure that we never again watch commenting and sites like discus and reddit and facebook turn from the light of social "what's on fire?" to the darkness of shadow ... "throttling" of the presentation of the world changing that somehow has missed our tongues and hopefully not our eyes.<br /> Hopefully once we start talking and getting more involved it will be clear how easy it was and is to make the world a better place just by ... "dropping in your two cents" or BTC as the case may be. --

      We've got to get serious about caring "of things like ourselves" for the truth and health and happiness that some of us probably take for granted as I do;

      still you can see me smiling when i know full well it's a little early for that--maybe you can help shift the timeline.

    1. declared psychology to be forever an impossible science.

      did this statement significantly alter the field - psychologists' attitudes, decisions, beliefs, etc.? Were people influenced by this declaration, thereby altering the course of psychology's history forever? I just wonder how large an impact this declaration of Kant's may have had - it's interesting that we'll never fully know.

  3. drive.google.com
    1. he master beam(asalas alemmas, a masculine term) which extends the protection of themale part of the house to the female part, is explicitly identified with themaster of the house, whereas the main pillar, a forked tree trunk (thigejdith,a feminine term), On which it rests, is identified with the wife (accordingto Maunier the Beni Khellili call it Masauda, a feminine fi rst name meaning'the happy one'), and their interlocking symbolizes sexual unionrepresented in the wall paintings, in the form of the union of the beamand the pillar, by two superimposed forked shapes (Devulder 1 95 1 ).

      I'm sorry but this is the most heterosexual thing I have read in a very long time. What does this even mean. Well; I understand the conceit here, but the ascribing of masculinity and femininity to support beams is just such a strange derivation of the social construct of gender. It's incomprehensible for me to think of the world under that sort of absolute distinction between man and woman, where the two categories encompass so much more than mere gender.

    2. Conversely, a number of ritual acts aim to ensure the 'filling' of the house, such as those that consist of casting the remains of a marriage lamp (whose shape represents sexual union and which plays a part in most fertility rites) into the foundations, after first sacrificing an animal; or of making the bride sit on a leather bag full of grain, on first entering the house.

      This text never ceases to fascinate me with how it drops some of the most insane strings of words I've ever read like it's completely normal. Lamps represent sex and are used in fertility rituals. The 'remains' of these (after melting...? not sure) are infused into the foundation of the house after sacrificing an animal. Sure, okay. The bride squats on a bag of grain when she comes into the house. Why not? It's just...so utterly far removed from any sense of modern western living, it's very interesting.

    1. To students who are just getting started in psychological research, the challenge of measuring such variables might seem insurmountable. Is it really possible to measure things as intangible as self-esteem, mood, or an intention to do something? The answer is a resounding yes, and in this chapter we look closely at the nature of the variables that psychologists study and how they can be measured. We also look at some practical issues in psychological measurement.

      I’m so curious to develop more understanding of how to measure variables like self-esteem or mood, and so excited to apply that theory in a practical execution of psychological research. I agree that it's very useful to use different kinds of scales to measure self-esteem. It is very significant to create surveys or questionnaires to measure intangible concepts like mood or self-confidence.

    1. one of the 00:10:51 things is that our brains were set up for dealing with about a hundred people at a time living by our wits hunting and gathering and dying in the same world we 00:11:03 were born into for hundreds of thousands of years there's no concept of progress in our genes we just don't have it but like all animals we have an enormous set 00:11:17 of genetic apparatus to make us good copers anything happens to us we can find a way of being resilient about it and adapting to it we're copers and 00:11:29 adapters and so when we come up against difficulties our tendency is to cope with these difficulties it's like working for a company go into a company 00:11:42 and the company seems sort of screwed up maybe you can quit you can cope but your chances of actually changing the company are very low because nobody will listen 00:11:56 to reason right that is not what the company is there for they are there for their A task this is something that engelbart the inventor of the mouse pointed out years ago that companies are 00:12:10 devoted to their A task which is what they think they were about most companies do not have a very good B process which is supposed to look at the 00:12:21 A tasks and make them more efficient but almost no companies have a C process which questions the tasks are our goals still reasonable our processes still reasonable that's the last thing that gets 00:12:35 questioned

      !- applies to : climate change - many are adopting and trying to take a coping strategy instead of one of fundamental change - if coping is the only strategy, it becomes a failing one when whole system change is required

    2. here's an old model from the 19th century of memory which actually in the 21st century has come 00:13:03 back as a pretty good one as a metaphor anyway so the idea is that rain comes down on the ground and there's a little regularities randomly there and at some point those regularities will be a 00:13:17 little more responsive to the rain and a little channel will form the channel acts as an amplifier and so wherever that channel got started it starts funneling lots more water through it other water is draining into 00:13:31 it and all of a sudden it starts cutting deeper and you get these gullies and you get down into these gullies you have to remember to look up because everything 00:13:44 down there in this gully is kind of pink you can think that the world is pink and in fact if you get into a real gully one of my favorites is Grand Canyon by the 00:13:57 way that's only a hundred million years of erosion to get the Grand Canyon it's relatively recent get into one of these things and the enormity of what you see 00:14:08 outwards Dwarfs what you can see if you look up if you've ever been on one of these things you're just in a different world it's a pink world you don't think 00:14:23 about climbing out of it you think about moving along in it

      !- In other words : stuck in a groove - stuck in a conceptual groove -

    3. one of the things that's worked the best the last three or 00:10:05 four hundred years is you get simplicity by finding a slightly more sophisticated building block to build your theories out of its when you go for a simple building block that anybody can 00:10:18 understand through common sense that is when you start screwing yourself right and left because it just might not be able to ramify through the degrees of freedom and scaling you have to go through and it's this 00:10:31 inability to fix the building blocks that is one of the largest problems that computing has today in large organizations people just won't do it

      !- example : simplicity - astronomy example is perfect - paradigm shift to go to slightly more complex fundamental building block that CAN scale

    4. if you take it out 30 years the puck is going to be there thirty years and Moore's law is going to go like that she been well covered and 00:43:15 predicted in 1965 out to 1995 the answers yeah goddamnit no question 1995 there is no way we are not going to have a tablet computer no way it's just going to happen we don't even have to worry about 00:43:28 right now what we're going to do it because what we have to do is to figure out what it should be once you start thinking about it then the next interesting part of it is bring back a 00:43:42 more concrete version so out there you can do pie-in-the-sky what about 10 or 15 years out what can we do then and the answer is yeah we can do one then what would that be like

      !- for : backcasting - start with longest time then go halfway there to see what you need to do practically to make it a reality

    5. how many people have seen curves that look like these progress against time right everywhere reading 00:48:14 scores test scores people love these yay oh no yay oh no it's bad because our 00:48:32 nervous system is only set up for relative change and in fact there's cause for cheering if that's the threshold but in fact for reading 00:48:43 threshold is this this is all oh no doesn't matter whether it goes up or not because there are many many things that where you have to get to the real 00:48:58 version of the thing before you're doing it at all in the 21st century it doesn't have help to read just a little bit you have to be fluent at it so this is a 00:49:09 huge problem and once you draw the threshold in there immediately converts this thing that looked wonderful into a huge qualitative gap and the gap is 00:49:20 widening and we have two concepts that are enemies of what we need to do perfect and better right so better is a 00:49:36 way of getting fake success we had improvement see it all the time it's the ultimate quarterly report we had improvements here and perfect is 00:49:51 tough to get in this world so both of those are really bad so what you want is what's actually needed and the exquisite skill here which I'm going to use these 00:50:06 two geniuses Thakur and Engels to labor it I'm going to call that the sweet spot the way you make progress here is you pick the thing that is just over that threshold that is qualitatively better 00:50:21 than all the rest of the crap you can do you can spend billions turning around and once you do that you widen up you give yourself a little blue plane to 00:50:34 operate in and for a while everything you do in there is something that is actually going to be meaningful

      !- similar to : climate change solutions - Good metaphor for climate change progress

    6. what is 00:32:39 your ten year plan and the reaction I get is that right think about it the idea of a ten year plan that people are 00:32:50 serious about is just it's fake companies just don't have it they don't set themselves up to be able to deal with this thing which is really just to 00:33:04 find hope that they're going to be in business in ten years they have no idea

      !- applies to : net zero plan

    1. Why are we awesome you ask?We are truly at the forefront of the crypto ecosystem as maintainers of the infrastructure layer of blockchain networks! We practice the crypto team mentality by assembling a global and diverse team (with even pseudo-anonymous team members). We collectively represent more than 12 different countries and are united in a single mission: building out the future of decentralization. Crypto is here to stay, having introduced novel ideas such as DeFi, NFTs, and DAOs. At the core of all this are teams like us working relentlessly to build the necessary tools and applications that help run and secure blockchains.

      Opensea.io needs comments :)

      you will probably sell more nifties that way ...

      this is the way

      This piece's name was changed after the initial publishing and sale; I imagine the ones on sale will not be updated because they were already written to the blockchain. I am "insinuating" that what I am producing here is at least as "bulletproof" as the IPFS and Filecoin bound ...

      FROM THE MACHINE . ORG

      which I have toiled endlessly over losing amidst the nox of no ... "S3 buckets at affordable rates" and IPFS providers that are going under or out of business either because of lack of interest or some other issue I just can't fathom. It's censorship, though, that's for sure--and it's got to be something we get into our hearts is synonymous with nontruth and that means "not good for us."

      Changing the way we see "the import of the written word" is paramount ... and Tri$tar. #XTINAGAZNOX

      --

      Recalling the moment; as if the serpent itself was speaking through me; with a kind of "love" that you can't really fathom being likened to something I detest so much ... I cherish the thought of the possibility that in some other life or some other world I could have come out of the night and the darkness enjoying the time period known as ...

      nights.

      It's not really unfathomable, I've got her singing about something so close to the truth; "a century of lonely nights" with that look on her face; you might actually think she could be releasing me ... right this very moment.

      Or you could; and I do believe that release is something we prefer to celebrate ... the morning known as "the end of nights."

    1. Adam Marshall Dobrin Author Creative Director and ... Backseat Ferryman at XCALIBER DAO. Writer. Futurologist. Aspiring dad. 30s (edited)

      https://hyp.is/go?url=https%3A%2F%2Fwww.linkedin.com%2Ffeed%2Fupdate%2Furn%3Ali%3Aactivity%3A6974689858535530496%2F&group=__world__

      The importance of seeing that it's an open "DeFi-inspir[ing/edu]" protocol that will work with existing service and interfaces like LinkedIn and Facebook and Mastodon and diasp.org is ... without question a necessary part of understanding the vision. I think we will see great leaps and bounds in interface design that make the "second small step" Dissenter/Unity and #hypothes eze ... have almost brought to the forefront of the "right venue." https://web.hypothes.is/sponsors/

      Seeing #hypothesisontableland and having it work is the first "glaringly bright flash" that will ensure that we never again watch commenting and sites like discus and reddit and facebook turn from the light of social "what's on fire?" to the darkness of shadow ... "throttling" of the presentation of the world changing that somehow has missed our tongues and hopefully not our eyes.

      Hopefully once we start talking and getting more involved it will be clear how easy it was and is to make the world a better place just by ... "dropping in your two cents" or BTC as the case may be.

    2. I have "other ideas" that are related to our concentration here; and I really think someone in a position like yours would benefit greatly from working on the branch of crypto related to "free communication." I want to build an open social network ["protocol"] that combines what email, facebook, reddit and ... wikipedia do, to enable "commenting on anything" in the light of ... "hey ma, where did all the online newspaper comments disappear to?" I know what has to go into it, I'm looking at things like hypothes.is, tableland.xyz and ... https://lnkd.in/gNbBAewt ... and I think it would be simple to put something together that will intrigue people; i know the software and infrastructure can offer us a bulletproof check on censorship that we need now more than ever before in history; and I'm having trouble figuring out why more people aren't interested in helping me ensure that we have a safe happy future free from "un america n th i ngz" like "no newspaper" and no recourse against insurance and credit fraud/problems; which is what I'm staring at in full blown disbelief.



    1. The Andrew W. Mellon Foundation $752,000 1 January, 2014 The Andrew W. Mellon Foundation awarded Hypothesis a multi-year grant to support the development of annotation services for digital scholarly materials, including support for the I Annotate annual conference, I Annotate 2014: Annotato Ergo Sum.


    1. bout why they behave in a certain way. Men and women might do different amounts of housework because they perceive mess (or lack thereof) differently, consider household work a part of their (gendered) identity, have an awareness of others’ expectations, or are concerned about social consequences.

      Surely everybody has experienced the frantic mother cleaning before people come over, just in case their house looks lived in. In every single one of these scenarios, what does the male figure do? Cleaning was largely a woman's job in my childhood home, as my mum didn't work and my father worked shift work in the mines. Even still, if my father did any chores, my mother would nearly always comment on the "quality" of the clean. In my own personal experience, I have lived with partners who have, for example, washed the dishes in the evening after cooking a meal, but not wiped down the benches. To them, that never seemed like something that needed to be done. But to me, that's very important. There's food spills! There's mess. Ya gotta wipe the bench too. It's part of the job! But apparently it's not deemed important enough.

  4. bafybeid2p3t47fwv4j436v6nrh7xp4gbouaehangadk7qrwqo4h4rbxihm.ipfs.localhost:8080
    1. Download parameters:

      lotus-miner fetch-params 32GiB
      lotus-miner fetch-params 64GiB

      I pause.

      They want me to download a hundred gigs just to start mining? Are they paying? Honestly, why aren't the new pinning services charging anything? Do they have large corporate clients?

      Is there a working business system? It doesn't look like Estuary.tech is paying their miners enough--or anything--either. There's something strange going on. Google started charging for storage, and ... it's "pretty cheap" ... IPFS beat it out of the water; "before.." but the services haven't been reliable until now.

      Stream of consciousness ;... "thinking about my mortality" and the LA Times echoing ... that :)

    1. Joris also cautions that we not limit Celan’s poetry to a “revenge play,” but I don’t consider it a limitation. Like the artwork of witness, the artwork of vengeance could be its own genre

      I think the discussion of the "vengeful" vibe in Celan's poetry is a great inspiration for writers, or just people eager to express themselves. It's OK to refuse forgiveness, and it's OK to keep the negative feelings, because they are powerful and valuable in recording reality, and may evoke reflection and improvement in the future.

    1. I'm not just talking about journaling. I'm talking about writing down the first few thoughts you have after you've arrived at work but before you've started on the day's tasks. Draw a picture or doodle an idea. It's a way to figure out what is important, and what is stressing you out. It is a record of your preparation and a way to help you look back and see, for these seven minutes, what was really important. Make sure you don't get too focused on the writing and not enough on the thinking.

      I love this practice and may have been doing it wrong. Plus which, I no longer GO to the office (unless walking down the stairs counts). But mostly, I've made it too routine. More doodling is in order!

    1. i must really like this ad. why are you seeing this ad?

      "our society has a level of censorship related to violence and torture that is untenable--community standards need to change ... there's "a reason" caustic visions of holodecks rouse strange nightmares ..."

      in short: "heaven without safety is the problem. talking openly will get us to a safer future faster."

      Adam Marshall Dobrin

      Shared with Public

      i must really like this ad. why are you seeing this ad?

      commenting a little about data retention; something that's ... "become a big deal for me" in the face of the mortality of my soul and things like my "facebook account data"--which continues to be under significant scrutiny because of "at least one" of you. i say that because every time i post anything "slightly questionable" as being posted--on a public building wall or something--it is immediately [immediately] flagged and i get a warning. i don't really have "facebook jail issues" though at the moment i'm on a restriction that is something like a "shadow ban"--it throttles wall posts. that's annoying to me, i don't like it; and frankly don't agree at all with the community standards related to things like "bad words and images"--

      swastikas and nigeria, schwarzenegger an zellweger ... those are things I think you should have to be able to gaze at and not be offended.

      i've put my entire facebook profile on IPFS to preserve "the silence" and what [specifically not the silence itself but the evidence of it in our history] it looks like from my perspective; where it has taken my mother's life and destroyed my relationship with most of my family. it has manifested as my entire graduating class apparently-ish moving to LA and not using facebook "all at one time" sometime around 10 years ago.

      it's hurt america, and american media more than i can imagine.

      i hope it comes to an end; which is something akin to hoping my mortality disappears, as well as yours--and that's a big part of "the religion of azrael" that i haven't really discussed much--but since i'm "close to bored to death" i'm going to "just do it." i fear greatly that we have ... "come full circle" and from another "just like this time" returned here, immortal in another place--and "not liking it that much" ... as in I believe strongly in "right to death" and I think it's a "big deal" and underlying thread in all of religion, but specifically in the death of JC and "Jewish Koshruit Law" ... and its tie [shevirat hakelim, ksamim sharvit (like a magic wand smashing the vessel)] ... to American Law related to psychiatric ... "fighting sanity and me ... and our right to sleep."

      i'm not happy with the state of my personal quality of life or the american state of freedom--which is to say there's probably nowhere on earth "better"--though maybe i should see the world from the perspective of http://sydney.co.nz before "deciding that having never left north america.

      commenting, http://Pinata.cloud is still around--though they are now charging more in a strange manner and that's a "bad sign" for the immortality of http://lamc.la [ https://bafybeidpq3cmso7h7wpz4i6fwbwhupx3hr542tdgevwn3ali... look it #almostsaysalive ] though the Filecoin implementation on http://Estuary.tech has given me hope that I might actually succeed in building something immortal ... even if it's just my Facebook profile and website.

      Since I already "don't have much privacy" #sotospeak it frustrates me to no end that someone on my friends list goes out of their way to try and shut down what today is probably the best proof anywhere of what "the sacred silence" [ of the sound of, simon, garfunkel, and system of a down ] is actually about: the literal darkness of Egypt and Beth-El; the "house of god" ... #SoToSpeak I am going to try to work on my NZB/NNTP storage system for permanent immutability [ data immortality ] and link it to an IPFS Filecoin node I'm trying to install now. I have some "big ideas" and I really hope to be able to see one or two of them through to

      .. "existence."

    1. Octavia Butler’s 1993 Parable of the Sower. The story follows a teenage girl seeking freedom from her deteriorating community in a future destabilized by climate change. Part of the reason it’s held up so well is that so many of Butler’s predictions have come true. But she wasn’t a fortune teller, she just did her homework.

      .

    1. This mode of communication be just as frequently used by politicians and profes-sors as it be by journalists and advertisers.

      Young's using ethos here, lending credibility to the idea that code meshing has always been a Thing, and should be explored further. If it's good enough for politicians and professors, it's good enough to teach students!

    1. Comprehension Surveys (10%)---these are quizzes. They will be routine and check for understanding of basic concepts from nightly homeworks. Should we allow you to retake them? You're annotating this syllabus. You tell me. 

      To be completely honest, I have never liked comprehension quizzes. I find that these quizzes make me stress too much about learning every detail rather than understanding and trying to enjoy the actual reading. It's likely that the quizzes will motivate me to pay closer attention to detail; I just hope I don't get too discouraged if I don't remember something that I was supposed to.

  5. muse.jhu.edu
    1. I failed for a long time to see the underlying parallels between the sports and academic worlds, parallels that might have enabled me to cross more readily from one argument culture to the other.

      School often seems to teach us that intellect is solely academic. However, it's clear from this quote that it is possible to have intellect in both academics and non-academics. While academic intellect is important, it is also important to let people, especially younger people, explore and gain intellect in the non-academic subjects that they love, and to let them know that that intellect is just as valuable.


    1. The other notable thing about the UK is that they are mad for football over there, everyone is mad for English football. So there is a lot of "unpaid" streaming going on. And also many lawsuits, which means that UK ISPs now routinely get court orders that say, you must block the following illegal streaming sites. And it has now gone so far that internet providers actually preemptively hunt for these streamers. So before big matches, they look at what sites suddenly get a lot more traffic. It's probably no accident if just before the Arsenal match starts you suddenly have 500 megabits of internet going somewhere. And then they block these sites. So it is quite preemptive.

      Evil
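
      A toy sketch of the preemptive hunt described above (all hostnames, numbers, and thresholds are invented; real ISP tooling is obviously not public): flag hosts whose traffic spikes sharply just before kickoff, then feed the list into the blocking order.

      # Python; data and spike factor are hypothetical.
      baseline_mbps = {"cdn-a.example": 40.0, "streamer-x.example": 5.0}
      kickoff_mbps  = {"cdn-a.example": 55.0, "streamer-x.example": 520.0}

      SPIKE_FACTOR = 10  # "suddenly 500 megabits of internet going somewhere"

      suspects = [host for host, now in kickoff_mbps.items()
                  if now > SPIKE_FACTOR * baseline_mbps.get(host, 1.0)]
      print(suspects)  # ['streamer-x.example'] -> candidate for a blocking order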

    2. They have a sort of highly curated Facebook and they even have video conferencing solutions that they built themselves, which were used more during the Corona outbreak even. And they have made their own specifically modified Android tablets and they have a Linux operating system. So these people really run their own internet and it's probably not even correct to call it an internet. Of specific interest, their modified Android tablets are rumored to be fully controlled and set up to keep track of everything you do. And that's just terrible. And then I realized that's actually what actual Android tablets also do, but they do it on behalf of advertisers. And in this case, they do it on behalf of the North Korean government. They probably only had to change a few domain names in the source code.

      bleak

    1. first sentence: what does this metaphor of "rules and a game" bring to the table?

      question of fairness: are all games fair? amartya sen - the chance of where you are born

      game theory tries to model individual strategic behavior as if people act within a game: there are certain realities, but individuals still have room to act differently; they will try to find the behavioral activity that maximizes their return

      game theorists developed the idea of "cooperative games" that make all players better off than if they were to continue purely adversarial interaction
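
      A minimal prisoner's-dilemma sketch of that point (the payoffs are the standard textbook ones, not from North): defection dominates one-shot play, yet mutual cooperation pays more, which is why institutions that sustain "cooperative games" can make everyone better off.

      # Python; payoff values are the usual textbook numbers.
      PAYOFF = {  # (row move, col move) -> (row payoff, col payoff)
          ("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1),
      }
      print(PAYOFF[("D", "D")])  # (1, 1): the purely adversarial equilibrium
      print(PAYOFF[("C", "C")])  # (3, 3): the cooperative outcome both prefer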

      Rules: rules are out of the control of the participants, hence the "constraints that shape human interaction", but they are "humanly devised". if you decide to play, you have to accept the rules of the game. thinking in terms of "rules" leads us to understand/think about institutions as constraints

      "Conceptually, what must be clearly differentiated are the rules from the players" distinction between the players (organizations, e.g. corporations) and the rules (institutions)

      original institutional economics: institutions are not only constraints, they do more than that ... they plant ideas in the minds of people, they condition behavior, they open certain avenues of behavior

      what allows certain individuals in certain situations to manipulate rules of the game? what ensures social adherence to the rules?

      how do institutions change? through revolutionary means or through evolutionary means? which have better chance at social adherence? what's more effective?

      institutional change is non-teleological. we are not walking towards progress or some goal; it's just sequential cause and effect, it's just what happens. whether the change is for better or for worse is not for social scientists to say

      North has a clear normative scale of judgment: he uses economic criteria to judge institutional change. his normative scale comes from traditional economic theory. he's both a critic of standard neoclassical economic theory and also using the language of the theory - and arguing within the framework of the theory

      Veblen - very much in the darwinian tradition; tries to apply evolutionary theory to economics. evolutionary processes in economics, and in society in general, are non-teleological: just progressive adaptation to the environment

      north is trying to understand: how do institutions affect economic life? and how is it possible to change institutions?

      north: changes in rules are due to the actions of the people who play the game. evolution is associated with the game as players realize limitations, weak spots, etc. in the game

      in economics, rationality is just instrumental reasoning

      1) detection of deviant behavior 2) how do we punish deviant behavior

      problem of institutional design: if there is weak enforcement of punishment, people will break the rules

      formal institutions vs informal institutions

      informal institutions are more deep-seated and difficult to uproot, more deeply ingrained and more longstanding. people have psychologically internalized the rules and have come to see them as second nature

      these are the connection between past, present, and future: the things that tend to persist over time don't change radically, but gradually and at the margins

      in North's preface to the book: History matters. he tries to bring together 1) informal institutions: long validity that tends to persist over time, and 2) institutional change and evolution. it is impossible to understand the present without the institutional heritage of the past, and impossible to understand the future without understanding the present

      the focus of our analysis should be institutions; they are the connection between the past, present, and future

      "The major role of institutions in society is to reduce uncertainty by establishing a stable (but not necessarily efficient) structure to human interaction"

      institutions as a device for creating stability and reducing human uncertainty

      wesley clair mitchell - studied the science of business cycles

      institutions take some of the "randomness" out of human behavior and generate certain patterns of behavior in society. institutions modulate behavior, so we observe patterns in society

      the perpetuating effect of informal norms - the tendency to perpetuate themselves over time

      institutional constraints can be barriers to improvement, depending on your normative opinion

      can both preserve a good situation and prevent change away from a bad situation

      north's definition of institutions is itself institutionally constrained "that is a very nice meta argument :)"

    1. It's more than climate change; it's also extraordinary burdens of toxic chemistry, mining, depletion of lakes and rivers under and above ground, ecosystem simplification, vast genocides of people and other critters, etc, etc, in systemically linked patterns that threaten major system collapse after major system collapse after major system collapse.

      To think of it as just climate change simplifies the effects of it. The consequences are deadly and can lead to greater changes in our earth and way of life. Therefore, this sentence really puts it into perspective, showing us just how devastating it could be.

    2. It's more than climate change; it's also extraordinary burdens of toxic chemistry, mining, depletion of lakes and rivers under and above ground, ecosystem simplification, vast genocides of people and other critters, etc, etc, in systemically linked patterns that threaten major system collapse after major system collapse after major system collapse.

      They are trying to say it is more than just the simple things. There comes a point in time where everything fails: the environment, resources, etc. The "system" shuts down, and existence on earth forms a new normalcy until that one collapses.

    1. “If you want to minimize carbon dioxide in the atmosphere in 2070  you might want to accelerate the burning of coal in India today,”

      This is a really interesting thought; even in our own country, we rely on less 'green' solutions in order to keep the lights on long enough to draw up the solar panel blueprints. It's just a shame that under capitalism this ends up giving the fossil fuel industries a hell of a lot of power to lobby against that technology ever being implemented on a wide scale, even if it's totally ready to go.

    1. TikTok for news has increased fivefold among 18–24s across all markets over just three years, from 3% in 2020 to 15%

      This is a very interesting statistic. It evokes two things in me. The first is that TikTok seems like a platform that can have a wildfire of misinformation. I use TikTok and I follow some scientists, but when news information comes across my screen I always research it before choosing whether to believe it. I feel as though, based on what we have learned in this class, not many people do this. The second thing it evokes in me is curiosity as to why. What makes TikTok a platform for news: is it the interpersonal connection with content creators? TikTok generates this feeling of familiarity really easily with its users and creators, because TikTok can get very personal about a creator's life. Does this sense of familiarity drive up TikTok's statistics for news use?

    1. The pattern language is the instrument which makes it possible for members of the cluster to design their own houses, and for the builder to help them take their rudimentary sketches and make a building out of them. It is a system of instructions based on the most fundamental psychological necessities of buildings, which gives the individuals who use it unexpected creative power.

      The pattern language is quite beautiful - giving users the power of a high level description language to apply to design, allowing designers, architects and other employees to implement the interface specified by a client. It's really quite simple to identify emotional and functional need by matching to some of a finite set of patterns, and then simple for the architect to identify a pattern as a need in the home.

      This reminds me of home-made software and the imbalance of power that has been established between the user of the computer and the design of the interface; it's not at all true that people know what they want, or that one specific interface can be prescribed to fit all people. Rather, the user must have access to high level building blocks - or "patterns" - that they compose to make their computer a home.

      We spend just as much time with our computers - if not more - than we do with our homes now, and I posit that the interface of the software plays just as pivotal of a role in the experience that someone has navigating their home. How do we build software that allows users to choose high level patterns and adapt their systems when choosing the patterns?

      (An aside - it's insane to me that device manufacturers, especially new ones, are able to re-architect new physical interfaces for the devices but have to stick to the same software. This seems like the opposite of the dream of the computer; we're supposed to be able to run and change anything at any time, so why are we stuck using stock Android on every new device? It's so hard to build software that new innovations happen with hardware reinvention rather than software. This is insane.)
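
      Riffing on the question above, a very small sketch of user-composable patterns (all names are invented; this is a thought experiment, not Alexander's actual method):

      # Python; Pattern and the example patterns are hypothetical.
      from dataclasses import dataclass, field

      @dataclass
      class Pattern:
          name: str
          need: str                                  # the need the pattern answers
          children: list = field(default_factory=list)

          def compose(self, other: "Pattern") -> "Pattern":
              self.children.append(other)
              return self

      home = (Pattern("entrance transition", "arrival should feel gradual")
              .compose(Pattern("light on two sides", "rooms want balanced daylight"))
              .compose(Pattern("alcove", "small private space within shared space")))

      print([child.need for child in home.children])
      # The same shape could apply to software: a user picks high-level patterns
      # ("focus mode", "inbox as queue") and the system renders an interface
      # from the composition, rather than prescribing one fixed interface.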

    1. raison d'etre

      I'll just say that I wish academic authors made their writing a bit more accessible, because it's hard to understand the author's point when they use this kind of language; but I also understand that there is a certain standard of writing in academia that the author is subscribing to.

    1. If a site tries to charge me for work others do for free I block them. They're not paying these people to review, there's no standard of quality for these reviews. It's not something that should be charged for. Or maybe I just overdo things, I even refuse to use the self-checkouts at stores because there's someone they can pay for that and unless I get a discount for my work I'm not doing it. People keep allowing companies to get away with crap like that and now stores will have 1 employee and 20 self-checkout stations
    1. People of color need their own spaces. Black people need their own spaces. We need places in which we can gather and be free from the mainstream stereotypes and marginalization that permeate every other societal space we occupy. We need spaces where we can be our authentic selves without white people’s judgment and insecurity muzzling that expression. We need spaces where we can simply be—where we can get off the treadmill of making white people comfortable and finally realize just how tired we are.

      Immediately, this passage stuck out to me, as it is important that Black people have space to breathe. We face so much struggle trying to survive in our society that it's often too much for us to push to the back of our minds and continue about our day. We need a safe space to reflect and to expose any and every emotion that we have.

    1. “When we work with undergraduates on digital humanities projects,” Quinn said, “it's often easier to take a humanities undergrad and teach them just enough coding to do what they need to do rather than taking some of the CS majors who can do the coding in their sleep but don't really think about the questions in the nuanced ways that we need them to.”

      this is so friggin' true I could cry

    1. moment of self-realization as a child, when you suddenly realize that you are not the external world or your perceptions, but you are an individual entity perceiving this world, separate from others.

      As a child, I felt as though I was the center of the universe, and that everyone saw things the way I did, up to a certain point. In psychology, we learn that around the age of 3 children start to actually form memories. It's a pivotal point in life where they realize they are an individual entity and develop further on their own, which I feel this quote perfectly embodies, just in different words.

    1. Some called her a “martyr.”

      If that woman is a martyr, then who are the other 3 people that got killed? Or is it just that more people saw and got to experience her final moments over the others... which wouldn't make that much sense considering it's a raid.

    2. By day’s end, four people would be dead: one from gunfire and three from medical emergencies that officials have yet to explain.

      Ensuring that the deaths are not going unaccounted for. It's important to bring up that, with the whole ACAB phenomenon that's been going on, it's no wonder they made the woman who got shot by police part of the title when 3 other people died. It's not a competition. I can't tell if they're trying to stir up controversy or if they're simply adding relevance to their story. Either way, it makes the story more newsworthy.

    1. in which the scope of action allowed to the students extends only as far as receiving, filing, and storing the deposits.

      meaning there is more to education than just receiving information and getting tested on it. After a while students will forget some of that information, so it's important to look at the whole picture of what education really is

    1. People can hike the Gabrielino Trail in Angeles National Forest, just north of El Monte, and up Tongva Peak in the Verdugo Mountains, north of LA; go to the public Tongva Park in Santa Monica; go to the Tongva Memorial Garden at Loyola Marymount University; and see the San Gabriel Mountains on a daily basis.

      It's unsettling (and not surprising) to know that what was once the Tongva people's land has become nothing but a public park. Their land, people, and culture were stolen for mere public amusement, and the only thing given back to them was a simple memorial garden.

    1. New Orleans, obviously, was a major city in the Confederacy, and when Louisiana seceded, New Orleans did too. But it was retaken fairly early on in the war. When the Union army occupied New Orleans, the expectation among town people was, “Just wait until August.” Because then, all these unacclimated Union soldiers are gonna die, and they’ll see that yellow fever is on the side of the South. When all these boys die it will vindicate us and our system.Benjamin Butler, who was the occupying general, was really worried about disease, and like everyone in America he had heard the tales of New Orleans. He was acutely aware that most of the men in his army had never been this far south before, that they were decidedly unacclimated and vulnerable to this disease. So he installed a strict quarantine. He didn’t let ships come in or out without a thorough inspection. He doubled the salary of the quarantine officer, he cleaned the streets, he fired people who seemed to be the lackeys of the bureaucrats in New Orleans. He did a whole program of sanitation.And it worked. There were only a few cases of yellow fever reported during the war years, even though hundreds of thousands of people came in and out of the city each year. It’s actually a miraculous demonstration of just how effective martial law can be in stopping diseases, I guess, and how effective quarantine could be when properly instituted and rigorously upheld.But right after the war, when the Union army receded, the same people who had been in control of New Orleans before the war took up their former positions and went back to their old ways. The said, whatever happened during the war, let’s forget about that. Benjamin Butler was very much hated as a tyrant in New Orleans by whites, who associated the quarantine with him but not improved health. And right away health problems came back. There was a serious epidemic again in 1867, and periodic epidemics throughout the ‘60s and ‘70s, culminating in the epidemic of 1878, just after the end of Reconstruction. That was the worst in a generation; 5,000 or 10,000 people died in New Orleans alone, 25,000 people died across the Gulf South. It went all the way up to Memphis, which had never had yellow fever before.But once again, in the midst of this devastating epidemic, you had this prevailing attitude among the commercial-civic elite, who say “Quarantines just don’t work here.” They said this even though they had proof that it did work during the war. It was only when immigration dries up so precipitously after the war, and when cotton goes elsewhere, that New Orleans could sort of shake itself free from this attitude.
    1. In South Australia we've got the Hornsdale Power Reserve, which is a 100 megawatt capacity; this is the one that Elon Musk very famously put in. So this is what the European Union is now using as the standard to talk about: you know, it's been done in Australia, we can do it here. So in the global system we would need 15,635,478 such stations across the planet, in the power grid system, just for that four-week buffer. And that is actually about 30 times capacity compared to the entire global

      !- for : global capacity renewable energy storage - this is not realistic

    2. Let's put the electrical power systems together. These electrical power systems ... this is actually on the low side, because most industrial action happens with the consumption of coal and gas on site, and then it's converted to energy on site; this is just what's been drawn off the power grid. So there's a vast amount of energy associated with manufacturing that is not included here, and that is actually a huge piece of work to include. So these numbers I'm showing you are very much on the low side. So we're going to put it all together: we need 36,000 terawatt-hours, or thereabouts. That's a very low estimate.

      !- key insight : minimum power of energy transition, excluding the large amount of energy for industrial processes
      !- for : energy transition, degrowth, green growth
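
      A back-of-envelope check of the talk's arithmetic. My assumptions, not the talk's: a "four week buffer" means 4/52 of the stated 36,000 TWh/year, and stations are compared by storage capacity in MWh.

      # Python; only the two talk figures are inputs, the rest is derived.
      ANNUAL_DEMAND_TWH = 36_000        # talk's (self-described low) estimate
      BUFFER_FRACTION = 4 / 52          # four weeks out of the year
      STATIONS = 15_635_478             # talk's Hornsdale-equivalent count

      buffer_mwh = ANNUAL_DEMAND_TWH * BUFFER_FRACTION * 1e6   # TWh -> MWh
      implied_station_mwh = buffer_mwh / STATIONS

      print(f"buffer energy:        {buffer_mwh:.3e} MWh")          # ~2.77e9 MWh
      print(f"implied station size: {implied_station_mwh:.0f} MWh") # ~177 MWh
      # ~177 MWh per station sits between Hornsdale's original (~129 MWh)
      # and expanded (~194 MWh) capacity, so the count is roughly self-consistent.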

    1. ABSTRACT

      This work has been published in GigaByte Journal under a CC-BY 4.0 license (https://doi.org/10.46471/gigabyte.67), and has published the reviews under the same license. These are as follows.

      Reviewer 1. Alison Gould

      Is there sufficient data validation and statistical analyses of data quality? Not my area of expertise.

      Other comments:

      This was a very clear and well-written manuscript presenting a whole genome assembly for the giant trevally. This will serve as an important resource for future researchers interested in this and other closely related species of fish. I only have a few minor suggestions but overall found the paper to be of high quality. It would be helpful to include the estimated genome size and BUSCO score in the abstract. Include the species name on the X-axes of each column in Fig 5. Several of the tables (Table 2 and Table 6, for example) don't seem necessary in the main text, as they are not really discussed in the paper, and could be included as supporting material.

      Reviewer 2. Yue Song

      Are all data available and do they match the descriptions in the paper?

      Yes. The description of the data in the article is generally correct, but there are some inconsistencies. E.g., in line 186, the author used single-copy orthologs from the actinopterygii set of OrthoDB (v10) to assess assembly completeness, but used the vertebrata set for the comparison with other fish genomes. All the other species are also fish genomes, so why not use the same database (e.g. actinopterygii)?

      Are the data and metadata consistent with relevant minimum information or reporting standards?

      Yes, though it would be best to also provide relevant information about protein-coding genes, I think.

      Is there sufficient detail in the methods and data-processing steps to allow reproduction?

      No. (1) It would be better to provide detailed software parameters, and the description of how contigs were assembled into scaffolds is not clear enough. (2) The method used to identify single-copy orthologs is not clearly described.

      Is there sufficient data validation and statistical analyses of data quality?

      No. I don't think it's enough to rely only on single-copy orthologs and/or synteny blocks to assess genome quality; maybe it would be better to add some other checks, e.g. read mapping?

      Other comments:

      (1) In line 223, the authors only report how many scaffolds there are in the final assembly, but not how many chromosomes were assembled or what proportion of scaffolds or contigs were placed into chromosomes. This information is not found in the MS. Note that a genome for this genus is already published in NCBI, but only at the contig level; if the Hi-C data could provide a chromosome-level assembly, I think it would be more useful. (2) In line 252, I noticed there was no mention of gene sets, especially protein-coding genes; how many coding genes are there in this genome? (3) In figure 4, many cross-linking intensities are not obvious, which may be related to the sequencing depth of the Hi-C data; I can't figure out how many chromosomes are in the final assembly from this diagram. (4) A minor bug: in the figure captions, figure 6 refers to the ray-finned fish set, right? I think that is a mistake, because in line 186 the author mentioned the vertebrata set.

      Re-review: The authors have responded to the corresponding questions; I recommend accepting the manuscript.

    1. “Imagine a computer programmed to execute one function. This function cannot be paused, modified or erased. No new data can be stored. No new commands can be installed. This computer will perform that one function, over and over, until its power source eventually shuts down.”

      Just like the functions of zombies can't be paused, it feels like our daily tasks can't be paused. As much as it would probably be incredible for my mental health to delete my Gmail app and all of my social media, I would realistically miss out on a lot. It's just not realistic to be without our media.

    1. The consequence argument points out that deterministic laws imply that the future isn't really up for grabs; it's determined by the present state just as surely as the past is. So we don't really have choices about anything.

      Yup, that makes sense to me. I'm fine with that too.

      Still, however, everyone is ignoring the influence of learning on our future state.

    2. Of course, just because it can be compatible with the laws of nature, doesn't mean that the concept of free will actually is the best way to talk about emergent human behaviors.

      And that's the crux of the matter. Knowing that free will is only constructed, we can decide it would be best to not base certain decisions on its existence. For instance, how we deal with crime and punishment.

      Of course, if there's no free will, then there are some people who will never accept its non-existence.

    3. The concept of baseball is emergent rather than fundamental, but it's no less real for all of that. Likewise for free will. We can be perfectly orthodox materialists and yet believe in free will, if what we mean by that is that there is a level of description that is useful in certain contexts and that includes "autonomous agents with free will" as crucial ingredients.

      Again, the problem here is that we can define and characterize baseball such that we can unequivocally say that a given entity either is or is not "baseball".

      But we cannot do that for free will - because we cannot measure it.

      Carroll is also being quite utilitarian, which is fine. My idea is that considering the utility of a concept only matters for emergent properties because they are constructed and not fundamental. The fundamentals have no utility; they just are.

    1. Schema-building: What does “reading” mean to you? Who are you as a reader? How would you describe your reading process?

      Personally, reading means to take in information, data, or whatever the text is giving. Taking in information does not mean that you completely understand. As a reader, I read a lot of text, and much of the time I can not make sense of it. Sometimes it's the words that I do not understand; other times I understand the vocabulary but am just not able to form a full, meaningful understanding of what the text is trying to say or mean. I tend to understand and read better when I read quickly by myself than when reading out loud in a group. Also, when I do not understand, I usually reread the paragraph.

    1. But religious dogmatists’ problem is exactly the same as the story’s unbeliever: blind certainty, a close-mindedness that amounts to an imprisonment so total that the prisoner doesn’t even know he’s locked up.

      I like this statement because it says that you should question your religion, and not just go along with everything just because it's a part of your religion.

    2. Because it’s hard. It takes will and effort, and if you are like me, some days you won’t be able to do it, or you just flat out won’t want to.

      This part of the speech stood out to me because Wallace had just taken a lot of time explaining situations to us that require decisions. He pointed out that in many of these decisions we don't always make the best decision on how to act. However, in this part he points out how difficult things can be and admits that no one is always perfect in their choices of how to act.

    3. By way of example, let’s say it’s an average adult day, and you get up in the morning, go to your challenging, white-collar, college-graduate job, and you work hard for eight or ten hours, and at the end of the day you’re tired and somewhat stressed and all you want is to go home and have a good supper and maybe unwind for an hour, and then hit the sack early because, of course, you have to get up the next day and do it all again. But then you remember there’s no food at home. You haven’t had time to shop this week because of your challenging job, and so now after work you have to get in your car and drive to the supermarket. It’s the end of the work day and the traffic is apt to be: very bad. So getting to the store takes way longer than it should, and when you finally get there, the supermarket is very crowded, because of course it’s the time of day when all the other people with jobs also try to squeeze in some grocery shopping. And the store is hideously lit and infused with soul-killing muzak or corporate pop and it’s pretty much the last place you want to be but you can’t just get in and quickly out; you have to wander all over the huge, over-lit store’s confusing aisles to find the stuff you want and you have to manoeuvre your junky cart through all these other tired, hurried people with carts (et cetera, et cetera, cutting stuff out because this is a long ceremony) and eventually you get all your supper supplies, except now it turns out there aren’t enough check-out lanes open even though it’s the end-of-the-day rush. So the checkout line is incredibly long, which is stupid and infuriating. But you can’t take your frustration out on the frantic lady working the register, who is overworked at a job whose daily tedium and meaninglessness surpasses the imagination of any of us here at a prestigious college.

      I believe that this entire statement shows how people tend to let built-up anger and frustration bother them. They're all stuck in the same boring routine every day, and each day the little things come back and bother them to the point where they take it out on someone else.

    1. I think that’s the biggest thing that I take from this: any text should at least hint at the rich tapestry of things it is resulting from, if not directly discuss it or link to it. A tapestry not just made from other texts, but other actions taken (things created, data collected, tools made or adapted), and people (whose thoughts you build on, whose behaviour you observe and adopt, who you interact with outside of the given text). Whether it’s been GPT-3 generated or not, that holds.

      Useful, and likely human-written, texts show the richness of the context they result from, by showing and linking. Not just to/with 1) other texts, but also 2) other actions (things created, data gathered, experiments run, tools adapted) and 3) people (who provided input, whom you observe, whom you interact with outside the text). Even if such a text were GPT-3 generated, following up those leads should reveal its inauthenticity.

    1. Acid-Alkaline Breakfast Cereals List

      Early feedback from the spreadsheets has prompted me to upgrade them. Because the original format was great for identifying high acid load cereals. Then switching to high alkaline cereals. But the spreadsheet format allows us to do much more.

      So I've added columns that allow you to easily make better alkaline cereal choices. Significantly, I've added a column for PRAL values per 100 calories. And to emphasize the fact that PRAL values are average estimates, I've dropped the decimal points from that new column. Hopefully that will encourage you to look for changes that lower your PRAL score by at least 2 points per change. Remember, you must plan for some acid forming foods. Just ensure that your total daily PRAL score is negative.

      One benefit of my PRAL spreadsheets upgrade is that it's now easy to see the most acidic and the most alkaline cereals. So here are a couple of significant lists for you...

      Top 10 Acidic Breakfast Cereals

      These cereals are listed with the highest acid load first, where the first number in the list is the PRAL value per 100 calorie serving: [See Foodary Nexus Subscriber notes for pre-publication details]

      Next, a few more from the other end of the scale.

    1. inattention

      This is sort of an aside, but I'm intrigued by this way of talking about ADHD/autism. From an inside perspective on the disorder, I don't find that it's a deficit of attention, rather that it's a deficit of attention regulation. Or in other words, I'm always focused on something, it's just not necessarily what I'm supposed to focus on, and that looks like inattention from an observer's perspective.

    1. Nothing gets people’s attention like something startling. Surprise, a simple emotion, hijacks a person’s mind and body and focuses them on a source of possible danger (Simons, 1996). When there’s a loud, unexpected crash, people stop, freeze, and orient to the source of the noise. Their minds are wiped clean—after something startling, people usually can’t remember what they had been talking about—and attention is focused on what just happened. By focusing all the body’s resources on the unexpected event, surprise helps people respond quickly

      It's interesting to see that the emotion of surprise affects everyone the same no matter how composed, calm, or worried you are, because you lose all that feeling of readiness when it hits you. On the other hand, surprise can sometimes bring out one's best moments: as your whole body reacts and focuses on the surprise, your reactions and thinking can also be temporarily enhanced for that moment. I said in the last lecture that because we are different there are different results, but I think with surprise it is the background event and how unique it is that determines what kind of surprise the person may feel.

    1. Another thing to know about operant conditioning is that the response always requires choosing one behavior over others.

      I feel like this happens because we're thinking about how it'll benefit us in the end. It's always about the end goal. For example, they used the student who goes to the bar on Thursday versus the one who stays home to study. The one who stays to study knows he'll get a good grade, is probably close to graduating, and wants to just be done. But the other student is probably thinking that it's just one exam and that he'll have plenty of time to make up for it.

    1. quote by Cornel West: “Justice is what love looks like in public.”

      Cornel West, US philosopher / activist. https://en.wikipedia.org/wiki/Cornel_West Full quote: "Justice is what love looks like in public. Tenderness is what love looks like in private." Justice as an expression of love, making manifest that you include all within humanity. In some YT clips it seems it's also a call to introduce more tenderness into systems. Sounds like a [[Multidimensionaal gaan ipv platslaan 20200826121720]] variant, or even better a [[Macroscope 20090702120700]] in the sense of [[Macroscope for new civil society 20181105203829]] where just systems surround tender interactions.

    1. “The world cannot be half democratic and half autocratic. It must be all democratic or all Prussian. There can be no compromise,”

      This is basically stating that the US can't just have half of its citizens with the ability to vote while the other half can't. It is not fair to those who live in this 'free' country.

  6. Aug 2022
    1. To take a shot at mainstream American success, Hi-Chew’s makers did the usual stuff that consumer-products businesses do: They hired retail consultants, switched distributors, that kind of thing. But they also set their sights on a very important group: Major League Baseball players, the only people who routinely spend time chewing snacks in extreme close-up on TV. Morinaga supplied Japanese players in the league with Hi-Chew, Kawabe told me, focusing first on teams in markets where major retailers were headquartered. The gambit worked; ESPN reported on just how obsessed the 2015 Yankees squad was with the little fruit candies. Walgreens and CVS picked up the brand after it became popular with the Chicago Cubs and Boston Red Sox. Regular people tried the newly plentiful and suddenly trendy candy, and then insisted that their brother or spouse or co-workers try it. Hi-Chew’s U.S. sales grew from $8 million in 2012 to more than $100 million in 2021, according to Kawabe.

      I can believe this. It helps that it's a category-definer: gum-like-but-not-gum.

    1. At the time he was selling, Jay-Z was also coming up with rhymes. He normally wrote down his material in a green notebook he carried around with him — but he never took the notebook with him on the streets, he says. "I would run into the corner store, the bodega, and just grab a paper bag or buy juice — anything just to get a paper bag," he says. "And I'd write the words on the paper bag and stuff these ideas in my pocket until I got back. Then I would transfer them into the notebook. As I got further and further away from home and my notebook, I had to memorize these rhymes — longer and longer and longer. ... By the time I got to record my first album, I was 26, I didn't need pen or paper — my memory had been trained just to listen to a song, think of the words, and lay them to tape." Since his first album, he says, he's never written down any of his lyrics. "I've lost plenty of material," he says. "It's not the best way. I wouldn't advise it to anyone. I've lost a couple albums' worth of great material. ... Think about when you can't remember a word and it drives you crazy. So imagine forgetting an entire rhyme. 'What's that? I said I was the greatest something?' "

      In his youth, while selling drugs on the side, Jay-Z would write down material for lyrics in a green notebook. He never took the notebook with him on the streets; instead he would buy anything at a corner store just to get a paper bag to write on. He would write the words onto these paper bags and stuff them into his pockets (wearable Zettelkasten anyone? or maybe Zetteltasche?). When he got home, in long-standing waste book tradition, he would transfer the words to his notebook.

      Jay-Z has said he hasn't written down any lyrics since his first album, but warns, "I've lost plenty of material. It's not the best way. I wouldn't advise it to anyone. I've lost a couple albums' worth of great material."

      https://ondemand.npr.org/anon.npr-mp3/npr/fa/2010/11/20101116_fa_01.mp3

      Link to: https://hypothes.is/a/T3Z38uDUEeuFcPu2U_w_zA (Jonathan Edwards' zettelmantle)

    1. They thought that a reading class was focused on “just reading.”

      But it is more than reading; it’s knowing the background, the spelling, the pronunciation, and the meaning.

    1. Should I always create a Bib-note?

      reply to: https://www.reddit.com/r/antinet/comments/x2f4hn/should_i_always_create_a_bibnote/

      If you want to be lazy, you could just create the one card with the quote and full source and save yourself a full bibliographical note. But your future self will likely be pleasantly surprised if you do create a full bib note (filed separately), which allows for a greater level of future findability and potential serendipity. It may happen that you run across that possibly obscure author multiple times, and that may spur you to read other material by them or cross-reference other related authors. It's these small, but seemingly "useless", practices in the present that generate creativity and serendipity over longer periods of time and that really bring out the compounding value of a ZK.

      More and more I find that the randomly referenced and obscure writer or historical figure I noted weeks/months/years ago pops up and becomes a key player in research I'm doing now, someone I otherwise would have long forgotten and thus been unable to connect to or draw on for my current pursuits. These golden moments are too frequently not written about or properly highlighted in much of the literature about these practices.

      Naturally, however, everyone's practices may differ. You want to save the source at the very least, even if it's just on that slip with the quote. If you're pressed for time now, save the step and do it later when you install the card.

      Often is the time that I don't think of anything useful contemporaneously but then a week or two later I'll think of something relevant and go back and write another note or two, or I'll want to recommend it to someone and then at least it's findable to recommend.

      Frequently I find that the rule "If it's worth reading, then it's worth writing down the author, title, publisher and date at a minimum" saves me from reading a lot of useless material. Of course if you're researching and writing about the broader idea of "listicles" then perhaps you have other priorities?

    1. Not many teachers at Elsie Allen High School can connect with students in the same way. While 80 percent of students are Latino, just two of 56 teachers are — 3.5 percent. Nationally, a Washington Post analysis of school district data from 46 states and the District of Columbia finds that only one-tenth of 1 percent of Latino students attend a school system where the portion of Latino teachers equals or exceeds the percentage of Latino students.

      Sometimes I think this is just our school, but then I'm reminded by statistics like these that it is almost everywhere. When I was in high school, I was completely unaware of this reality because it wasn't my reality. I'm a white person who had all white teachers growing up. I had people who looked exactly like me, so it was easy to relate. It's important for students to have teachers who look like them. The reverse is also true: we need teachers who don't look like us too. I didn't have a non-white teacher until I was in college.

    2. , just two of 56 teachers are — 3.5 percent.

      I hate that this is the reality for so many schools. We really have to work on this. It's difficult to want better when you have few examples of people who look like you doing better.

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      In this paper, Staneva et al describe a novel complex found at RNA PolII promoters that they term the SPARC. The manuscript focuses on defining the core components of the complex, the pivotal role of SET27 in defining its function, and its role in PolII transcription. This manuscript is a logical follow-on from an initial paper (Staneva et al, 2021) by the same authors where they systematically analyzed chromatin factors and their roles in both transcription start and termination. What is also very clear is that this complex is one made of histone readers and writers, which suggests its function is to change the chromatin structure around PolII promoters. The authors show that this complex is necessary for the correct positioning of PolII and directionality of transcription.

      This was a well-designed study and a well-written, clear manuscript that provides fascinating insight into transcription control in bloodstream form parasites.

      I have no major comments only a few minor ones.

      1) Localisation of the different SPARC components appears to be either nuclear or nuclear and cytoplasmic. - Both SET27 and CRD1 show a nuclear and cytoplasmic localisation in the bloodstream form IFA (Supplementary Fig 1B), but only a nuclear localisation in the procyclic form.

      Did the authors attempt C-terminally tagging SET27 or CRD1 to see if this resulted in a change in the pattern?

      We have not tagged either protein at the C terminus, however SET27 (Tb927.9.13470) has been tagged both N- and C-terminally in procyclic form (PF) cells as part of the TrypTag project (http://tryptag.org). In both cases, SET27 localized to the nucleus, suggesting that the differences in localization we observe for SET27 depend on the life cycle stage, and not on the position of the tag. One caveat is that in the TrypTag project proteins are tagged with mNeonGreen whereas in our study proteins were tagged with YFP. Based on our images, CRD1 appears to be predominantly nuclear in both bloodstream form (BF) and PF parasites. CRD1 (Tb927.7.4540) has been tagged only N-terminally in PF cells as part of the TrypTag project where it has also been classified as mostly nuclear with only 10% of cells showing cytoplasmic localization for CRD1.

      We are well aware that tags can alter the behaviour of a protein. Absolute confirmation of location will require the generation of antibodies that detect untagged proteins. However, this is a longer-term undertaking. We have added the following statement to the Results section to address the point raised:

      “We tagged the proteins on their N termini to preserve 3′ UTR sequences involved in regulating mRNA stability (Clayton, 2019). We note, however, that the presence of the YFP tag and/or its position (N- or C-terminal) might affect protein expression and localization patterns”.

      • The point is made that JBP2 shows a 'distinct cytoplasmic localisation' in PF cells. By this logic, the SET27 localisation in BF is also distinctly cytoplasmic, and a nuclear enrichment is not clear.

      Indeed the reviewer is correct: we had inadvertently over-accentuated the significance of this difference in the text. We had emphasized the predominantly cytoplasmic localization of JBP2 in PF trypanosomes as potentially related to its weaker association with other (predominantly nuclear) SPARC components in the mass spectrometry experiments. The presence of SET27 in the nuclei of both BF and PF cells is confirmed by a positive ChIP signal. We have revised the manuscript text by changing “distinct cytoplasmic” to “predominantly cytoplasmic” to describe JBP2 localization in PF cells. We hope that this resolves the issue.

      • Why would the localisation pattern change between life cycle stages? Surely PolII transcription should remain the same?

      Although our analysis suggests that there may be some shift in SET27 and JBP2 localization between BF and PF stages, sufficient amounts of these proteins may be present in the nucleus for proper SPARC assembly and RNAPII transcription regulation in both life cycle forms. The proportion of SET27 and JBP2 proteins that localizes to the cytoplasm may have functions unrelated to transcription.

      2) Several of the images in Supplementary Fig 1B seem to show foci in the nucleus (CSD1, PWWP1, CRD1). Do you see foci throughout the cell cycle or just in G1/S phase cells as shown here?

      We have not systematically investigated protein localization at different cell cycle stages, so we do not have microscopy images for all proteins at all stages of the cell cycle. However, the images we did collect suggest the punctate pattern is preserved for CRD1 in the G2 phase in both BF and PF cells (see below) as we showed in Supplemental Figure S1B for cells with 1 kinetoplast and 1 nucleus (G1/S phase cells). The significance of these puncta remains to be determined.

      3) In Figure 6, what does 'TE' stand for?

      TE denotes transposable elements. We have added this to the figure legend.

      4) The authors show this interesting link between the SPARC complex and subtelomeric VSG gene silencing. -In the CRD1 ChIP or RPB1 ChIP, are there any other peaks in telomere-adjacent regions in the WT cells similar to that seen on chromosome 9A? And does the sequence at this point resemble a PolII promoter?

      Apart from peaks located on Chromosome 9_3A, there are other CRD1 and RPB1 ChIP peaks in chromosomal regions adjacent to telomeres in WT cells. We observed broadening of RPB1 distribution in these regions upon SET27 deletion, similar to what we show for Chromosome 9_3A. In particular, wider RPB1 distribution on Chromosome 8_5A coincides with upregulation of 10 VSG transcripts. These two loci explain most of the differentially expressed genes (DEGs) detected, but other subtelomeric regions show a similar pattern. We have added the following statement to the Results section to highlight that the phenotype shown for Chromosome 9_3A is not unique:

      “We also observed a similar phenotype at other subtelomeric regions, such as Chromosome 8_5A where 10 VSGs and a gene encoding a hypothetical protein were upregulated upon SET27 deletion (Supplemental Table S3)”.

      Cordon-Obras et al. (2022) have recently defined key sequence elements present at one RNAPII promoter. We searched for similar sequence motifs but failed to identify them as underlying CRD1 and RPB1 ChIP peaks, highlighting the likely sequence heterogeneity amongst trypanosome RNAPII promoters. To address this point, we have added the following sentence to the Discussion:

      “Sequence-specific elements have recently been found to drive RNAPII transcription from a T. brucei promoter (Cordon-Obras et al., 2022), however, we were unable to identify similar motifs underlying CRD1 or RPB1 ChIP-seq peaks, suggesting that T. brucei promoters are perhaps heterogeneous in composition”.

      -In the FLAG-CRD1 IP (Figure 3B), the VSGs seen here are not represented (as far as I can tell) in Figure 6B and C. If my reading is correct, could this be a difference in the FC cut-off for what is significant in these experiments?

      The VSGs detected in the FLAG-CRD1 IP from set27Δ/Δ cells are indeed different from the ones shown in Figure 6 (even after setting the same fold change cutoffs). We have highlighted this by adding the following statement to the Results section: “Gene ontology analysis of the upregulated mRNA set revealed strong enrichment for normally silent VSG genes (Figure 6B-D) which were distinct from the VSG proteins detected in the FLAG-CRD1 immunoprecipitations from set27Δ/Δ cells (Figure 3B)”.

      The VSGs in the mass spectrometry experiments likely represent unspecific interactors of FLAG-CRD1. To clarify this, we have added the following statement to the Results section: “Instead, several VSG proteins were detected as being associated with FLAG-CRD1 in set27Δ/Δ cells, though it is likely that these represent unspecific interactions”.

      Reviewer #1 (Significance (Required)):

      Trypanosomes are unusual in the way that they transcribe protein coding genes. Recent advances have defined the chromatin composition at the TSS and TTS, and the recent publication of a PolII promoter sequence(s) further adds to our understanding of how transcription here is regulated. Defining the SPARC complex now adds to this understanding and highlights the role of potential histone readers and writers. I think that this will be of interest to the kinetoplastid community, especially those working on control of gene expression.

      Our lab studies gene expression and antigenic variation in T. brucei.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)):

      In this manuscript, the authors identify a six-membered chromatin-associated protein complex termed SPARC that localizes to Transcription Start Regions (TSRs) and co-localizes with and (directly or indirectly) interacts with RNA polymerase II subunits. Careful deletion studies of one of its components, SET27, convincingly show the functional importance of this complex for the genomic localization, accuracy, and directionality of transcription initiation. Overall, the experiments are well and logically designed and executed, the results are well presented, and the manuscript is easy to read.

      There are a few minor points that would benefit from clarification and/or from a more detailed discussion:

      1) The concomitant expression of many VSGs (37) in a SET27 deletion strain is remarkable and has important implications for their normally monoallelic expression. It is well established that VSG expression in wild-type T. brucei can only occur from one of ~15 subtelomeric bloodstream expression sites, which include the ESAGs. This result implies that VSG genes are also transcribed from "archival VSG sites" in the genome, not only from expression sites. Are there VSGs from the silent BESs among the upregulated VSGs? Is there precedent in the literature for the expression of VSGs from chromosomal regions besides the subtelomeric expression sites?

      Our analysis of differentially expressed genes (DEGs) revealed that 43 VSG genes (37 of which are subtelomeric) and 2 ESAG genes are upregulated in the absence of SET27. Both ESAGs but none of the upregulated VSGs in set27Δ/Δ cells are annotated as located in BES regions. While it is possible that recombination events have resulted in gene rearrangements between the reference strain and our laboratory’s strain, at least some of the upregulated VSGs are likely to be transcribed from non-BES archival sites. VSG transcript upregulation from non-BES regions was also recently described by López-Escobar et al (2022).

      We note that the upregulated mRNAs in set27Δ/Δ are still relatively lowly expressed (Figure 6C). This is presumably insufficient to coat the surface of T. brucei, and expression from BES sites instead may be required to achieve this. We have revised the manuscript Discussion section to make these points more clear:

      “Bloodstream form trypanosomes normally express only a single VSG gene from 1 of ~15 telomere-adjacent bloodstream expression sites (BESs). In contrast, in set27Δ/Δ cells we detected upregulation of 43 VSG transcripts, none of which were annotated as located in BES regions. Recently, López-Escobar et al (2022) have also observed VSG mRNA upregulation from non-BES locations, suggesting that VSGs might sometimes be transcribed from other regions of the genome. However, the VSG transcripts we detect as upregulated in set27Δ/Δ were relatively lowly expressed (Figure 6C) and may not be translated to protein or be translated at low levels compared to a VSG transcribed from a BES site”.

      2) The role of SPARC in defining transcription initiation is compelling. It's less clear to the reviewer if the observed transcriptional silencing within subtelomeric regions can also be ascribed to SPARC. Have the authors considered the possibility that some components of the SPARC may be shared by other chromatin complexes, which could be responsible for the transcriptional activation of silent genes in SET27 deletion mutants?

      We cannot rule out indirect effects through the participation of some SPARC components in other complexes operating independently of SPARC. Indeed, the transcriptional defect within the main body of chromosomes appears to be somewhat different from that observed at subtelomeric regions, particularly with respect to distance from SPARC. We have added a statement in the Discussion section to highlight the possibility raised by the reviewer:

      “However, an alternative possibility is that transcriptional repression in subtelomeric regions is mediated by different protein complexes which share some of their subunits with SPARC, or whose activity is influenced by it”.

      3) The authors mention that the observed interaction of FLAG-CRD1 with VSGs in the immunoprecipitations (Fig. 3B) is evidence for the actual expression of normally silent VSGs on the protein level. This is true, but it should be spelled out that this interaction is nevertheless likely an artifact; at the least, the physiological relevance of these interactions is questionable.

      We agree that these are likely background associations and have added the following statement to the Results section to clarify this point:

      “Instead, several VSG proteins were detected as associated with FLAG-CRD1 in set27Δ/Δ cells, though it is likely that these represent unspecific interactions”.

      To avoid unnecessary confusion we have also removed the following sentence from the revised Discussion:

      “The interactions of FLAG-CRD1 with VSGs in the affinity selections from set27Δ/Δ cells indicate that some of the normally silent VSG genes are also translated into proteins in the absence of SET27”.

      4) "ophistokont" is misspelled in the introduction

      Thanks for noticing. We have corrected it to “Opisthokonta”.

      Reviewer #2 (Significance (Required)):

      The manuscript by Staneva et al. addresses the fundamental regulatory mechanism of gene transcription in the protozoan parasite Trypanosoma brucei, a highly divergent eukaryotic organism that is renowned for unusual features and mechanisms in gene regulation, metabolism, and other cellular processes. While post-transcriptional regulation is prevalent and relatively well established in T. brucei, much less is known about the mechanism of transcription initiation and transcriptional control, in part due to the general paucity of well-defined conventional promoter regions in this organism (only very few have been identified thus far). In this context, the work by Staneva et al. is highly significant and represents an important contribution to the field of gene regulation and chromatin biology in T. brucei and other related kinetoplastid parasites.

    1. Good research must begin with a good research question. Yet coming up with good research questions is something that novice researchers often find difficult and stressful. One reason is that this is a creative process that can appear mysterious—even magical—with experienced researchers seeming to pull interesting research questions out of thin air. However, psychological research on creativity has shown that it is neither as mysterious nor as magical as it appears. It is largely the product of ordinary thinking strategies and persistence (Weisberg, 1993)[1]

      LEARN: It's amazing how research all begins with just a simple question. But getting it right is not as simple! It's so easy for a question to be too general or too narrow. And when you have to re-evaluate how you are viewing things (your observation and question) to ensure it's right for an actual study, it seems daunting (to me). It's almost how I felt thinking about creating our own research study for this class. But as the last sentence notes, it's about strategizing and persistence, which I will keep in mind throughout the course for our project.

    1. Obnoxious.

      As someone recently pointed out on HN, it's very common nowadays to encounter the no-one-else-knows-what-they're-doing-here refrain as cover—I don't have to feel insecure about not understanding this because not only am I not alone, nobody else understands it either.

      Secondly, if your code is hard to understand regarding its use of this, then your code is hard to understand. this isn't super easy, but it's also not hard. Your code (or the code you're being made to wade into) probably just sucks. The this confusion is making you confront it, though, instead of letting it otherwise fly under the radar.* So fix it and stop going in for the low-effort, this-centric clapter.

      * Not claiming here that this is unique; there are allowed to be other things that work as the same sort of indicator.

    1. Folklore is informal, traditional culture. It’s all the cultural stuff—customs, stories, jokes, art—that we learn from each other, by word of mouth or observation, rather than through formal institutions like school or the media. Just as literature majors study novels and poems or art historians study classic works of art, folklorists focus on the informal and traditional stuff, like urban legends and latrinalia.

      Latrinalia - bathroom vandalism/graffiti

    1. Indie sites can’t compete with that. And what good is hosting and controlling your own content if no one else looks at it? I’m driven by self-satisfaction and a lifelong archivist mindset, but others may not be similarly inclined. The payoffs here aren’t obvious in the short-term, and that’s part of the problem. It will only be when Big Social makes some extremely unpopular decision or some other mass exodus occurs that people lament about having nowhere else to go, no other place to exist. IndieWeb is an interesting movement, but it’s hard to find mentions of it outside of hippie tech circles. I think even just the way their “Getting Started” page is presented is an enormous barrier. A layperson’s eyes will 100% glaze over before they need to scroll. There is a lot of weird jargon and in-joking. I don’t know how to fix that either. Even as someone with a reasonably technical background, there are a lot of components of IndieWeb that intimidate me. No matter the barriers we tear down, it will always be easier to just install some app made by a centralised platform.
    1. Scud. Thankye. I’m from fair to middlin’, like a bamboo cane, much the same all the year round. Zoe. No; like a sugar cane; so dry outside, one would never think there was so much sweetness within.

      I thought this was a very interesting analogy. As someone who has bamboo in their backyard, I know one thing about bamboo that some people don't: it's actually an invasive plant. If a stalk of it is planted, unless it's contained in a pot, it grows underground and spreads. After following the story told just a few lines ago, I sort of made the connection with Salem Scudder: while he may have been hired by the judge, some of his actions sort of "sprouted up" without any warning, like the sugar-mills.

    1. as most of us are lucky enough to be taught basic comprehension in elementary school and don’t give much thought to the act of reading after that, except as a chore for school that it usually becomes around junior high.

      I never really thought about it that way; to me, reading and writing were just something that was a must. It never occurred to me that it's a gift rather than something ubiquitous.

    1. When there are, say, a hundred cities, there are about 10^158 possible routes, and, although various shortcuts are possible, no known computer algorithm is fundamentally better than checking each route one by one. The traveling salesman problem belongs to a class called NP-complete, which includes hundreds of other problems of practical interest. (NP stands for the technical term ‘Nondeterministic Polynomial-Time.’) It’s known that if there’s an efficient algorithm for any NP-complete problem, then there are efficient algorithms for all of them. Here ‘efficient’ means using an amount of time proportional to at most the problem size raised to some fixed power—for example, the number of cities cubed. It’s conjectured, however, that no efficient algorithm for NP-complete problems exists. Proving this conjecture, called P ≠ NP, has been a great unsolved problem of computer science for thirty years.

      It's amazing that there are so many possible routes between a set of cities, so many that the human mind can't grasp the mountain of possibilities. How is this number really decided? It comes from counting orderings: with a hundred cities there are about 100! ≈ 10^158 ways to sequence the stops. So the set of routes is finite rather than infinite (the problem counts visiting orders, not every geometric path, so slightly different angles don't add new routes), but it is still far too large for anyone to ever check them all.
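      The brute-force search the quote describes is easy to sketch in Python for a handful of cities. The coordinates below are made up, and the final line shows why the same approach is hopeless for a hundred cities:

```python
from itertools import permutations
from math import dist, factorial

# Made-up coordinates for six "cities".
cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2), (4, 5)]

def tour_length(order):
    """Length of the closed tour visiting the cities in this order."""
    return sum(dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

# Fix city 0 as the start and check every ordering of the rest.
best = min(permutations(range(1, len(cities))),
           key=lambda rest: tour_length([0, *rest]))
print([0, *best], round(tour_length([0, *best]), 2))

# The same exhaustive search over 100 cities would mean ~100! routes:
print(f"{factorial(100):.2e}")  # 9.33e+157, i.e. about 10^158
```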

    2. Such cases, it seems to me, transcend arithmetical ignorance and represent a basic unwillingness to grapple with the immense

      I understand this sentiment, but it also feels like this entire reading might be contributing to the issue more than solving it. Isn't creating shorter ways of writing incomprehensibly massive numbers in itself just one kind of "unwillingness to grapple with the immense," by opting to abbreviate it instead? I can't really comprehend twenty million, but I can type it out in two neat little words. At least typing out all those zeros in 20,000,000 puts it a little into perspective, right? I think even the author recognizes this, since in the previous paragraph he writes out 20,000,000 in its entirety for the emphasis. I know it's not an option for how large most of the numbers in this article are, but that just makes me wonder why the examples in this paragraph are so much smaller than the caliber of number the rest of the article aims to talk about.
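      (The annotator's point about perspective can even be made mechanical. In Python, for instance, digit grouping keeps every zero in view, while scientific notation abstracts the magnitude away:)

```python
n = 20_000_000
print(f"{n:,}")    # 20,000,000 -- every zero in view
print(f"{n:.1e}")  # 2.0e+07 -- compact, but the magnitude is hidden
```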

    3. Who Can Name the Bigger Number? by Scott Aaronson

      In an old joke, two noblemen vie to name the bigger number. The first, after ruminating for hours, triumphantly announces "Eighty-three!" The second, mightily impressed, replies "You win."

      A biggest number contest is clearly pointless when the contestants take turns. But what if the contestants write down their numbers simultaneously, neither aware of the other’s? To introduce a talk on "Big Numbers," I invite two audience volunteers to try exactly this. I tell them the rules: You have fifteen seconds. Using standard math notation, English words, or both, name a single whole number—not an infinity—on a blank index card. Be precise enough for any reasonable modern mathematician to determine exactly what number you’ve named, by consulting only your card and, if necessary, the published literature. So contestants can’t say "the number of sand grains in the Sahara," because sand drifts in and out of the Sahara regularly. Nor can they say "my opponent’s number plus one," or "the biggest number anyone’s ever thought of plus one"—again, these are ill-defined, given what our reasonable mathematician has available. Within the rules, the contestant who names the bigger number wins.

      Are you ready? Get set. Go.

      The contest’s results are never quite what I’d hope. Once, a seventh-grade boy filled his card with a string of successive 9’s. Like many other big-number tyros, he sought to maximize his number by stuffing a 9 into every place value. Had he chosen easy-to-write 1’s rather than curvaceous 9’s, his number could have been millions of times bigger. He still would have been decimated, though, by the girl he was up against, who wrote a string of 9’s followed by the superscript 999. Aha! An exponential: a number multiplied by itself 999 times. Noticing this innovation, I declared the girl’s victory without bothering to count the 9’s on the cards.

      And yet the girl’s number could have been much bigger still, had she stacked the mighty exponential more than once. Take 9^9^9, for example. This behemoth, equal to 9^387,420,489, has 369,693,100 digits. By comparison, the number of elementary particles in the observable universe has a meager 85 digits, give or take. Three 9’s, when stacked exponentially, already lift us incomprehensibly beyond all the matter we can observe—by a factor of about 10^369,693,015. And we’ve said nothing of 9^9^9^9 or 9^9^9^9^9.

      Place value, exponentials, stacked exponentials: each can express boundlessly big numbers, and in this sense they’re all equivalent. But the notational systems differ dramatically in the numbers they can express concisely. That’s what the fifteen-second time limit illustrates. It takes the same amount of time to write 9999, 9^999, and 9^9^9^9—yet the first number is quotidian, the second astronomical, and the third hyper-mega astronomical. The key to the biggest number contest is not swift penmanship, but rather a potent paradigm for concisely capturing the gargantuan.

      Such paradigms are historical rarities. We find a flurry in antiquity, another flurry in the twentieth century, and nothing much in between. But when a new way to express big numbers concisely does emerge, it’s often a byproduct of a major scientific revolution: systematized mathematics, formal logic, computer science. Revolutions this momentous, as any Kuhnian could tell you, only happen under the right social conditions. Thus is the story of big numbers a story of human progress.
      And herein lies a parallel with another mathematical story. In his remarkable and underappreciated book A History of π, Petr Beckmann argues that the ratio of circumference to diameter is "a quaint little mirror of the history of man." In the rare societies where science and reason found refuge—the early Athens of Anaxagoras and Hippias, the Alexandria of Eratosthenes and Euclid, the seventeenth-century England of Newton and Wallis—mathematicians made tremendous strides in calculating π. In Rome and medieval Europe, by contrast, knowledge of π stagnated. Crude approximations such as the Babylonians’ 25/8 held sway.

      This same pattern holds, I think, for big numbers. Curiosity and openness lead to fascination with big numbers, and to the buoyant view that no quantity, whether of the number of stars in the galaxy or the number of possible bridge hands, is too immense for the mind to enumerate. Conversely, ignorance and irrationality lead to fatalism concerning big numbers. Historian Ilan Vardi cites the ancient Greek term sand-hundred, colloquially meaning zillion; as well as a passage from Pindar’s Olympic Ode II asserting that "sand escapes counting."

      But sand doesn’t escape counting, as Archimedes recognized in the third century B.C. Here’s how he began The Sand-Reckoner, a sort of pop-science article addressed to the King of Syracuse: There are some ... who think that the number of the sand is infinite in multitude ... again there are some who, without regarding it as infinite, yet think that no number has been named which is great enough to exceed its multitude ... But I will try to show you [numbers that] exceed not only the number of the mass of sand equal in magnitude to the earth ... but also that of a mass equal in magnitude to the universe.

      This Archimedes proceeded to do, essentially by using the ancient Greek term myriad, meaning ten thousand, as a base for exponentials. Adopting a prescient cosmological model of Aristarchus, in which the "sphere of the fixed stars" is vastly greater than the sphere in which the Earth revolves around the sun, Archimedes obtained an upper bound of 10^63 on the number of sand grains needed to fill the universe. (Supposedly 10^63 is the biggest number with a lexicographically standard American name: vigintillion. But the staid vigintillion had better keep vigil lest it be encroached upon by the more whimsically-named googol, or 10^100, and googolplex, or 10^(10^100).)

      Vast though it was, of course, 10^63 wasn’t to be enshrined as the all-time biggest number. Six centuries later, Diophantus developed a simpler notation for exponentials, allowing him to surpass it. Then, in the Middle Ages, the rise of Arabic numerals and place value made it easy to stack exponentials higher still. But Archimedes’ paradigm for expressing big numbers wasn’t fundamentally surpassed until the twentieth century. And even today, exponentials dominate popular discussion of the immense.

      Consider, for example, the oft-repeated legend of the Grand Vizier in Persia who invented chess. The King, so the legend goes, was delighted with the new game, and invited the Vizier to name his own reward. The Vizier replied that, being a modest man, he desired only one grain of wheat on the first square of a chessboard, two grains on the second, four on the third, and so on, with twice as many grains on each square as on the last.
      The innumerate King agreed, not realizing that the total number of grains on all 64 squares would be 2^64 − 1, or about 18.4 quintillion—equivalent to the world’s present wheat production for 150 years.

      Fittingly, this same exponential growth is what makes chess itself so difficult. There are only about 35 legal choices for each chess move, but the choices multiply exponentially to yield something like 10^50 possible board positions—too many for even a computer to search exhaustively. That’s why it took until 1997 for a computer, Deep Blue, to defeat the human world chess champion. And in Go, which has a 19-by-19 board and over 10^150 possible positions, even an amateur human can still rout the world’s top-ranked computer programs.

      Exponential growth plagues computers in other guises as well. The traveling salesman problem asks for the shortest route connecting a set of cities, given the distances between each pair of cities. The rub is that the number of possible routes grows exponentially with the number of cities. When there are, say, a hundred cities, there are about 10^158 possible routes, and, although various shortcuts are possible, no known computer algorithm is fundamentally better than checking each route one by one. The traveling salesman problem belongs to a class called NP-complete, which includes hundreds of other problems of practical interest. (NP stands for the technical term ‘Nondeterministic Polynomial-Time.’) It’s known that if there’s an efficient algorithm for any NP-complete problem, then there are efficient algorithms for all of them. Here ‘efficient’ means using an amount of time proportional to at most the problem size raised to some fixed power—for example, the number of cities cubed. It’s conjectured, however, that no efficient algorithm for NP-complete problems exists. Proving this conjecture, called P ≠ NP, has been a great unsolved problem of computer science for thirty years.

      Although computers will probably never solve NP-complete problems efficiently, there’s more hope for another grail of computer science: replicating human intelligence. The human brain has roughly a hundred billion neurons linked by a hundred trillion synapses. And though the function of an individual neuron is only partially understood, it’s thought that each neuron fires electrical impulses according to relatively simple rules up to a thousand times each second. So what we have is a highly interconnected computer capable of maybe 10^14 operations per second; by comparison, the world’s fastest parallel supercomputer, the 9200-Pentium Pro teraflops machine at Sandia National Labs, can perform 10^12 operations per second. Contrary to popular belief, gray mush is not only hard-wired for intelligence: it surpasses silicon even in raw computational power. But this is unlikely to remain true for long. The reason is Moore’s Law, which, in its 1990’s formulation, states that the amount of information storable on a silicon chip grows exponentially, doubling roughly once every two years. Moore’s Law will eventually play out, as microchip components reach the atomic scale and conventional lithography falters. But radical new technologies, such as optical computers, DNA computers, or even quantum computers, could conceivably usurp silicon’s place. Exponential growth in computing power can’t continue forever, but it may continue long enough for computers—at least in processing power—to surpass human brains. To prognosticators of artificial intelligence, Moore’s Law is a glorious herald of exponential growth.
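      (A quick sanity check of the wheat legend, in Python: summing the doubling squares gives exactly 2^64 − 1 grains.)

```python
# One grain on the first square, doubling across all 64 squares.
grains = sum(2**square for square in range(64))  # 1 + 2 + 4 + ... + 2**63
assert grains == 2**64 - 1
print(f"{grains:,}")  # 18,446,744,073,709,551,615 -- about 18.4 quintillion
```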
      But exponentials have a drearier side as well. The human population recently passed six billion and is doubling about once every forty years. At this exponential rate, if an average person weighs seventy kilograms, then by the year 3750 the entire Earth will be composed of human flesh. But before you invest in deodorant, realize that the population will stop increasing long before this—either because of famine, epidemic disease, global warming, mass species extinctions, unbreathable air, or, entering the speculative realm, birth control. It’s not hard to fathom why physicist Albert Bartlett asserted "the greatest shortcoming of the human race" to be "our inability to understand the exponential function." Or why Carl Sagan advised us to "never underestimate an exponential." In his book Billions & Billions, Sagan gave some other depressing consequences of exponential growth. At an inflation rate of five percent a year, a dollar is worth only thirty-seven cents after twenty years. If a uranium nucleus emits two neutrons, both of which collide with other uranium nuclei, causing them to emit two neutrons, and so forth—well, did I mention nuclear holocaust as a possible end to population growth?

      Exponentials are familiar, relevant, intimately connected to the physical world and to human hopes and fears. Using the notational systems I’ll discuss next, we can concisely name numbers that make exponentials picayune by comparison, that subjectively speaking exceed 9^9^9 as much as the latter exceeds 9. But these new systems may seem more abstruse than exponentials. In his essay "On Number Numbness," Douglas Hofstadter leads his readers to the precipice of these systems, but then avers: If we were to continue our discussion just one zillisecond longer, we would find ourselves smack-dab in the middle of the theory of recursive functions and algorithmic complexity, and that would be too abstract. So let’s drop the topic right here.

      But to drop the topic is to forfeit, not only the biggest number contest, but any hope of understanding how stronger paradigms lead to vaster numbers. And so we arrive in the early twentieth century, when a school of mathematicians called the formalists sought to place all of mathematics on a rigorous axiomatic basis. A key question for the formalists was what the word ‘computable’ means. That is, how do we tell whether a sequence of numbers can be listed by a definite, mechanical procedure? Some mathematicians thought that ‘computable’ coincided with a technical notion called ‘primitive recursive.’ But in 1928 Wilhelm Ackermann disproved them by constructing a sequence of numbers that’s clearly computable, yet grows too quickly to be primitive recursive.

      Ackermann’s idea was to create an endless procession of arithmetic operations, each more powerful than the last. First comes addition. Second comes multiplication, which we can think of as repeated addition: for example, 5×3 means 5 added to itself 3 times, or 5+5+5 = 15. Third comes exponentiation, which we can think of as repeated multiplication. Fourth comes ... what? Well, we have to invent a weird new operation, for repeated exponentiation. The mathematician Rudy Rucker calls it ‘tetration.’ For example, ‘5 tetrated to the 3’ means 5 raised to its own power 3 times, or 5^5^5, a number with 2,185 digits. We can go on. Fifth comes repeated tetration: shall we call it ‘pentation’? Sixth comes repeated pentation: ‘hexation’?
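      (The ladder of operations just described is easy to mimic in Python. The sketch below illustrates the hyperoperation idea only; Ackermann's actual historical definition differs in detail.)

```python
def hyper(level, a, n):
    """Level 1 is addition; each higher level applies the level below
    n times: 2 is multiplication, 3 exponentiation, 4 tetration,
    5 pentation, and so on."""
    if level == 1:
        return a + n
    result = a
    for _ in range(n - 1):
        result = hyper(level - 1, a, result)
    return result

# The "sampler pack": one number of each flavor.
print(hyper(1, 1, 1))  # 1 + 1 = 2
print(hyper(2, 2, 2))  # 2 x 2 = 4
print(hyper(3, 3, 3))  # 3^3 = 27
# hyper(4, 4, 4), 4 tetrated to the 4, already has ~10^154 digits.
```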
      The operations continue infinitely, with each one standing on its predecessor to peer even higher into the firmament of big numbers. If each operation were a candy flavor, then the Ackermann sequence would be the sampler pack, mixing one number of each flavor. First in the sequence is 1+1, or (don’t hold your breath) 2. Second is 2×2, or 4. Third is 3 raised to the 3rd power, or 27. Hey, these numbers aren’t so big! Fee. Fi. Fo. Fum. Fourth is 4 tetrated to the 4, or 4^4^4^4, which has 10^154 digits. If you’re planning to write this number out, better start now. Fifth is 5 pentated to the 5, or 5^5^…^5 with ‘5 pentated to the 4’ numerals in the stack. This number is too colossal to describe in any ordinary terms. And the numbers just get bigger from there.

      Wielding the Ackermann sequence, we can clobber unschooled opponents in the biggest-number contest. But we need to be careful, since there are several definitions of the Ackermann sequence, not all identical. Under the fifteen-second time limit, here’s what I might write to avoid ambiguity: A(111)—Ackermann seq—A(1)=1+1, A(2)=2×2, A(3)=3^3, etc

      Recondite as it seems, the Ackermann sequence does have some applications. A problem in an area called Ramsey theory asks for the minimum dimension of a hypercube satisfying a certain property. The true dimension is thought to be 6, but the lowest dimension anyone’s been able to prove is so huge that it can only be expressed using the same ‘weird arithmetic’ that underlies the Ackermann sequence. Indeed, the Guinness Book of World Records once listed this dimension as the biggest number ever used in a mathematical proof. (Another contender for the title once was Skewes’ number, about 10^10^10^34, which arises in the study of how prime numbers are distributed. The famous mathematician G. H. Hardy quipped that Skewes’ was "the largest number which has ever served any definite purpose in mathematics.") What’s more, Ackermann’s briskly-rising cavalcade performs an occasional cameo in computer science. For example, in the analysis of a data structure called ‘Union-Find,’ a term gets multiplied by the inverse of the Ackermann sequence—meaning, for each whole number X, the first number N such that the Nth Ackermann number is bigger than X. The inverse grows as slowly as Ackermann’s original sequence grows quickly; for all practical purposes, the inverse is at most 4.

      Ackermann numbers are pretty big, but they’re not yet big enough. The quest for still bigger numbers takes us back to the formalists. After Ackermann demonstrated that ‘primitive recursive’ isn’t what we mean by ‘computable,’ the question still stood: what do we mean by ‘computable’? In 1936, Alonzo Church and Alan Turing independently answered this question. While Church answered using a logical formalism called the lambda calculus, Turing answered using an idealized computing machine—the Turing machine—that, in essence, is equivalent to every Compaq, Dell, Macintosh, and Cray in the modern world. Turing’s paper describing his machine, "On Computable Numbers," is rightly celebrated as the founding document of computer science.

      "Computing," said Turing, is normally done by writing certain symbols on paper. We may suppose this paper to be divided into squares like a child’s arithmetic book. In elementary arithmetic the 2-dimensional character of the paper is sometimes used. But such use is always avoidable, and I think it will be agreed that the two-dimensional character of paper is no essential of computation.
      I assume then that the computation is carried out on one-dimensional paper, on a tape divided into squares.

      Turing continued to explicate his machine using ingenious reasoning from first principles. The tape, said Turing, extends infinitely in both directions, since a theoretical machine ought not be constrained by physical limits on resources. Furthermore, there’s a symbol written on each square of the tape, like the ‘1’s and ‘0’s in a modern computer’s memory. But how are the symbols manipulated? Well, there’s a ‘tape head’ moving back and forth along the tape, examining one square at a time, writing and erasing symbols according to definite rules. The rules are the tape head’s program: change them, and you change what the tape head does.

      Turing’s august insight was that we can program the tape head to carry out any computation. Turing machines can add, multiply, extract cube roots, sort, search, spell-check, parse, play Tic-Tac-Toe, list the Ackermann sequence. If we represented keyboard input, monitor output, and so forth as symbols on the tape, we could even run Windows on a Turing machine.

      But there’s a problem. Set a tape head loose on a sequence of symbols, and it might stop eventually, or it might run forever—like the fabled programmer who gets stuck in the shower because the instructions on the shampoo bottle read "lather, rinse, repeat." If the machine’s going to run forever, it’d be nice to know this in advance, so that we don’t spend an eternity waiting for it to finish. But how can we determine, in a finite amount of time, whether something will go on endlessly? If you bet a friend that your watch will never stop ticking, when could you declare victory? But maybe there’s some ingenious program that can examine other programs and tell us, infallibly, whether they’ll ever stop running. We just haven’t thought of it yet.

      Nope. Turing proved that this problem, called the Halting Problem, is unsolvable by Turing machines. The proof is a beautiful example of self-reference. It formalizes an old argument about why you can never have perfect introspection: because if you could, then you could determine what you were going to do ten seconds from now, and then do something else. Turing imagined that there was a special machine that could solve the Halting Problem. Then he showed how we could have this machine analyze itself, in such a way that it has to halt if it runs forever, and run forever if it halts. Like a hound that finally catches its tail and devours itself, the mythical machine vanishes in a fury of contradiction. (That’s the sort of thing you don’t say in a research paper.)

      "Very nice," you say (or perhaps you say, "not nice at all"). "But what does all this have to do with big numbers?" Aha! The connection wasn’t published until May of 1962. Then, in the Bell System Technical Journal, nestled between pragmatically-minded papers on "Multiport Structures" and "Waveguide Pressure Seals," appeared the modestly titled "On Non-Computable Functions" by Tibor Rado. In this paper, Rado introduced the biggest numbers anyone had ever imagined.

      His idea was simple. Just as we can classify words by how many letters they contain, we can classify Turing machines by how many rules they have in the tape head. Some machines have only one rule, others have two rules, still others have three rules, and so on. But for each fixed whole number N, just as there are only finitely many distinct words with N letters, so too are there only finitely many distinct machines with N rules.
      Among these machines, some halt and others run forever when started on a blank tape. Of the ones that halt, asked Rado, what’s the maximum number of steps that any machine takes before it halts? (Actually, Rado asked mainly about the maximum number of symbols any machine can write on the tape before halting. But the maximum number of steps, which Rado called S(n), has the same basic properties and is easier to reason about.)

      Rado called this maximum the Nth "Busy Beaver" number. (Ah yes, the early 1960’s were a more innocent age.) He visualized each Turing machine as a beaver bustling busily along the tape, writing and erasing symbols. The challenge, then, is to find the busiest beaver with exactly N rules, albeit not an infinitely busy one. We can interpret this challenge as one of finding the "most complicated" computer program N bits long: the one that does the most amount of stuff, but not an infinite amount.

      Now, suppose we knew the Nth Busy Beaver number, which we’ll call BB(N). Then we could decide whether any Turing machine with N rules halts on a blank tape. We’d just have to run the machine: if it halts, fine; but if it doesn’t halt within BB(N) steps, then we know it never will halt, since BB(N) is the maximum number of steps it could make before halting. Similarly, if you knew that all mortals died before age 200, then if Sally lived to be 200, you could conclude that Sally was immortal. So no Turing machine can list the Busy Beaver numbers—for if it could, it could solve the Halting Problem, which we already know is impossible.

      But here’s a curious fact. Suppose we could name a number greater than the Nth Busy Beaver number BB(N). Call this number D for dam, since like a beaver dam, it’s a roof for the Busy Beaver below. With D in hand, computing BB(N) itself becomes easy: we just need to simulate all the Turing machines with N rules. The ones that haven’t halted within D steps—the ones that bash through the dam’s roof—never will halt. So we can list exactly which machines halt, and among these, the maximum number of steps that any machine takes before it halts is BB(N).

      Conclusion? The sequence of Busy Beaver numbers, BB(1), BB(2), and so on, grows faster than any computable sequence. Faster than exponentials, stacked exponentials, the Ackermann sequence, you name it. Because if a Turing machine could compute a sequence that grows faster than Busy Beaver, then it could use that sequence to obtain the D’s—the beaver dams. And with those D’s, it could list the Busy Beaver numbers, which (sound familiar?) we already know is impossible. The Busy Beaver sequence is non-computable, solely because it grows stupendously fast—too fast for any computer to keep up with it, even in principle.

      This means that no computer program could list all the Busy Beavers one by one. It doesn’t mean that specific Busy Beavers need remain eternally unknowable. And in fact, pinning them down has been a computer science pastime ever since Rado published his article. It’s easy to verify that BB(1), the first Busy Beaver number, is 1. That’s because if a one-rule Turing machine doesn’t halt after the very first step, it’ll just keep moving along the tape endlessly. There’s no room for any more complex behavior. With two rules we can do more, and a little grunt work will ascertain that BB(2) is 6. Six steps. What about the third Busy Beaver? In 1965 Rado, together with Shen Lin, proved that BB(3) is 21.
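      (To make the definitions concrete, here is a toy simulator in Python. The machine encoding is my own; the two-state, two-symbol machine below is the standard champion from the literature, and it halts after exactly BB(2) = 6 steps.)

```python
def run(machine, max_steps):
    """Run a two-symbol Turing machine on an all-zero tape.
    machine maps (state, symbol) -> (write, move, next_state).
    Returns the halting step count, or None if still running."""
    tape, pos, state = {}, 0, "A"
    for step in range(1, max_steps + 1):
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
        if state == "HALT":
            return step
    return None  # no verdict: it may halt later, or never

# The two-state Busy Beaver champion.
bb2 = {
    ("A", 0): (1, "R", "B"),
    ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"),
    ("B", 1): (1, "R", "HALT"),
}
print(run(bb2, 100))  # 6
```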
The task was an arduous one, requiring human analysis of many machines to prove that they don't halt—since, remember, there's no algorithm for listing the Busy Beaver numbers. Next, in 1983, Allan Brady proved that BB(4) is 107. Unimpressed so far? Well, as with the Ackermann sequence, don't be fooled by the first few numbers.

In 1984, A.K. Dewdney devoted a Scientific American column to Busy Beavers, which inspired amateur mathematician George Uhing to build a special-purpose device for simulating Turing machines. The device, which cost Uhing less than $100, found a five-rule machine that runs for 2,133,492 steps before halting—establishing that BB(5) must be at least as high. Then, in 1989, Heiner Marxen and Jürgen Buntrock discovered that BB(5) is at least 47,176,870. To this day, BB(5) hasn't been pinned down precisely, and it could turn out to be much higher still. As for BB(6), Marxen and Buntrock set another record in 1997 by proving that it's at least 8,690,333,381,690,951.

A formidable accomplishment, yet Marxen, Buntrock, and the other Busy Beaver hunters are merely wading along the shores of the unknowable. Humanity may never know the value of BB(6) for certain, let alone that of BB(7) or any higher number in the sequence. Indeed, already the top five- and six-rule contenders elude us: we can't explain how they 'work' in human terms. If creativity imbues their design, it's not because humans put it there.

One way to understand this is that even small Turing machines can encode profound mathematical problems. Take Goldbach's conjecture, that every even number 4 or higher is a sum of two prime numbers: 10=7+3, 18=13+5. The conjecture has resisted proof since 1742. Yet we could design a Turing machine with, oh, let's say 100 rules, that tests each even number to see whether it's a sum of two primes, and halts when and if it finds a counterexample to the conjecture. Then, knowing BB(100), we could in principle run this machine for BB(100) steps, decide whether it halts, and thereby resolve Goldbach's conjecture. We need not venture far in the sequence to enter the lair of basilisks.

But as Rado stressed, even if we can't list the Busy Beaver numbers, they're perfectly well-defined mathematically. If you ever challenge a friend to the biggest number contest, I suggest you write something like this:

BB(11111), the 11,111th Busy Beaver shift number (the sequence begins 1, 6, 21, etc.)

If your friend doesn't know about Turing machines or anything similar, but only about, say, Ackermann numbers, then you'll win the contest. You'll still win even if you grant your friend a handicap, and allow him the entire lifetime of the universe to write his number. The key to the biggest number contest is a potent paradigm, and Turing's theory of computation is potent indeed.

But what if your friend knows about Turing machines as well? Is there a notational system for big numbers more powerful than even Busy Beavers? Suppose we could endow a Turing machine with a magical ability to solve the Halting Problem. What would we get? We'd get a 'super Turing machine': one with abilities beyond those of any ordinary machine. But now, how hard is it to decide whether a super machine halts? Hmm. It turns out that not even super machines can solve this 'super Halting Problem', for the same reason that ordinary machines can't solve the ordinary Halting Problem.
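That "same reason" is the diagonal argument, which can be sketched in a few lines of Python; this is an illustration of the idea, not a formal proof. If a halting decider existed, at any level of the hierarchy, we could build a program that does the opposite of whatever the decider predicts:

```python
# The diagonal argument, sketched. Suppose halts(f) could infallibly
# report whether calling f() would ever terminate.
def defeat(halts):
    def spite():
        if halts(spite):   # told we halt? then loop forever...
            while True:
                pass
        # ...told we loop forever? then return immediately.
    return spite

# For any claimed decider h -- ordinary, super, or super duper --
# h(defeat(h)) cannot be correct: spite does the opposite of the verdict.
```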
To solve the Halting Problem for super machines, we'd need an even more powerful machine: a 'super duper machine.' And to solve the Halting Problem for super duper machines, we'd need a 'super duper pooper machine.' And so on endlessly. This infinite hierarchy of ever more powerful machines was formalized by the logician Stephen Kleene in 1943 (although he didn't use the term 'super duper pooper').

Imagine a novel, which is embedded in a longer novel, which itself is embedded in an even longer novel, and so on ad infinitum. Within each novel, the characters can debate the literary merits of any of the sub-novels. But, by analogy with classes of machines that can't analyze themselves, the characters can never critique the novel that they themselves are in. (This, I think, jibes with our ordinary experience of novels.)

To fully understand some reality, we need to go outside of that reality. This is the essence of Kleene's hierarchy: that to solve the Halting Problem for some class of machines, we need a yet more powerful class of machines. And there's no escape. Suppose a Turing machine had a magical ability to solve the Halting Problem, and the super Halting Problem, and the super duper Halting Problem, and the super duper pooper Halting Problem, and so on endlessly. Surely this would be the Queen of Turing machines? Not quite. As soon as we want to decide whether a 'Queen of Turing machines' halts, we need a still more powerful machine: an 'Empress of Turing machines.' And Kleene's hierarchy continues.

But how's this relevant to big numbers? Well, each level of Kleene's hierarchy generates a faster-growing Busy Beaver sequence than do all the previous levels. Indeed, each level's sequence grows so rapidly that it can only be computed by a higher level. For example, define BB2(N) to be the maximum number of steps a super machine with N rules can make before halting. If this super Busy Beaver sequence were computable by super machines, then those machines could solve the super Halting Problem, which we know is impossible. So the super Busy Beaver numbers grow too rapidly to be computed, even if we could compute the ordinary Busy Beaver numbers.

You might think that now, in the biggest-number contest, you could obliterate even an opponent who uses the Busy Beaver sequence by writing something like this: BB2(11111). But not quite. The problem is that I've never seen these "higher-level Busy Beavers" defined anywhere, probably because, to people who know computability theory, they're a fairly obvious extension of the ordinary Busy Beaver numbers. So our reasonable modern mathematician wouldn't know what number you were naming. If you want to use higher-level Busy Beavers in the biggest number contest, here's what I suggest. First, publish a paper formalizing the concept in some obscure, low-prestige journal. Then, during the contest, cite the paper on your index card.

To exceed higher-level Busy Beavers, we'd presumably need some new computational model surpassing even Turing machines. I can't imagine what such a model would look like. Yet somehow I doubt that the story of notational systems for big numbers is over. Perhaps someday humans will be able concisely to name numbers that make Busy Beaver 100 seem as puerile and amusingly small as our nobleman's eighty-three. Or if we'll never name such numbers, perhaps other civilizations will. Is a biggest number contest afoot throughout the galaxy?
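To get a feel for what the Busy Beavers outrun, here is a sketch (invented for illustration, not from the essay) of the hyperoperation ladder: level 1 is addition, level 2 multiplication, level 3 exponentiation, level 4 tetration, level 5 pentation. Every level is computable, and BB(N) eventually dwarfs them all.

```python
# Hyperoperations: hyper(3, a, n) = a**n, hyper(4, a, n) is tetration, etc.
def hyper(k, a, n):
    if k == 1:
        return a + n
    if n == 0:
        return 0 if k == 2 else 1
    return hyper(k - 1, a, hyper(k, a, n - 1))

print(hyper(3, 2, 4))  # 2**4 = 16
print(hyper(4, 2, 3))  # 2**2**2 = 16; but hyper(4, 2, 5) is a 19,729-digit
                       # number, and this naive recursion won't survive it
```

Ackermann's function climbs this ladder diagonally, and the Busy Beaver sequence leaves even that climb behind.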
You might wonder why we can't transcend the whole parade of paradigms, and name numbers by a system that encompasses and surpasses them all. Suppose you wrote the following in the biggest number contest:

The biggest whole number nameable with 1,000 characters of English text

Surely this number exists. Using 1,000 characters, we can name only finitely many numbers, and among these numbers there has to be a biggest. And yet we've made no reference to how the number's named. The English text could invoke Ackermann numbers, or Busy Beavers, or higher-level Busy Beavers, or even some yet more sweeping concept that nobody's thought of yet. So unless our opponent uses the same ploy, we've got him licked. What a brilliant idea! Why didn't we think of this earlier?

Unfortunately it doesn't work. We might as well have written

One plus the biggest whole number nameable with 1,000 characters of English text

This number takes at least 1,001 characters to name. Yet we've just named it with only 80 characters! Like a snake that swallows itself whole, our colossal number dissolves in a tumult of contradiction. What gives?

The paradox I've just described was first published by Bertrand Russell, who attributed it to a librarian named G. G. Berry. The Berry Paradox arises not from mathematics, but from the ambiguity inherent in the English language. There's no surefire way to convert an English phrase into the number it names (or to decide whether it names a number at all), which is why I invoked a "reasonable modern mathematician" in the rules for the biggest number contest. To circumvent the Berry Paradox, we need to name numbers using a precise, mathematical notational system, such as Turing machines—which is exactly the idea behind the Busy Beaver sequence. So in short, there's no wily language trick by which to surpass Archimedes, Ackermann, Turing, and Rado, no royal road to big numbers.

You might also wonder why we can't use infinity in the contest. The answer is, for the same reason why we can't use a rocket car in a bike race. Infinity is fascinating and elegant, but it's not a whole number. Nor can we 'subtract from infinity' to yield a whole number. Infinity minus 17 is still infinity, whereas infinity minus infinity is undefined: it could be 0, 38, or even infinity again. Actually I should speak of infinities, plural. For in the late nineteenth century, Georg Cantor proved that there are different levels of infinity: for example, the infinity of points on a line is greater than the infinity of whole numbers. What's more, just as there's no biggest number, so too is there no biggest infinity. But the quest for big infinities is more abstruse than the quest for big numbers. And it involves, not a succession of paradigms, but essentially one: Cantor's.

So here we are, at the frontier of big number knowledge. As Euclid's disciple supposedly asked, "what is the use of all this?" We've seen that progress in notational systems for big numbers mirrors progress in broader realms: mathematics, logic, computer science. And yet, though a mirror reflects reality, it doesn't necessarily influence it. Even within mathematics, big numbers are often considered trivialities, their study an idle amusement with no broader implications. I want to argue a contrary view: that understanding big numbers is a key to understanding the world.

Imagine trying to explain the Turing machine to Archimedes.
The genius of Syracuse listens patiently as you discuss the papyrus tape extending infinitely in both directions, the time steps, states, input and output sequences. At last he explodes. "Foolishness!" he declares (or the ancient Greek equivalent). "All you've given me is an elaborate definition, with no value outside of itself."

How do you respond? Archimedes has never heard of computers, those cantankerous devices that, twenty-three centuries from his time, will transact the world's affairs. So you can't claim practical application. Nor can you appeal to Hilbert and the formalist program, since Archimedes hasn't heard of those either. But then it hits you: the Busy Beaver sequence. You define the sequence for Archimedes, convince him that BB(1000) is more than his 10^63 grains of sand filling the universe, more even than 10^63 raised to its own power 10^63 times. You defy him to name a bigger number without invoking Turing machines or some equivalent. And as he ponders this challenge, the power of the Turing machine concept dawns on him. Though his intuition may never apprehend the Busy Beaver numbers, his reason compels him to acknowledge their immensity. Big numbers have a way of imbuing abstract notions with reality.

Indeed, one could define science as reason's attempt to compensate for our inability to perceive big numbers. If we could run at 280,000,000 meters per second, there'd be no need for a special theory of relativity: it'd be obvious to everyone that the faster we go, the heavier and squatter we get, and the faster time elapses in the rest of the world. If we could live for 70,000,000 years, there'd be no theory of evolution, and certainly no creationism: we could watch speciation and adaptation with our eyes, instead of painstakingly reconstructing events from fossils and DNA. If we could bake bread at 20,000,000 kelvins, nuclear fusion would be not the esoteric domain of physicists but ordinary household knowledge. But we can't do any of these things, and so we have science, to deduce about the gargantuan what we, with our infinitesimal faculties, will never sense. If people fear big numbers, is it any wonder that they fear science as well and turn for solace to the comforting smallness of mysticism?

But do people fear big numbers? Certainly they do. I've met people who don't know the difference between a million and a billion, and don't care. We play a lottery with 'six ways to win!', overlooking the twenty million ways to lose. We yawn at six billion tons of carbon dioxide released into the atmosphere each year, and speak of 'sustainable development' in the jaws of exponential growth. Such cases, it seems to me, transcend arithmetical ignorance and represent a basic unwillingness to grapple with the immense.

Whence the cowering before big numbers, then? Does it have a biological origin? In 1999, a group led by neuropsychologist Stanislas Dehaene reported evidence in Science that two separate brain systems contribute to mathematical thinking. The group trained Russian-English bilinguals to solve a set of problems, including two-digit addition, base-eight addition, cube roots, and logarithms. Some subjects were trained in Russian, others in English. When the subjects were then asked to solve problems approximately—to choose the closer of two estimates—they performed equally well in both languages. But when asked to solve problems exactly, they performed better in the language of their training.
What's more, brain-imaging evidence showed that the subjects' parietal lobes, involved in spatial reasoning, were more active during approximation problems, while the left inferior frontal lobes, involved in verbal reasoning, were more active during exact calculation problems. Studies of patients with brain lesions paint the same picture: those with parietal lesions sometimes can't decide whether 9 is closer to 10 or to 5, but remember the multiplication table; whereas those with left-hemispheric lesions sometimes can't decide whether 2+2 is 3 or 4, but know that the answer is closer to 3 than to 9.

Dehaene et al. conjecture that humans represent numbers in two ways. For approximate reckoning we use a 'mental number line,' which evolved long ago and which we likely share with other animals. But for exact computation we use numerical symbols, which evolved recently and which, being language-dependent, are unique to humans. This hypothesis neatly explains the experiment's findings: the reason subjects performed better in the language of their training for exact computation but not for approximation problems is that the former call upon the verbally-oriented left inferior frontal lobes, and the latter upon the spatially-oriented parietal lobes.

If Dehaene et al.'s hypothesis is correct, then which representation do we use for big numbers? Surely the symbolic one—for nobody's mental number line could be long enough to contain 5 pentated to the 5, or BB(1000). And here, I suspect, is the problem. When thinking about 3, 4, or 7, we're guided by our spatial intuition, honed over millions of years of perceiving 3 gazelles, 4 mates, 7 members of a hostile clan. But when thinking about BB(1000), we have only language, that evolutionary neophyte, to rely upon. The usual neural pathways for representing numbers lead to dead ends. And this, perhaps, is why people are afraid of big numbers.

Could early intervention mitigate our big number phobia? What if second-grade math teachers took an hour-long hiatus from stultifying busywork to ask their students, "How do you name really, really big numbers?" And then told them about exponentials and stacked exponentials, tetration and the Ackermann sequence, maybe even Busy Beavers: a cornucopia of numbers vaster than any they'd ever conceived, and ideas stretching the bounds of their imaginations.

Who can name the bigger number? Whoever has the deeper paradigm. Are you ready? Get set. Go.

References

Petr Beckmann, A History of Pi, Golem Press, 1971.
Allan H. Brady, "The Determination of the Value of Rado's Noncomputable Function Sigma(k) for Four-State Turing Machines," Mathematics of Computation, vol. 40, no. 162, April 1983, pp. 647–665.
Gregory J. Chaitin, "The Berry Paradox," Complexity, vol. 1, no. 1, 1995, pp. 26–30. At http://www.umcs.maine.edu/~chaitin/unm2.html.
A.K. Dewdney, The New Turing Omnibus: 66 Excursions in Computer Science, W.H. Freeman, 1993.
S. Dehaene, E. Spelke, P. Pinel, R. Stanescu, and S. Tsivkin, "Sources of Mathematical Thinking: Behavioral and Brain-Imaging Evidence," Science, vol. 284, no. 5416, May 7, 1999, pp. 970–974.
Douglas Hofstadter, Metamagical Themas: Questing for the Essence of Mind and Pattern, Basic Books, 1985. Chapter 6, "On Number Numbness," pp. 115–135.
Robert Kanigel, The Man Who Knew Infinity: A Life of the Genius Ramanujan, Washington Square Press, 1991.
Stephen C. Kleene, "Recursive predicates and quantifiers," Transactions of the American Mathematical Society, vol. 53, 1943, pp. 41–74.
Donald E. Knuth, Selected Papers on Computer Science, CSLI Publications, 1996. Chapter 2, "Mathematics and Computer Science: Coping with Finiteness," pp. 31–57.
Dexter C. Kozen, Automata and Computability, Springer-Verlag, 1997.
———, The Design and Analysis of Algorithms, Springer-Verlag, 1991.
Shen Lin and Tibor Rado, "Computer studies of Turing machine problems," Journal of the Association for Computing Machinery, vol. 12, no. 2, April 1965, pp. 196–212.
Heiner Marxen, Busy Beaver, at http://www.drb.insel.de/~heiner/BB/.
——— and Jürgen Buntrock, "Attacking the Busy Beaver 5," Bulletin of the European Association for Theoretical Computer Science, no. 40, February 1990, pp. 247–251.
Tibor Rado, "On Non-Computable Functions," Bell System Technical Journal, vol. XLI, no. 2, May 1962, pp. 877–884.
Rudy Rucker, Infinity and the Mind, Princeton University Press, 1995.
Carl Sagan, Billions & Billions, Random House, 1997.
Michael Somos, "Busy Beaver Turing Machine," at http://grail.cba.csuohio.edu/~somos/bb.html.
Alan Turing, "On computable numbers, with an application to the Entscheidungsproblem," Proceedings of the London Mathematical Society, Series 2, vol. 42, 1936, pp. 230–265. Reprinted in Martin Davis (ed.), The Undecidable, Raven, 1965.
Ilan Vardi, "Archimedes, the Sand Reckoner," at http://www.ihes.fr/~ilan/sand_reckoner.ps.
Eric W. Weisstein, CRC Concise Encyclopedia of Mathematics, CRC Press, 1999. Entry on "Large Number" at http://www.treasure-troves.com/math/LargeNumber.html.

      Why do we even care about big numbers? Is there any use?

    1. The real lesson inherent in the death of Lance Crosby, and in the equally regrettable death of the bear that killed him, is a reminder of something too easily forgotten: Yellowstone is a wild place, constrained imperfectly within human-imposed limits. It’s a wild place that we have embraced, surrounded, riddled with roads and hotels and souvenir shops, but not tamed, not conquered—a place we treasure because it still represents wildness. It’s filled with wonders of nature—fierce animals, deep canyons, scalding waters—that are magnificent to behold but fretful to engage. Most of us, when we visit Yellowstone, see it as if through a Plexiglas window. We gaze from our cars at a roadside bear, we stand at an overlook above a great river, we stroll boardwalks amid the geyser basins, experiencing the park as a diorama. We remain safe and dry. Our shoes don’t get muddy with sulfurous gunk. But the Plexiglas window doesn’t exist, and the diorama is real. It’s painted in blood—the blood of many wild creatures, dying violently in the natural course of relations with one another, predator and prey, and occasionally also the blood of humans. Walk just 200 yards off the road into a forested gully or a sagebrush flat, and you had better be carrying, as Lance Crosby wasn’t, a canister of bear spray. Your park entrance receipt won’t protect you. You can be killed and eaten. But if you are, despite the fact that you have freely made your own choices, there may be retribution.

      Because the park isn't fully tamed, there are wild animals waiting just 200 yards off the road. Therefore you should be cautious, since the park entrance ticket won't guarantee protection.

    1. I think we can define an "archival virtual machine" specification that is efficient enough to be usable but simple enough that it never needs to be updated and is easy to implement on any platform; then we can compile our explorable explanations into binaries for that machine. Thenceforth we only need to write new implementations of the archival virtual machine platform as new platforms come along

      We have that. It's the Web platform. The hard part is getting people to admit this, and then getting them to actually stop acting counter to these interests. Sometimes that involves getting them to admit that their preferred software stack (and their devotion to it) is the problem, and it's not going to just fix itself.

      See also: Lorie and the UVC
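
      For a sense of scale, the core of such a machine can be tiny. Below is a hypothetical sketch in Python, in the spirit of Lorie's UVC; the opcodes and program format are invented for illustration and do not follow any actual specification.

```python
# A toy archival VM: a stack machine with four opcodes. The point is that
# the whole spec fits on a page, so reimplementing it on a future platform
# should be an afternoon's work. (Hypothetical; opcodes are invented.)
def run_avm(program):
    stack, pc = [], 0
    while pc < len(program):
        op, arg = program[pc]
        if op == 'PUSH':
            stack.append(arg)
        elif op == 'ADD':
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == 'PRINT':
            print(stack[-1])
        elif op == 'JNZ':          # jump to instruction `arg` if top != 0
            if stack.pop() != 0:
                pc = arg
                continue
        pc += 1
    return stack

run_avm([('PUSH', 2), ('PUSH', 3), ('ADD', None), ('PRINT', None)])  # -> 5
```

      A stack machine is used here only because it minimizes the spec surface; the archival argument applies just as well to any fixed, page-sized instruction set.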

    1. People are just over-indexing on syntactic transformation tools. When you think about manipulating data, it's easy to think about manipulating its syntax and its semantics as one, and in that way to miss the difference between the two. So it's a suite of tools for specifically doing the kind of semantic manipulation that is characteristic of integration projects for knowledge graphs. [transcript, ~00:47]

      !- claim : people just over-indexing on syntactic transformation tools
      - comment : get the syntax right and the semantics take care of itself
      - ref : Haugeland, Mind Design - if you get your syntax right, semantics can take care of itself
      - retort : get your concept for your intent right - software can take care of itself
      - symmathetic manipulation - combining symmathetic with semantic
      - effectiveness illusion of syntax being right
      - integration of knowledge graphs
      - for : HyperKnowledge

    1. Being present means more than just attending class; it means participating in and contributing to class.

      I think it's true that just attending class isn't enough. I feel like if you really wanna help yourself learn more, you have to give to the class and participate.

    1. The most important difference is that the government of South Korea (along with a few very large corporations) played a leading role in directing the process of development, explicitly promoting some industries, requiring firms to compete in foreign markets and also providing high quality education for its workforce.

      I think it's good that the government wasn't just pushing for the development of certain industries but was also educating its people to make the right economic calls, which benefits the country so much. It shows the government and economic leaders aren't butting heads but are working together to progress the country.

    1. have days

      it's so funny... my sister just started college in manhattan and says that she finds it so glamorous to have a scheduled day, like doctor appt at 9, coffee w/ friends at 10, and so on. and I'm always so confused... nyc glamour

    1. And I sit here in my straight leg jeans that prompted my partner to say that I look like "Diane Lane in an '80s movie" (compliment!) and/or that I'm about to go out and farm, I'm reminded of how uncomfortable I felt in skinny jeans for the first time — but also how outmoded my flared low-rise had come to feel. None of this jean discourse is really about fashion, or figuring out what you like. Same, much of the time, when it comes to other forms of bodily discipline, particularly with food and exercise. There's always a "choice" about what kind of maintenance you want to pursue, but it's a severely delimited one. So much of this maintenance is about not falling behind, particularly as a woman. To fall behind is not only to lose a grip on your class status, but your visibility and value within society at large. It's not just "middle class" a woman is communicating with "appropriate" clothes and body and grooming. It's vitality, participation, and gameness in a game in which you're always already losing.
    1. Proctorio has a strict data policy against such an act. The recordings we conduct are at the university's request and are shared only with the course's professor or other approved administrator. They cannot be downloaded and saved for unmonitored, offline viewing or forwarded to third parties. Other competitors of ours allow this, but it is a line in the sand we will not cross

      It's just incredibly weird to even think that a student is recorded at all during a test. Educators do not record students taking a test in the classroom; it's a "live feed," to put it in computer terms, of the proctor in the room.

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.

      Learn more at Review Commons


      Reply to the reviewers

      Reviewer #1 (Evidence, reproducibility and clarity (Required)):

      The manuscript by Neville et al. addresses the link between the localization and the activity of the so-called "Pins complex" or "LGN complex," which has been shown to regulate mitotic spindle orientation in most animal cell types and tissues. In most cell types, the polarized localization of the complex in the mitotic cell (which can vary between apical and basolateral, depending on the context) localizes pulling forces to dictate the orientation. The authors reexplore the notion that this polarized localization of the complex is sufficient to dictate spindle orientation, and propose that an additional step of "activation" of the complex is necessary to refine positioning of the spindle.

      The experiments are performed in the follicular epithelium (FE), an epithelial sheet of cells that surrounds the Drosophila developing oocyte and nurse cells in the ovary. Like in many other epithelia, cell divisions in the FE are planar (the cell divides in the plane of the epithelium). The authors first confirm that planar divisions in this epithelium depend on the function of Pins and its partner Mud, and that the interaction between the two partners is necessary, like in many other epithelial structures. Planar divisions are often associated with a lateral/basolateral "ring" of the Pins complex during mitosis. The authors show that in the FE, Pins is essentially apical in interphase and becomes enriched at the lateral cortex during mitosis; however, a significant apical component remains, whereas Mud is almost entirely absent from the apical cortex. Pins being "upstream" of Mud in the complex, this is a first hint that the localization of Pins is not sufficient to dictate the localization of Mud and of the pulling forces.

      The authors then replace wt Pins, whose cortical anchoring strongly relies on its interaction with Gai subunits, with a constitutively membrane-anchored version (via an N-terminal myristoylation). They show that the localization of myr-Pins mimics that of wt-Pins, with a lateral enrichment in mitosis and a significant apical component. Since a Myr-RFP alone shows a similar distribution, they conclude that the restricted localization of Pins in mitosis is a consequence of general membrane characteristics in mitosis, rather than the result of a dedicated mechanism of Pins subcellular restriction. Remarkably, Myr-Pins also rescues Pins loss-of-function spindle orientation defects.

      They further show that the cortical localization of Pins does not require its interaction with Dlg (unlike what has been suggested in other epithelia). However, spindle orientation requires Dlg, and in particular it requires the direct Dlg/Pins interaction. The activity of Dlg in the FE appears to be independent of Khc73 and Gukholder, two of its partners involved in its activity in microtubule capture and spindle orientation in other cell types. Based on all these observations, the authors propose that Dlg serves as an activator that controls Pins activity in a subregion of its localization domain (in this case, the lateral cortex of the mitotic FE cell). They propose to test this idea by relocalizing Pins to the apical cortex, using Inscuteable ectopic expression. With the tools that they use to drive Inscuteable expression, they obtain two populations of cells. One population has a stronger apical than basolateral Insc distribution, and the spindle is reoriented along the apical-basal axis; the other population has higher basolateral than apical levels of Insc distribution, and the spindle remains planar. The authors write that Pins localization is unchanged between the two subsets of cells (although I do not entirely agree with them on that point, see below), and that although Mud is modestly recruited to the apical cortex in the first population, it remains essentially basolateral in both. In this situation, the localization of Insc in the cell is therefore a better predictor of spindle orientation than that of Pins or Mud. Remarkably, removing Dlg in an Insc overexpression context leads to a dramatic shift towards apical-basal reorientation of the spindle, suggesting that loss of Dlg-dependent activation of the lateral Pins complex reveals an Insc-dependent apical activation of the complex. Overall, I find the demonstration convincing and the conclusion appropriate. One of the limitations of the study is the use of different drivers and reporters for the localization of Pins, which makes it hard to compare different situations, but not to the point that it would jeopardize the main conclusions. I do not have major remarks on the paper, only a few minor observations and suggestions of simple experiments that would complete the study.

      Minor:

      What happens to Pins and Mud in Dlg mutant cells that overexpress Insc and behave as InscA? Are they still essentially lateral, or are they more efficiently recruited to the apical cortex?

      This is a terrific question. Of course we would love to know and intend to find out.

      One way to do this (consistent with the manuscript) would be to generate flies that are Dlg[1P20], FRT19A/RFP-nls, hsflp, FRT19A; TJ-GAL4/+; Pins-Tom, GFP-Mud/UAS-Insc. (Note that these flies would only allow us to image Mud; we would have to repeat the experiment using GFP FRT19A; hsflp 38 to see Pins. This isn’t ideal given that we’d like to image both together). Generating these flies is a major technical challenge because of the number of transgenes and chromosomes involved.

      Our preferred way to do this would be to generate flies that are Dlg[1P20]/Dlg[2]; TJ-GAL4/+; Pins-Tom, GFP-Mud/UAS-Insc. So far, we've been unsuccessful. We are now undertaking a modified crossing scheme that we hope will solve the problem, though we aren't overly optimistic about the outcome. We find that the temperature-sensitive mutation Dlg[2] presents an activation barrier; while we are able to generate flies that are Dlg[2]/FM7 in combination with transgenes and/or mutations on other chromosomes, we do not always recover the Dlg[2]/Y males (which must develop at 18°C) from these complex genotypes.

      In the longer term (outside the scope of revision), we are working to develop more tools for imaging Mud and Pins that we hope will help answer this question.

      Regarding the competition between Pins and Insc for dictating the apical versus basolateral localization of Insc, the Insc-expression threshold model could be easily tested in pins[p62]/pins[p62] mutants, where it is expected that only InscA localization should be observed, even at 25°C (unless Pins is required for the cortical recruitment of Insc, as is the case in NBs; see Yu et al., 2000, for example).

      This is another great experiment and one we’d love to carry out. Again, the genetics are currently challenging, only because both UAS-Inscuteable and FRT82B pinsp62 are on the third chromosome. (Right now we’re trying to hop UAS-Inscuteable to the second).

      However, we do have another idea for testing the threshold model, which is to repeat the experiment in which we express UAS-Insc in cells that are Dlg[1P20]/Dlg[1P20] at 25°C. Because the relevant cells (UAS-Insc OX in Dlg mitotic clones) are relatively rare, we have not yet been able to collect enough examples to make a firm conclusion. However, our preliminary results (only six cells so far!) suggest that more InscB cells are observed at the lower temperature, consistent with the threshold model.

      I do not agree with the authors on p. 10 and Figure 6A-D, when they claim that the apical enrichment of Pins is equivalent in both InscA and InscB cells. The number of measured cells is very low, and the ratio of apical/lateral Pins differs between the two sets of cells. The number of cells should be increased and the ratios compared with a relevant statistical method.

      Totally fair. We are working to add more data to these panels (6B and 6D). The trend observed in 6D may be softening in agreement with the reviewer’s prediction, although we currently don’t yet have enough new data points to be confident in that conclusion. Therefore, we have not yet updated the manuscript, though we expect to do so during the revision period. We will also add a statistical comparison. Importantly, as the reviewer suggested, this does not alter our conclusions.

      A lot of the claims on Pins localization rely on overexpression (generally in a Pins null background) of tagged Pins expressed from different promoters or drivers, and fused to different fluorescent tags. Therefore, it is difficult to evaluate to which extent the localization reflects an endogenous expression level, and to compare the different situations. As the cortical localization of Pins relies on interaction with cortical partners (mostly GDP-bound Gai) which are themselves in limiting quantity in the cell, and in the case of Gai-GDP, regulated by Pins GDI activity, this poses a problem when comparing their distribution, because the expression level of Pins may contribute to its cortical/cytoplasmic ratio, but also to its lateral/apical distribution. Although I understand that the authors have been using tools that were already available for this study, I think it would be more convincing if all the Pins localization studies were performed with endogenously tagged Pins, even those with Myr localization sequences. In an age of CRISPR-Cas-dependent homologous recombination, I think the generation of such alleles should have been possible. Although this would probably not change the main claims of the paper, it would have made a more convincing case for the localization studies.

      We don’t disagree at all with this point. We did indeed try to stick with the published UAS-Pins-myr-GFP, not only for convenience but because it allows us to make comparisons to other studies using the same tool (Chanet et al Current Biology 2017 and Camuglia et al eLife 2022). Another consideration is that we used only one driver across our experiments (Traffic jam-GAL4). It is quite weak at the developmental stages that we examine, meaning that overexpression is not a major concern. (Indeed we have struggled with the opposite problem).

      We certainly take the reviewer’s comment seriously and we therefore described it in the manuscript. We are currently working to develop endogenous tools using CRISPR.

      Paragraph added to Discussion – Limitations of our Study:

      “Another technical consideration is that our work makes use of transgenes under the control of Traffic jam-GAL4. While this strategy allows us to compare our results with previous work employing the same or similar tools, a drawback is that we cannot guarantee that Traffic jam-GAL4 drives equivalent expression to the endogenous Pins promoter (Chanet et al., 2017, Camuglia et al., 2022). However, given that Traffic jam-GAL4 is fairly weak at the developmental stages examined, we are not especially concerned about overexpression effects.”

      The authors should indicate in the figure legends or in the methods that the spindle orientation measurements for controls and for pins[p62]/pins[p62] are reused between Figures 1, 3, 4, 5, and 6, and between Figures 3, 4, and 5, respectively.

      Absolutely. Added to the Methods section.

      Reviewer #1 (Significance (Required)):

      Altogether, this study makes a convincing case that the localization of the core members of the pulling force complex, Pins and Mud, is not entirely sufficient to localize active force generation, and that the complex must be activated locally, at least in the FE. The notion of activation of the Pins/LGN complex has probably been on many people's minds for years: Pins/LGN works as a closed/open switch depending on the number of Gai subunits it interacts with, it must be phosphorylated, etc., suggesting that not all cortical Pins/LGN was active and involved in force generation. However, the study presented here shows an interesting case where localization and activation are clearly disconnected. The authors show how Dlg plays this role in physiological conditions in the FE, and use ectopic expression of Insc to show that, at least in an artificial context, Insc can have the same "activating activity" (or at least an activating activity that is stronger than its apical recruitment capability and stronger than Dlg's activating activity). It is to my knowledge the first case of such a clear dissociation. In their discussion, the authors are careful not to generalize the observation to other tissues. Although I did not reexplore all that has been published on the Pins/LGN-NuMA/Mud complex over the last 20 years, my understanding is that despite interesting cases of distribution of the complex, like that of Mud at the tricellular junctions in the notum, the localization model can still explain most of the phenotypes that have been described without invoking an activation step. If that is the case, then the activation model is another variation (an interesting one!) on the regulation of the core machinery, variations which are plentiful as the authors indicate in their introduction, and is maybe specific to the FE; if not, then it would be interesting to push the discussion further by reexamining previous results in other systems, and pinpointing those phenotypes that could be better explained with an activation step.

      Overall, I find this is an elegant piece of work, which should be of interest to many cell and developmental biologists beyond the community of spindle orientation aficionados.

      Reviewer #2 (Evidence, reproducibility and clarity (Required)): Summary: The manuscript by Neville et al. addressed the mechanism by which conserved spindle regulators (Pins/Mud/Gai/Dynein) control spindle orientation in proliferating epithelia, revising "the canonical model" using the Drosophila follicular epithelium (FE). Utilizing mutant analyses, the authors examined the epistatic relationship among Pins, Mud and Dlg in the FE and found that Pins controls the cortical localization of Mud; using a newly generated knock-in allele, they suggest that the two localizations do not fully overlap. They also showed that Pins relocalization during mitosis depends on cortical remodeling, a passive model in which Pins localization changes along with other membrane-anchored proteins. Their data further suggest that Pins cortical localization is not influenced by Dlg, but that the Pins-interacting domain of Dlg does affect spindle orientation. Based on these results, the authors propose that Dlg controls spindle orientation not by redistributing Pins, but by promoting (or "activating", from their definition) Pins-dependent spindle orientation. Interestingly, ectopic expression of Inscuteable (Insc) suggested that Insc localization, either apical or lateral, correlates with spindle orientation, and that its localization is a dominant indicator of spindle orientation compared to the localization of Pins and Mud, implicating potentially distinct roles of activation and localization of the spindle complex. Overall their genetic experiments are well-designed and provide stimulus for future research. However, their evidence is suggestive, but not conclusive, for their proposal. I have several concerns about their conclusions and would like to request more detailed information as well as to propose additional experiments.

      Major concerns: 1. This report lacks technical and experimental details. As is typical for fly papers, the authors need to show the exact genotypes of the flies they used for experiments. This needs to be addressed for Figures 1-6 and the Supplemental Figures. Especially: which GAL4 drivers were used for the UAS-Pins wt or mutant constructs in Figure 4 in the pins mutant background, and in the Khc73 and GUKH mutant backgrounds? Which exact flies were used for the mutant clone experiments in Supplemental Figure 3 (A for typical mosaic, and B for MARCM)? Without these details, it is impossible to evaluate the results or for others to reproduce them.

      We take this concern very seriously!

      • We listed the GAL4 driver (Traffic jam-GAL4) in the first section of the Materials and Methods: Expression was driven by Traffic Jam-GAL4 (Olivieri et al., 2010). The transgene and relevant citation have been added to Table 1.
      • We explained the background stock for the MARCM experiment in the Materials and Methods: Mosaic Analysis with a Repressible Cell Marker (after the method of Lee and Luo) was carried out using GFP-mCD8 (under control of an actin promoter) as the marker. The transgene and relevant citation have been added to Table 1.
      • In line with other fly studies (eg. Nakajima et al., Nature 2013) and our own Drosophila work (Bergstralh et al Current Biology 2013, Bergstralh*, Lovegrove*, St Johnston NCB 2015, Bergstralh et al Development 2016, Finegan et al EMBO J 2019, Cammarota*, Finegan* et al Current Biology 2020) we were careful to show the relevant genotype components in each figure.
      • We included a fully referenced Supplementary Table (Table 1 – Drosophila genetics) listing every mutant allele or transgene with a citation and a note about availability. We have expanded this table in response to the author’s concern (see above).

      2. Related to comment 1, how did the authors perform "clonal expression of Ubi-Pins-YFP" on page 5? As far as I understand, Ubi-Pins-YFP is expressed ubiquitously by the ubiquitin promoter.

      The reviewer makes a good point. We regret that we did not make this experiment more clear. Ubi-Pins-YFP was recombined onto an FRT chromosome (FRT82B). We made mitotic clones.

      We have clarified this in the Methods section as follows:

      “Mitotic clones of Ubi-Pins-YFP were made by recombining the Ubi-Pins-YFP transgene onto the FRT82B chromosome”

      3. On page 6: if Pins relocalization is passive and is associated with membrane-anchored protein remodeling during mitosis, its relocalization should be suppressible by disrupting the process of mitotic remodeling (mitotic rounding). The authors should test this; either genetic disruption or pharmacological treatment of the actomyosin network should cause defects in Pins relocalization, which would bolster their conclusion.

      We agree that this is a cool experiment and are happy to give it another shot. However, we do note that interpretation could be difficult. We don't know that mitotic rounding and membrane-anchored protein remodeling during mitosis are inextricably linked. Notably, the remodeling we describe reflects cell polarity; apical components are evidently moved to the lateral cortex. This is contrary to the current understanding of rounding, which reflects isotropic actomyosin activity (Chanet et al., (2017) Curr. Biol. & Rosa et al., (2015) Dev. Cell). Therefore we don't understand what a "negative" result would mean, or for that matter whether a "positive" result would be safe to interpret.

      We have attempted many strategies to prevent cell rounding in the follicular epithelium, none of which have successfully prevented rounding. 1) We attempted to genetically knock down Moesin in the FE and did not see an effect on cell rounding. However, we couldn't confirm knockdown and therefore are not confident in this manipulation. 2) It is difficult to interpret the result of genetically disrupting Myosin, because it causes pleiotropic effects, such as inhibition of the cell cycle and disruption of monolayer architecture. 3) We treated egg chambers with Y-27632 (a Rok inhibitor) and examined its effect on mitotic cell rounding and on cytokinesis, which are Rok-dependent processes. Our experiments were performed using manually-dissociated ovarioles treated for 45 minutes in Schneider Cell Medium supplemented with insulin. Even at our maximum concentration of 1 mM Y-27632, several orders of magnitude above the Ki, we are unable to see any effect on mitotic cell shape or actin accumulation at the mitotic cortex, and we did not observe any evidence of defective cytokinesis. We also did not observe defects in spindle organization or orientation, as would be expected from failed rounding. We therefore do not believe that the inhibitor works in this tissue. One possible explanation is that the follicle cells are secretory, and likely to pass molecules taken up from the media quickly into the germline. Therefore, we do not anticipate that we can perform this experiment to our satisfaction.

      4. The critical message in this manuscript is that the core spindle complex mediated by Pins-Mud controls spindle orientation by "activation", not localization. The findings that Pins and Mud localization is not influenced by Insc, and the results of ectopic Insc expression and genetic Mud depletion (Figure 6), might support their proposal, but these results just suggest that their localization does not matter. I wonder how the authors could conclude and define "activation". What does this activation mean in the context of spindle orientation? Can the authors test activation by enzymatic activity or assess the dynamics of spindle alignment?

      We intend for the critical message of the manuscript to be that “The spindle orienting machinery requires activation, not just localization.” We absolutely do not make the claim that localization is not important, only that it is not sufficient. The reviewer recognizes this point here and in a subsequent comment: “The authors showed that Pins and Mud localization themselves are not sufficient for the control of spindle orientation with genetic analyses.”

      We also do not claim that Pins and/or Mud localization are not impacted by Inscuteable. On the contrary, we plainly see and report that they are; the intensity profiles in Figure 6 are distinct from those in Figure 2, as discussed in the text.

      We appreciate the reviewer’s point about activation. Since we do not understand these proteins to be enzymes, we aren’t sure what enzymatic activity would be tested. The dynamics of spindle alignment in this slowly developing system are prohibitively difficult to measure: the mitotic index is very low (~2%) and only a very small fraction of those cells will be in a focal plane that permits accurate live imaging in the apical-basal axis. Alternative modes of activation include conformational change and/or a connection with other important molecules. The simplest possibility would be that Dlg allows Pins to bind Mud, but so far our data do not support it. We have added the following paragraph to our discussion:

      “The mechanism of activation remains unclear. While the most straightforward possibility is that Dlg promotes interaction between Pins and Mud, our results show that Mud is recruited to the cortex even when Dlg is disrupted (Figure 4D). Alternatively, Discs large may promote a conformational change in the spindle-orientation complex and/or a change in complex composition. Furthermore, the Inscuteable mechanism is not likely to work in the same way. Dlg binds to a conserved phosphosite in the central linker domain of Pins and should therefore allow for Pins to simultaneously interact with Mud (Johnston et al., 2009). Contrastingly, binding between Pins and Inscuteable is mediated by the TPR domains of Pins, meaning that Mud is excluded (Culurgioni et al., 2011; 2018). While a stable Pins-Inscuteable complex has been suggested to promote localization of a separate Pins-Mud-dynein complex, our work raises the possibility that it might also or instead promote activation.”

      5. On pages 7-8: although Pins-S436D rescues spindle orientation and Pins-S436A does not in the pins null clone background, Pins localization is not influenced by Dlg. This raises the question of how exactly Pins and Dlg interact, and how Dlg affects Pins function. Related to this observation, the embryonic Pins:Tom localization in the dlg mutant does not provide strong evidence to support their conclusion, given that the experimental context is different from the previous study (Chanet et al., 2017).

      We agree with the reviewer. Our data (this paper and previous papers) and the work of others indicate that this interaction is important for spindle orientation (Bergstralh et al., 2013a; Saadaoui et al., 2014; Chanet et al., 2017). However, we show here that Dlg doesn’t obviously impact Pins localization (as proposed in our earlier paper), but does impact the ability of the spindle orientation machinery to work (hence activity).

      The reviewer makes a very good point. Our experimental context is different from the previous study concerning Pins and Dlg in embryos: Chanet et al (2017) performed their work in the embryonic head, whereas we look at divisions in the ventral embryonic ectoderm. These are distinct mitotic zones (Foe et al. (1989) Development) and exhibit distinct epithelial morphologies. We show that Pins:Tom localizes at the mitotic cell cortex in Dlg[2]/Dlg[1P20] in cells in the ventral embryonic ectoderm. Our only conclusion from this experiment is that Pins:Tom can localize without the Dlg GUK domain in another cell type (outside the follicular epithelium). In the current preliminary revision we have softened our claim as follows:

      "We also examined the relationship between Pins and Dlg in the Drosophila embryo. A previous study showed that cortical localization of Pins in embryonic head epithelial cells is lost when Dlg mRNA is knocked down (Chanet et al., 2017). We find that Pins:Tom localizes to the cortex in the ventral ectoderm of early embryos from Dlg1P20/Dlg2 mothers, indicating that Pins localization in the ventral embryonic ectoderm epithelium does not require direct interaction with Dlg. We therefore speculate that Dlg plays an additional role in that tissue, upstream of Pins (Figure 4G)."

      Our intention is to elaborate on our findings with additional data from embryos. To this end we have already acquired preliminary control data investigating the spindle angle with respect to the plane of the epithelium, and are in the process of examining spindle angles in dlg mutant embryonic tissue.

      6. On page 11, the authors state "... that activation of pulling in the FE requires Dlg". I was not convinced by anything related to "pulling". There is no evidence to support "pulling" or such dynamics in this paper, just Mud localization, correct?

      We appreciate the reviewer’s concern. The original sentence read that “We interpret [our data] to mean that interaction between Pins and Dlg, which is required for pulling, stabilizes the lateral pulling machinery even if Dlg is not a direct anchor.” This statement is based on work across multiple systems, including the C. elegans embryo (Grill et al Nature 2001), the Drosophila pupal notum (Bosveld et al, Nature 2016), and HeLa cells (Okumura et al eLife 2018), which shows that Mud/dynein-mediated pulling (on astral microtubules) orients/positions spindles. This is described in the introduction.

      To address the reviewer's particular concern, we have replaced "pulling" with "spindle-orientation machinery," so that this sentence now reads ..."activation of the spindle-orientation machinery in the FE requires Dlg."

      7. Ectopic expression of Insc (Figure 6) provided a new idea and hypothesis, but the conclusion is more complicated given that Insc is not expressed in the normal FE. For example, the statement that "Inscuteable and Dlg mediate distinct and competitive mechanisms for activation of the spindle-orienting machinery in follicle cells" is probably right, but it does not show anything meaningful, since Insc does not exist in the normal FE. Is Dlg in a competitive situation during mitosis of the FE? If so, which molecules compete against Dlg? The important issue is to provide a new interpretation of how spindle orientation is controlled in epithelial cells. I strongly recommend adding models to this manuscript for clarity.

      We considered the addition of model cartoons very carefully in preparing the original manuscript, and again after review. While we are certainly not going to “dig in” on this point, our concern is that model figures would obscure rather than clarify the message. As the reviewer points out, we do not understand how activation works, and as discussed in the manuscript we don’t think it’s likely to work the same way in follicle cells (Dlg) as it does in neuroblasts (Insc). Therefore model figure(s) are premature.

      We do not agree with the statement that our finding that "Inscuteable and Dlg mediate distinct and competitive mechanism[s] for activation of the spindle-orienting machinery in follicle cells" "does not show anything meaningful." This is a remarkable finding because it suggests that there is more than one way to activate Pins. Given the critical importance of spindle orientation in the developing nervous system, and the evolutionary history of the Dlg-Pins interaction, we think that this finding supports a model in which the Dlg-Pins interaction evolved in basal organisms, and a second Inscuteable-Pins interaction evolved subsequently to support neural complexity. These ideas are raised in the Discussion.

      The reviewer also writes that “The important issue is to provide a new interpretation of how spindle orientation is controlled epithelial cells.” We find this concern perplexing, since the reviewer clearly recognizes that we have provided a new interpretation: Dlg is not a localization factor but rather a licensing factor for Pins-dependent spindle orientation.

      Minor comments: 8. Some sections were not written well in the manuscript. "It does not" on page 6. "These predictions are not met." I just couldn't understand what they stand for. Their writing has to be improved.

      Again, we are not going to dig in here, but we would prefer to retain the original language, which in our opinion is fairly clear. Our study is hypothesis-driven and based on assumptions made by the current model. We used direct language to help the reviewer understand what happened when we tested those assumptions.

      9. On page 9, Supplementary Figure 4 should be cited in the paragraph ("A potential strategy for..."), not Supplemental Figures 1A and 1B.

      Good catch, thank you! We have corrected this.

      10. On page 10, the authors examine aPKC localization in the Insc-expressing context of the FE. Does aPKC localization correlate with Insc localization (does Insc dictate aPKC?)? aPKC is not involved in spindle orientation according to the authors' own report (Bergstralh et al., 2013), so it is not likely to provide any supportive evidence.

      I’m afraid we don’t entirely understand this comment. The interdependent relationship between aPKC and Inscuteable localization is long-established in the literature and was previously addressed in the follicle epithelium (Bergstralh et al. 2016). We do not make the claim here that aPKC governs spindle orientation. We are emphasizing that the difference between InscA and InscB cells extends to the relocalization of polarity components involved in Insc localization. As described in the manuscript, these data are provided to support our threshold model:

      “In agreement with interdependence between Inscuteable and the Par complex, we find that aPKC is stabilized at the apical cortex in InscA cells but enriched at the lateral cortex in InscB cells (Figure 6E). This finding is consistent with an Inscuteable-expression threshold model; below the threshold, Pins dictates lateral localization of Inscuteable and aPKC. Above the threshold, Inscuteable dictates apical localization of Pins and aPKC.”

11. In the Discussion, page 12: "In addition, we find that while the LGN S408D (Drosophila S436D) variant is reported to act as a phosphomimetic, expression of this variant has no obvious effect on division orientation (Johnston et al., 2012)". Where is the evidence for this? I interpret that this phosphomimetic form can rescue like wild-type Pins, unlike the unphosphorylatable mutant S436A, so it does have an effect on spindle orientation, correct?

      The reviewer makes a good point. We regret the confusion. We mean to point out that the S436D variant is no different from the wild type. We have amended the text to clarify:

“In addition, we find that while the LGN S408D (Drosophila S436D) variant is reported to act as a phosphomimetic, this variant does not cause an obvious mutant phenotype in the follicular epithelium (Johnston et al., 2012). What then is the purpose of this modification? Since the phosphosite is highly conserved through metazoans, one possibility to consider is that the phosphorylation regulates the spindle orientation role of Pins, whereas unphosphorylated Pins plays a different role (Schiller and Bergstralh, 2021).”

      Reviewer #2 (Significance (Required)):

The authors used genetic analyses to show that Pins and Mud localization are not themselves sufficient for the control of spindle orientation. While the authors try to challenge the "canonical model", there is no clear demonstration of "activation" of the spindle complex. I appreciate their genetic evidence and new results, and understand the message that the effects of Pins and Mud go beyond localization, but no alternative mechanism is offered to support their model. At the current stage, their evidence provides a hypothesis more than a conclusion. Based on my expertise in developmental and cell biology, I suggest that the work will interest an audience that studies the spindle machinery, but not a general audience.

      We think that the reviewer fundamentally shares our perspective on the study. Our work tests assumptions made by the canonical model and shows that they aren’t always met (meaning that the question of how spindle orientation works in epithelia at least is still unsolved), and that in the FE at least one component (Dlg) has been misunderstood. We reach a major conclusion, which is that localization of Pins is not enough to predict spindle orientation in the FE.

      It’s absolutely true that the precise molecular role of Dlg has not been solved by our study. This is a major question for the lab, and we are currently undertaking biochemical work to address it. It’s probably more work than we can (or should) do on our own, which is just one reason to share our current results with colleagues.

      One fundamental reason for undertaking this study is that 25 years of spindle orientation studies released into an environment in which “positive” conclusions are the bar for publication success may have burdened the field with claims that are overly-speculative. We appear to have contributed to this problem ourselves in 2013. With that in mind we contend that providing an alternative molecular mechanism at this point is premature and would impair rather than improve the literature.

      Reviewer #3 (Evidence, reproducibility and clarity (Required)):

      Neville et al re-examine the role and regulation of Pins/LGN in Drosophila follicular epithelial cells. They argue that polar or bipolar enrichment of Pins localisation at the plasma membrane is not crucial for spindle orientation, and therefore propose that Pins must be somehow activated to function. These interpretations are not supported by the data. However, the data strongly suggest an alternative interpretation which is of major biological significance.

      As an initial point, we disagree with the summary above. We do not argue that enrichment of Pins is not crucial for spindle orientation. We argue that enrichment of Pins is not sufficient. This is why we titled the paper “The spindle orienting machinery requires activation, not just localization” instead of “The spindle orienting machinery requires activation, not localization.”

Although we disagree with the reviewer, we appreciate their criticism of our manuscript and are glad for the opportunity to clarify our findings. In our responses to the specific comments (below) we explain why our data contradict the reviewer's model and what we will do to improve the manuscript.

      Comments:

      1. In the experiments on Dlg mutants (Fig 4D, S3) visualising Pins:Tom, the wild-type needs to be shown next to the Dlg mutant image, otherwise a comparison cannot be made. For example, Pins:Tom looks strongly enriched at the lateral membranes in the wild-type shown in Fig 2B&C, but much more weakly localised at the lateral membranes in Dlg1P20/2 mutants in Fig 4D. Thus, it looks like the Dlg GUK domain is required for full enrichment of Pins:Tom at lateral membranes, even if some low level of Pins can still bind to the plasma membrane in the absence of the Dlg GUK domain. Quantification would likely show a reduction in Pins:Tom lateral enrichment in the Dlg1P20/2 mutants - consistent with the spindle misorientation phenotype in these mutants.

      The reviewer raises a reasonable concern about Figure 4D. We noted the difficulty of imaging Pins:Tom, which is exceedingly faint, in our original manuscript. For technical reasons, only one copy of the transgene was imaged in the experiment presented in 4G (two copies were used in Figure 2B), and the lack of signal presented an even greater challenge. In the manuscript we went with the clearest image. To address the reviewer’s concern, we have added signal intensity plots to this figure showing that Pins:Tom and Pins-myr are both laterally enriched at mitosis in Dlg[1P20]/Dlg[2] mutants. These data have been added as a new panel (E) in Figure 4. We were also able to replace the pictures in 4D with new ones generated after review.
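For intuition, a signal intensity plot of this kind can be approximated by averaging pixel intensities along one image axis; the following is only a toy sketch with invented data and window positions, not the pipeline used in the manuscript:

```python
# Toy illustration: a signal intensity profile averages pixel
# intensities along one axis of a micrograph. Data and window
# positions below are invented for the sketch.
import numpy as np

img = np.random.rand(100, 100)        # stand-in for a loaded micrograph
profile = img.mean(axis=0)            # mean intensity across the cortex
lateral = profile[:20].mean()         # illustrative "lateral" window
apical = profile[-20:].mean()         # illustrative "apical" window
print(f"lateral/apical enrichment ratio: {lateral / apical:.2f}")
```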

2. In Fig 4E, the phosphomutant PinsS436A-GFP looks more strongly apical and less strongly lateral than the wildtype Pins-GFP, consistent with the spindle misorientation phenotype in S436A rescued pins mutants.

      The reviewer has an eagle eye! We did not detect a difference in localization across the three transgenes, though we were certainly looking for it (that’s why we generated these flies in the first place). Again, the strength of signal was a major challenge in these experiments, and we therefore went with the cleanest image. In response to the reviewer’s concern, we note that the S436A and S436D examples shown have equivalent apical signal, but only the S436A fails to rescue spindle orientation.

      Together, Reviewer Comments 1 and 2 suggest a model in which Dlg is required for lateral enrichment of Pins at mitosis. As described in the manuscript, this is the very model proposed in our own previous study (Bergstralh, Lovegrove, and St Johnston; 2013), and reiterated in a subsequent review article (Bergstralh, Dawney, and St Johnston; 2017). We point these publications out because the senior author of the current manuscript is not especially enthusiastic about showing himself to be wrong (twice!) in the literature. He therefore insisted on seeing multiple lines of evidence before making the counterargument:

• The reviewer’s model (the 2013 model) is first challenged by the work shown in Figure 3. We find that membrane-anchored proteins (even just myristoylated RFP!) demonstrate lateral enrichment at mitosis, regardless of whether or not they interact with the Dlg-GUK domain.
• Even stronger evidence is shown in Figure 4F. Pins-myr-GFP is very plainly enriched at the lateral cortex in Dlg[1P20]/Dlg[2] mutant cells (now demonstrated with signal intensity plots in Figure 4E). However, the spindle doesn’t orient correctly (quantified in 4C). Since Dlg impacts spindle orientation independently of Pins localization, these data support the claim in the final sentence of our abstract: “Local enrichment of Pins is not sufficient to determine spindle orientation; an activation step is also necessary.”

3. In the InscA examples, Pins:Tom looks apical. In the InscB examples, Pins:Tom looks more laterally localised, consistent with the spindle orientations in these experiments.

These figures (6A-D) do not only show and quantify Pins:Tom localization; they also show localization of GFP-Mud. Whereas Pins:Tom is certainly apically enriched in the InscA examples, the interesting finding is that GFP-Mud is not; instead, it shows weak apical localization and strong lateral enrichment. As described in the manuscript, this pattern of Mud localization predicts normal spindle orientation, which is not observed in these cells.

Thus, these data appear to support the existing model that Pins enrichment at the plasma membrane is a key factor directing mitotic spindle orientation in these cells. The authors' claim in the final sentence of the abstract "Local enrichment of Pins is not sufficient to determine spindle orientation; an activation step is also necessary" is not supported by the data.

      We are pleased that the reviewer shared this quote; our claim is that Pins localization is not sufficient, not that it is unnecessary (see above). We absolutely do not dispute that “Pins enrichment at the plasma membrane is a key factor directing mitotic spindle orientation.”

      The open question posed by the data is why GFP-Mud is excluded apically & basally during mitosis, while Pins:Tom is not. The simple alternative model is that Mud only localises to the plasma membrane where Pins is most strongly concentrated, such that Mud strongly amplifies any Pins asymmetry. Thus, even myr-Pins can still rescue a pins n mutant, because myr-Pins is still enriched laterally compared to apically (or basally).

      Thus, I would strongly suggest re-titling the manuscript to: "Mud/NuMA amplifies minor asymmetries in Pins localisation to orient the mitotic spindle".

Well, that is a good-looking title, and we’re therefore sorry to decline the suggestion. However, as described above, Figure 4D shows that Pins enrichment does not always predict spindle orientation. More importantly, Figure 6A (cited by the reviewer in Comment 3) very plainly shows that Mud does not “only locali[ze] to the plasma membrane where Pins is most strongly concentrated.” In this picture, and across multiple InscA cells (Figure 6B), Pins is strongly concentrated at the apical surface, whereas Mud is not.

      Mud/NuMA presumably achieves this amplification by binding to the plasma membrane only where Pins is concentrated above a critical threshold level. This would mean a non-linear model based on cooperativity among Pins monomers that increases the binding avidity to Mud above the threshold concentration of Pins monomers.

      This is essentially a minor revision of the standard model, which we expected would hold true in the FE. As described above, it is not supported by our data.

      Reviewer #3 (Significance (Required)):

      The manuscript is focused on the question of mitotic spindle orientation in epithelial cells, which is a fundamental unsolved problem in biology. The data reported are impressive and important, providing new insights into how the key spindle orientation factors Mud/NuMA and Pins/LGN localise during mitosis in epithelia. I recommend publication after major revisions.

      We are delighted that the reviewer finds our data impressive and important, and our experiments insightful. We understand that the “major revisions” requested are meant to bring the paper in line with their model (our own earlier model). Since the data in our original manuscript contradict that model, the revisions are instead focused on clarifying and strengthening our message.



      Referee #3

      Evidence, reproducibility and clarity

      Summary

This manuscript attempts to link different aspects of HDAC1 function to Plasmodium falciparum biology. HDAC1 is essential, so it is likely to have important functions in parasite development.

The emphasis is upon the potential gene-regulation aspects of HDAC1 function, but it is well known that acetylation of other proteins is regulated by HDAC1 orthologues. While the authors examine the genome occupancy of HDAC1, it is not clear whether the phenotypic effects described can be ascribed to effects upon histone modifications. For the RNA-seq and ChIP-seq analyses they generally use one time point, so they have not controlled for potential differences in cell cycle stage that could explain differences in gene expression or genome occupancy. These weaknesses in the experimental design make it difficult to evaluate the significance of their data with artemisinin and drug-resistant lines.

      The authors suggest that CKII is important for regulating the function of HDAC1. This is biologically plausible, but the link could be more convincing. In addition, the evidence that the potential gene regulation effects are critical for the phenotype observed could be stronger.

      Major comments:

      Figure 1: They perform phosphorylation studies with recombinant CKII and HDAC1, but they do not demonstrate whether the phosphorylated residues correspond to the predicted residues S391, S397 and S440 or if mutation of the predicted residues affects activity.

      The inhibitor data are consistent with the predicted effects, but kinase inhibitors do not always have the same target in vivo or in cells as they do in protein assays. Concentrations of inhibitors used should be provided in the materials and methods.

      They also claim that CK2 and HDAC1 interact in parasites (p5). They do not provide data to support this statement, nor do they provide any data about other proteins that might be interacting with HDAC1. If they were able to purify enough HDAC1 for mass spec identification, they should provide further documentation about interacting proteins and potential post-translational modifications.

      In addition, they should provide more detailed characterization with Western/IFA of when HDAC1 is expressed and whether CKII is always expressed at the same time.

Overall, the importance and significance of CKII in the regulation of HDAC1 activity is not clear; the case would be much strengthened if the experiments performed with recombinant protein could be replicated in immunoprecipitates from parasite lysates, with appropriate controls and a time series.

Figure 2: Using an HDAC1-GFP line they perform ChIP-seq. The ChIP-seq experiments seem to be well performed, with high correlation between replicates, but were performed at a single time point in the life cycle of erythrocytic stages. It is not clear whether the distribution or abundance of HDAC1 changes during the cell cycle (though the authors suggest it does), and given the changes noted in genome occupancy, one cannot determine whether the differences seen could be explained entirely by parasites being at different stages of the cell cycle with different levels of HDAC1. They show enrichment of different pathways, but do not comment on whether these are simply pathways that are enriched in trophozoites.

Figure 3: They characterize the growth rate of parasites treated with sublethal concentrations of an HDAC1 inhibitor and see effects. The images presented in panel A are not of good quality, and parasite morphology is difficult to evaluate. They perform RNA-seq at a single time point, and the choice of time point and drug concentration is not justified. Changes are reported, but with a single time point it is difficult to interpret their significance: are these dying parasites, or parasites slowly progressing through the life cycle? To really understand the effects of these drugs, a better characterization of the dose response and a time series are needed.

Figure 4: Upon overexpression of PfHDAC1-GFPglmS there appear to be more parasites. It is unclear whether this is due to more merozoites per schizont or to better invasion producing more rings. Again, better characterization across time points would help clarify how overexpression of HDAC1 affects proliferation.

Figure 5: They state that there is less HDAC1 in artemisinin-resistant lines, but given that they have not provided any information about cell-cycle expression of HDAC1 or about the growth of these lines in comparison to wild type, it is unclear whether there are differences in biology or whether the cells simply differ in where they are in the cell cycle.

      This is particularly important because of the known differences of artemisinin effects depending upon cell cycle stage.

Figure 6: Genome occupancy data are difficult to interpret given possible differences in cell cycle stage.

      Minor comments:

      The general quality of images and gels should be improved.

      More information should be provided about the validation and specificity of the in house HDAC1 antibodies.

      Concentrations of inhibitors used should be provided.

      Referees cross-commenting

There is consensus amongst all reviewers that the experiments as presented cannot be readily interpreted and lack adequate controls. The amount of further experimental work and analysis required is considerable.

      Significance

      Understanding gene expression and the role of HDAC1 is potentially significant, particularly if these can be linked to important biological processes such as artemisinin resistance. Potentially the audience would be broad. The link between these processes is not well supported by the data as currently presented.

      Expertise: epigenetics, parasite gene expression.



      Referee #1

      Evidence, reproducibility and clarity

      This study characterises a Plasmodium class I Histone deacetylase (PfHDAC1). The manuscript reports a wide range of experiments - some of them complex and involved, but not all of these experiments appear to be well controlled, and some are insufficiently described to know if they have been appropriately designed and interpreted. A link to HDAC1 regulation and artemisinin resistance is advanced, but the evidence here is very indirect and inconclusive.

The study shows that HDAC1 interacts with PfCKII, a homologue of the mammalian casein kinase known to interact with mammalian HDAC1. The authors also demonstrate that, at least in vitro, HDAC1 can serve as a substrate for phosphorylation by PfCKII, and that this phosphorylation impacts HDAC1's deacetylation of histones. Such assays, in which a kinase is provided with a single, abundant substrate in vitro, are not always rigorous tests of kinase specificity, but they do in this case at least indicate that phosphorylation of HDAC1 is associated with its activity.

      Major issues:

1. The authors conduct ChIP-seq experiments on a GFP-tagged HDAC. It is unclear from the methods and results sections what control is used in these experiments. The ENCODE consortium has established minimum standards (Landt et al 2012) for conducting and reporting ChIP-seq experiments, and states that the "recommended control for epitope-tagged measurements is an immunoprecipitation using the same antibody against the epitope tag in otherwise identical cells that do not express the tagged factor." These experiments appear to lack that control, and the enrichments reported should be approached with caution in its absence.
2. The genes with apparently altered ChIP-seq signal were subjected to gene ontology enrichment analysis, and the authors report potential enrichments, which appear to span a range of unconnected biological pathways throughout the parasite and throughout the life cycle, despite the ChIP-seq being conducted at only a single stage. No mention is made of correction for multiple hypothesis testing, known to present a considerable problem for such analyses, and no correction is described for background GO distributions in the P. falciparum genome, so it is unknown if or how that was performed (see the sketch after this list). The reported enriched categories must also be treated with considerable caution given the absence of description of these crucial steps. The authors conclude from this section that HDAC1 is associated with stress responses, but really, by their criteria, HDAC1 is associated with a third of the whole genome, so it is rather selective to regard it as a stress regulator.
3. The authors perform a well-designed series of transfection experiments with modulation of HDAC1 to show that overexpression of HDAC1 leads to an increased growth rate, and that this increase is reduced when the overexpression of HDAC1 is inducibly repressed. However, I found the presentation of results from these experiments difficult to understand, and there is considerable transformation of the data prior to plotting; they would be easier to understand if no background subtraction to normalise for GFP were conducted, and if all strains were plotted on the same axes. A potential confounding factor in this experiment is that many lines overexpressing GFP grow more slowly, and this growth defect can be localisation-dependent, so that over-expression of GFP alone may cause a different growth penalty than GFP on a nuclear protein. I am uncertain that the conclusion of 50% faster growth is a safe one based on these graphs; at some time intervals the over-expressor appears to grow just as slowly or even more slowly (as a percentage of the previous timepoint) than the control, and these results appear to have been based on technical replicates of a single biological experiment. The authors contend that the growth rate is due to changed expression of invasion genes (among many other substrate gene categories) giving rise to enhanced invasion. Such a phenomenon is readily testable, and the authors should dissect this if they wish to substantiate the frankly surprising claim that overexpression of HDAC leads to an increased growth rate.
4. The authors also report an apparent downregulation of HDAC abundance in artemisinin-resistant parasites. This conflicts with previous global proteomic analyses of artemisinin-resistant parasites, which found no such change in HDAC1 regulation or abundance (e.g. Siddiqui et al 2017, Yang et al 2019). Stage matching is a particular challenge in such experiments given the differences in cycle progression between ARTR and ARTS parasites, and it is not clear that this has been adequately controlled for to have confidence in these results, particularly given their contradiction of previous analyses. The abundance of PfHDAC1 changes considerably throughout the asexual intraerythrocytic cycle (out of sync with the control used here, actin), so a potential stage mismatch might contribute to the apparent differences here. Again, explicit mention of replicates is lacking. The authors also describe genes regulated by HDAC1 as including genes related to processes involved in artemisinin resistance, but this is hard to sustain; indeed, with so many genes apparently substrates of HDAC1, it would be highly surprising if there were no overlap with some genes in pathways related to artemisinin resistance. An accompanying experiment demonstrating an increase in survival (of both ART-resistant and ART-sensitive lines) in an artemisinin ring-stage survival assay after using a possible inhibitor of HDAC is intriguing, but these results are hard to reconcile with a dynamic transcriptional response. (Why was this done with an uncharacterised inhibitor, rather than the more specific HDAC1 overexpressor/knockdown system?) An accompanying RNA-seq analysis is described, but the analysis is piecemeal and selective, with the authors pointing out candidate genes representing categories plausibly linked to artemisinin resistance. I found this section unconvincing and indirect: lots of genes are changed in these experiments, and so they inevitably include some that are feasibly linked to artemisinin resistance, but the one gene convincingly known to modulate resistance, K13, is not mentioned, and presumably was not specifically changed in this analysis.
5. A previous study from the laboratory of Christian Doerig (Eukaryot Cell. 2010 Jun; 9(6): 952-959) reported that HDAC1 activity (it is unclear which of the HDACs) is associated with Pfcrk-3. This activity may not correspond to the HDAC1 characterised here, but it deserves some discussion.
6. The Western blots are letterboxed and in some cases appear to crop out bands at the limit of the image (e.g. Figs. 5, 6). Please provide fuller pictures of the blots and indicate the relevant bands if there are several background bands.
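For concreteness, here is a minimal sketch of the kind of corrected enrichment test point 2 asks about: a hypergeometric test against the genome-wide background followed by Benjamini-Hochberg correction. All counts below are invented for illustration.

```python
# Sketch of GO enrichment with background correction and
# multiple-testing control. Counts are invented for illustration.
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

genome_size = 5400   # approximate P. falciparum gene count (background)
bound = 1800         # genes bound by HDAC1 in this sketch

# per GO term: (genes in category genome-wide, category genes in bound set)
go_counts = {"stress response": (120, 65), "invasion": (80, 30)}

# P(X >= k) under the hypergeometric null, one test per GO term
pvals = [hypergeom.sf(k - 1, genome_size, n, bound)
         for n, k in go_counts.values()]
rejected, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for term, q in zip(go_counts, qvals):
    print(term, f"q = {q:.3g}")
```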

      Minor issues

• The text uses breaking spaces for the gap between genus abbreviation and species throughout. Replace with non-breaking spaces.
• Abstract: "is correlated with parasitemia progression". Unclear meaning; reword.
• Introduction: "closes in on 400,000 deaths annually". Unclear meaning/vernacular usage; reword.
• Very long paragraph on pages 3-4. Reorder the logical flow and break it into smaller paragraphs to make it more easily read.
• "Given the evidence of the role of HDAC inhibition in the emergence of chemotherapeutic resistance in mammalian system": this needs a reference; there is no mention of this phenomenon up to this point of the manuscript.

      Referees cross-commenting

I agree with the other reviewers' comments. Although the manuscript contains a very large number of complex experiments, necessary controls, sufficient replicates, and appropriate analyses are missing from many of them.

      I appreciate that the experiments referred to would require a very substantial time and resource commitment to complete, but in their current form, many of these experiments are not safely interpretable.

      Significance

This manuscript makes major claims for HDAC1, in particular for its role in artemisinin resistance. Such a link would be significant, but I regard few of these claims as having been robustly substantiated in this manuscript. The ChIP-seq evidence is of interest as a useful dataset, particularly if accompanied by relevant controls.

    1. And it’s like, no, no, you know? This is an adaptation thing. You know, computers are almost as old as television now, and we’re still treating them like, “ooh, mysterious technology thing.” And it’s like, no, no, no! Okay, we’re manipulating information. And everybody knows what information is. When you bleach out any technical stuff about computers, everybody understands the social dynamics of telling this person this and not telling that person that, and the kinds of decorum and how you comport yourself in public and so on and so forth. Everybody kind of understands how information works innately, but then you like you try it in the computer and they just go blank and you know, like 50 IQ points go out the window and they’re like, “doh, I don’t get it?” And it’s the same thing, it’s just mediated by a machine.
7. whitepaper.welcometonor.com
    1. Modern games insist they are “free,” then flood us with microtransactions: paywalls, purchasable “bonus” content, loot boxes, subscriptions, extra lives.

They still are free. Have we become so entitled that we are upset that a game the developers likely paid money to get us to install also has things to sell to people who want them? It's actually not the norm for an F2P game to force you to buy anything. They apply pressure, sure, but you are also free to just delete the game, knowing that you paid nothing but the 30 seconds it took to download it.

so for this vaccine the covid vaccine all you need is a little bit of that mrna spike protein to put inside your body just a little bit and that's enough for your body to recognize it and make enough antibodies okay so you don't have

I actually wish she had explained this somewhat differently, because to me it sounded as though mRNA vaccines are essentially the same thing as traditional vaccines, where a tiny portion of live virus is used. I feel like this might have been a missed opportunity to explain what makes mRNA vaccines truly unique, and the fact that mRNA technology has been in development for years, which is what made it possible for this vaccine to be turned around so quickly. There is potential damage done if you oversimplify the science. In my opinion, if you strike the right balance between simplifying or "translating" the science and speaking only in technical jargon, you can leave someone feeling empowered that they have new scientific knowledge about a certain innovation/medicine/invention/etc.

      – Alyssa Tomlinson

particularly mrna or messenger rna and i want to tell you how that works okay so this is what coronavirus looks like a bugger isn't it it's crazy how this little thing can cause so much havoc in our bodies so anyway the coronavirus has these proteins on the outside they're called spike proteins because they look like little spikes and so these proteins are what our body recognizes when it invades our body okay and this guy right here is mr antibody he's one of the good guys part of our white blood cells okay when these guys are around they find all the bad guys in our body and they make sure that they can't hurt us anymore all right but you can't get these guys unless you get sick or invaded by a bad guy that's when these things come out so if there's no bad guys then there's no good guys okay because good guys come out after the bad guys and take care of all the bad guys does that make sense so what a vaccine does is it mimics our body's natural defense mechanisms when a foreign invader or a bad guy gets into your body your body recognizes it as a bad guy who doesn't belong there and creates antibodies antibodies then fight off the infection okay and with a vaccine instead of having to get lots of bad guys into your body and get the full-blown symptoms of being sick you take a little bit of the bad guy and put it into your body and your body can still recognize it as a bad guy and so now you have a bunch of antibodies being made antibodies fight off the bad guys so for the covid vaccine the main ingredient or the active ingredient in the vaccine is mrna made from pieces of the spike protein of the coronavirus so for this vaccine the covid vaccine all you need is a little bit of that mrna spike protein to put inside your body just a little bit and that's enough for your body to recognize it and make enough antibodies okay so you don't have to get the full-blown disease or symptoms of that disease now you have a bunch of antibodies so the next time you come in contact with the coronavirus you get from somebody else or in a space where someone has it your body should have enough antibodies to fight off the infection without you getting the full-blown disease i hope that's helpful

      Great touch with avoiding the usual jargon associated with this topic!

    1. the diverse and wide-ranging projects that name and challenge sexism and other forces of oppression, as well as those which seek to create more just, equitable, and livable futures

I love this, but unfortunately that's not how this world works. We've taken so many steps forward and steps back in women's rights and human rights in general. Roe v Wade is an example; the only thing I'll say is that it was a huge step back for women's rights. All genders should have equal rights and opportunities. It's about respecting diverse women's experiences, identities, strengths, and knowledge, and striving to empower all women to realize their FULL rights!

    2. “Well, nobody’s ever complained,” he told Darden. “The women seem to be happy doing that, so that’s just what they do.”

Gross. Systemic sexism at its best. They didn't complain because they didn't want to be shamed, receive indecent proposals, or get fired.

    1. In short, a zettelkasten is not a life operating system. LYT is. Though a zettelkasten can include notes on literally any subject a person has an interest in, these notes are intended to yield something tangible. LYT is not bound by output, and thus includes more. People, projects, calendars, ephemera, the stuff we manage in our day-to-day lives, all of it can have a place in LYT. So, while both methodologies deal with the same "stuff"—knowledge—and both engage notes as their primary units for knowledge exploration, each has a different expectation as to what should be done with it all.

Author posits that a ZK is for writing, while Milo's type of stuff is not aimed at output but at 'progressing' in multiple ways. I'd assumed that would be clear to all. Any (p)km is geared towards action, or at least towards an increased ability to act. Rarely is the aimed-for ability just academic written output. My pkm has always contained a 'get stuff done' component as well as a 'conceptual stuff' component. My ZK, if you will, is a trio (Notions, Notes, Ideas) of folders of networked elements inside a more hierarchically oriented larger set of folders (GTD-like) to keep moving forward on everything I'm involved in. I've at times wondered what Luhmann did to manage his academic work, in terms of notes. Is there another kartei somewhere? It's one thing to write a lot, another to get it published / organise academic life.

    1. The Rules of DaylightingAs with any profound power, there are guidelines on how it is to be wielded by us mere mortals, lest we destroy ourselves with it. So, these are the commandments I’ve come to live by to keep my daylighting in check:Never talk about daylighting. It can be tempting to flaunt your balancing act and newfound wealth, but it’s just generally not a good idea — especially to your coworkers. Even if you need an excuse to get out of Company A’s meeting for Company B’s, the fewer people who know, the better. People can be messy. I’ll leave it at that.Target roles with the fewest meetings. To be successful at daylighting, you need to keep two jobs. To keep two jobs, you need to not get fired. Meeting overlap can often be inevitable in the world of daylight, but minimizing your total meeting count can lower that statistic to be as infrequent as possible.

      .c1

    2. daylighting only really works if you work remote. Beyond that, it only works if you’re also really good at what you do.“That’s ridiculous. How can you give 100% to 2 jobs at once in the same 9–5 window?”It’s simpler than you might think, really. You just…don’t.

      .c3

    1. In this example the annotation is not marked “Only me” so it’s visible to everyone. But you can also mark this type of annotation “Only me.”

Is there any way that you could make it open to just a select group of people, rather than to everyone or only to yourself?

    1. It does this both personally, by providing tacticsfor managing how you engage with a digitally mediated world, andcollectively by pooling knowledge, cooperatively acting together,and mobilizing political power to shift public debate and influencethe regulation of questionable practices.

Personally, I think most people can agree that many social media companies will use you, profiting off your personal data, your experiences, and your opinions. So in other words, it's less about objectively trying to create a just digitally mediated world, and more about the money.

    1. We cannot define religion as a coffee pot and expect tomake progress investigating the features of religion.

This is an interesting example of how we cannot define things as anything we want to call them, especially if we expect productive outcomes. Just because you call a thing by a certain definition does not mean it will actually be that thing. It's a dead end to say a religion is defined as a coffee pot; there is nothing more to explore there.

    1. While Heyde outlines using keywords/subject headings and dates on the bottom of cards with multiple copies using carbon paper, we're left with the question of where Luhmann pulled his particular non-topical ordering as well as his numbering scheme.

      While it's highly likely that Luhmann would have been familiar with the German practice of Aktenzeichen ("file numbers") and may have gotten some interesting ideas about organization from the closing sections of the "Die Kartei" section 1.2 of the book, which discusses library organization and the Dewey Decimal system, we're still left with the bigger question of organization.

      It's obvious that Luhmann didn't follow the heavy use of subject headings nor the advice about multiple copies of cards in various portions of an alphabetical index.

      While the Dewey Decimal System set up described is indicative of some of the numbering practices, it doesn't get us the entirety of his numbering system and practice.

      One need only take a look at the Inhalt (table of contents) of Heyde's book! The outline portion of the contents displays a very traditional branching tree structure of ideas. Further, the outline is very specifically and similarly numbered to that of Luhmann's zettelkasten. This structure and numbering system is highly suggestive of branching ideas where each branch builds on the ideas immediately above it or on the ideas at the next section above that level.

      Just as one can add an infinite number of books into the Dewey Decimal system in a way that similar ideas are relatively close together to provide serendipity for both search and idea development, one can continue adding ideas to this branching structure so they're near their colleagues.

Thus it's highly possible that the confluence of the descriptions within the book and the outline of its table of contents suggested a better method of note keeping to Luhmann. Doing this solves the issue of needing to create multiple copies of note cards as well as of trying to find cards in various places throughout the overall collection, not to mention slimming down the collection immensely. Searching for and finding a place to put new cards ensures not only that one places one's ideas into a growing logical structure, but also that one doesn't duplicate information that may already exist within one's over-arching outline. From an indexing perspective, it also solves the problem of cross-referencing information along the axes of source author, source title, and a large variety of potential subject headings.

And of course if we add even a soupçon of domain expertise in systems theory to the mix...


      While thinking about Aktenzeichen, keep in mind that it was used in German public administration since at least 1934, only a few years following Heyde's first edition, but would have been more heavily used by the late 1940's when Luhmann would have begun his law studies.

      https://hypothes.is/a/CqGhGvchEey6heekrEJ9WA


When thinking about taking notes for creating output, one can follow one thought with another logically within one's card index, not only to write an actual paper: the collection and development happen as if one were filling in an invisible outline which builds itself over time.

Linking ideas to other ideas outside a single chain of thought also provides the ability to create multiple of these invisible but organically growing outlines.

    1. So, we decouple content from domains, but now we have a trust problem. The same-origin security model anchors trust to domains. We’ll need another way.What if we cryptographically signed everything that got published to the network? Now we don’t have to care about origin. Instead we can verify the signature of content.UCAN (User Controlled Authorization Network) offers a promising primitive for authorizing users without a backend. Even better, UCANs are self-sovereign. You own and control your keys, not some app.

I get the keys-and-trust part, but not how that is going to help the noosphere. Trust is now anchored to domains, yes, not just technically but socially as well: anything that has FB as its domain, e.g., goes into the 'untrustworthy' bin. UCAN moves trust from a domain to the person signing. That's fine if I know someone well enough to have a pet name for them on my phone/network, but not if it's some random person on the internet. I assume someone's access to and participation in the noosphere isn't meant to be limited to the people in the pet-name list in my phone book, and that I can see stuff by many other people outside my network (if not, I can tell you with certainty where the next centralisation will be). Then 'this was properly signed by -random person-' is meaningless unless I can trace back what else that random person has shared, what people thought about it, the person's general reputation, etc. Meaning we're back at the social level, where this tech doesn't help us.
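To ground this point: a signature proves only that the content is intact and was produced by a given key, nothing about reputation. A minimal sketch of the primitive using generic Ed25519 keys via Python's cryptography package (an illustration of content signing in general, not the UCAN spec):

```python
# Illustration of content signing/verification with generic Ed25519
# keys: the primitive that UCAN-style systems build capability tokens on.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

key = Ed25519PrivateKey.generate()
content = b"a note published to the network"
signature = key.sign(content)

public_key = key.public_key()
try:
    public_key.verify(signature, content)  # raises if content was altered
    print("signature valid - proves integrity, not reputation")
except InvalidSignature:
    print("content or signature was tampered with")
```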

    1. It is seductive, really, to want acceptance at all costs.

      I related to this a lot. In society everyone wants to feel accepted whether by just one person or if it's by everyone you talk to. The feeling of acceptance is something that I didn't realize was so addictive until this statement.

    1. Author Response

      Reviewer #1 (Public Review):

      This manuscript by de la Vega and colleagues describes Neuroscout, a powerful and easy-to-use online software platform for analyzing data from naturalistic fMRI studies using forward models of stimulus features. Overall, the paper is interesting, clearly written, and describes a tool that will no doubt be of great use to the neuroimaging community. I have just a few suggestions that, if addressed, I believe would strengthen the paper.

      Major comments

      1) How does Neuroscout handle collinearity among predictors for a given stimulus? Does it check for this and/or throw any warnings? In media stimuli that have been adopted for neuroimaging experiments, low-level audiovisual features are not infrequently correlated with mid-level features such as the presence of faces on screen (see Grall & Finn, 2022 for an example involving the Human Connectome Project video clips). How to disentangle correlated features is a frequent concern among researchers working with naturalistic data.

We agree with the reviewer that collinearity between predictors is one of the biggest challenges for naturalistic data analysis. However, absent consensus on how best to model these data, we find it beyond the scope of the present report to make strong recommendations. Instead, our goal was to design an agnostic platform that enables users to thoughtfully design statistical models for their particular goals. Papers such as Grall & Finn (2022) will be critical in advancing the debate on how best to analyze and interpret such data.

We explicitly address this challenge in a new paragraph in the Discussion under “Challenges and future directions”:

      “A major challenge in the analysis of naturalistic stimuli is the high degree of collinearity between features, as the interpretation of individual features is dependent on co-occurring features. In many cases, controlling for confounding variables is critical for the interpretation of the primary feature— as is evident in our investigation of the relationship between FFA and face perception. However, it can also be argued that in dynamic narrative driven media (i.e. films and movies), the so-called confounds themselves encode information of interest that cannot or should not be cleanly regressed out (Grall & Finn, 2022).[…] Absent a consensus on how to model naturalistic data, we designed Neuroscout to be agnostic to the goals of the user and empower them to construct sensibly designed models through comprehensive model reports. An ongoing goal of the platform—especially as the number of features continues to increase—will be to expand the visualizations and quality control reports to enable users to better understand the predictors and their relationship. For instance, we are developing an interactive visualization of the covariance between all features in Neuroscout that may help users discover relationships between a predictor of interest and potential confounds.” (pg. 11)

Note that we shortened the second paragraph of the Discussion by two sentences, as it had touched on this subject, which is better addressed separately.

In addition, we made sure to highlight the covariance structure visualization in the Results section:

      “At this point, users can inspect the model through quality-control reports and interactive visualizations of the design matrix and predictor covariance matrix, iteratively refining models if necessary.” (pg. 3)
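To make the kind of inspection such reports support concrete, here is a minimal sketch computing pairwise correlations and variance inflation factors over design-matrix columns; the predictor names and data are invented, and this is not Neuroscout's own report code:

```python
# Sketch: collinearity diagnostics over design-matrix columns.
# Predictor names and data are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
speech = rng.normal(size=500)
faces = 0.6 * speech + rng.normal(scale=0.8, size=500)  # confounded pair
X = pd.DataFrame({"faces": faces, "speech": speech,
                  "brightness": rng.normal(size=500)})

print(X.corr().round(2))  # pairwise predictor correlations

# Variance inflation factor: 1 / (1 - R^2) of each column regressed on
# the others; values well above ~5 flag problematic collinearity.
for col in X.columns:
    others = np.column_stack([X.drop(columns=col).to_numpy(),
                              np.ones(len(X))])
    beta, *_ = np.linalg.lstsq(others, X[col].to_numpy(), rcond=None)
    resid = X[col].to_numpy() - others @ beta
    r2 = 1 - resid.var() / X[col].to_numpy().var()
    print(col, "VIF =", round(1 / (1 - r2), 2))
```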

2) On a related note, do the authors and/or software have opinions about whether it is more appropriate to run several regressions, each with a single predictor of interest, or to combine all predictors of interest into a single regression? (Or potentially a third, more sophisticated solution involving variance partitioning or another technique to [attempt to] isolate variance attributable to each unique predictor?) Does the answer to this depend on the degree of collinearity among the predictors? Some discussion of this would be helpful, as it is a frequent issue encountered when analyzing naturalistic data.

This is a very sensitive methodological point, but one for which it is hard to find a univocal answer in the literature. While on the one hand it can be deceptive to model a single feature in isolation (as illustrated by our face perception analyses), more complex models pose different challenges in terms of robust parameter estimation and variance attribution. Resolving these challenges goes beyond the scope of our work; our goal is ultimately to provide a flexible tool that enables these types of investigations, while users take responsibility for, and provide motivations for, the methodological choices they make using the platform. We touch on Neuroscout’s agnostic philosophy on this issue under “Challenges and future directions” (pg. 11; quoted above).

      However, we also agree that in part the solution to this problem will be methodological. This is particularly true for modeling deep learning based embeddings, which can have hundreds of features in a single model. We are currently working on expanding beyond traditional GLM models in Neuroscout, opening the door to more sophisticated variance partitioning techniques, and more robust parameter estimation in complex models. We highlight current and future efforts to expand Neuroscout’s statistical models in the following paragraph:

      “However, as the number of features continues to grow, a critical future direction for Neuroscout will be to implement statistical models which are optimized to estimate a large number of covarying targets. Of note are regularized encoding models, such as the banded-ridge regression as implemented by the Himalaya package. These models have the additional advantage of implementing feature-space selection and variance partitioning methods, which can deal with the difficult problem of model selection in highly complex feature spaces such as naturalistic stimuli. Such models are particularly useful for modeling high-dimensional embeddings, such as those produced by deep learning models. Many such extractors are already implemented in pliers and we have begun to extract and analyze these data in a prototype workflow that will soon be made widely available. “ (pg. 11)
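As a conceptual illustration of banded ridge regression (a numpy sketch of the idea, not the Himalaya API): each feature space, or "band", receives its own penalty in the regularized normal equations, so a large noisy embedding can be shrunk harder than a small set of interpretable features.

```python
# Conceptual sketch of banded ridge: per-band penalties instead of one
# shared lambda. Illustration only; not the Himalaya implementation.
import numpy as np

rng = np.random.default_rng(1)
n = 300
X_low = rng.normal(size=(n, 10))     # e.g., low-level audio features
X_embed = rng.normal(size=(n, 200))  # e.g., a deep-learning embedding
X = np.hstack([X_low, X_embed])
y = X_low @ rng.normal(size=10) + rng.normal(size=n)

lambdas = np.concatenate([np.full(10, 1.0),      # weak penalty, band 1
                          np.full(200, 100.0)])  # strong penalty, band 2
beta = np.linalg.solve(X.T @ X + np.diag(lambdas), X.T @ y)
```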

      3) What the authors refer to as "high-level features" - i.e., visual categories such as buildings,faces, and tools - I would argue are better described as "mid-level features", reserving the term "high-level" for features that are present only in continuous, engaging, narrative or narrative-like stimuli. Examples: emotional tone or valence, suspense, schema for real-world situations, other operationalizations of a narrative arc, etc. After all, as the authors point out, one doesn't need naturalistic paradigms to study brain responses to visual categories or single-word properties. Much of the work that has been done so far with forward models of naturalistic stimuli has been largely confirmatory (e.g., places/scenes still activate PPA even during a rich film as opposed to a serial visual presentation paradigm). This is a good first step, but the promise of naturalistic paradigms is ultimately to go beyond these isolated features toward more holistic models of cognitive and affective processes in context. One challenge is that extracting true high-level features is not easily automated, although the ability to crowdsource human ratings using online data collection has made it feasible to create manual annotations. However, there are still technical challenges associated with collecting continuous-response measurement (CRM) data during a relatively long stimulus from a large number of individuals online. Does Neuroscout have any plans to develop support for collecting CRM data, perhaps through integration with Amazon MTurk and/or Prolific? Just a thought and I am sure there are a number of features under consideration for future development, but it would be fabulous if users could quickly and easily collect CRM data for high-level features on a stimulus that has been uploaded to Neuroscout (and share these data with other end users).

      The reviewer makes a very good point regarding the fact that many so-called “high-level” features are best called “mid-level”. As such, we have changed our use of “high-level” to “mid-level perceptual features” throughout the manuscript.

      “Currently available features include hundreds of predictors coding for both low-level (e.g., brightness, loudness) and mid-level (e.g., object recognition indicators) properties of audiovisual stimuli…” (pg. 3)

That said, we do believe that as machine learning (and in particular deep learning) models evolve, it will become more feasible to extract higher-level features automatically. This has already been shown with transformer language models, which are able to extract higher-level semantic information from natural text. To this end, we have designed our underlying feature extraction platform, pliers, to be easily extensible, ensuring the continued growth of the platform as algorithms evolve. We highlight this in the Results section ‘Automated annotation of stimuli’:

      “The set of available predictors can be easily expanded through community-driven implementation of new pliers extractors, as well as public repositories of deep learning models, such as HuggingFace and TensorFlowHub. We expect that as machine learning models continue to evolve, it will be possible to automatically extract higher-level features from naturalistic stimuli.” (pg. 3)

We also made sure to highlight in the Discussion the extensibility of pliers to increasingly power deep learning models, by revising this sentence:

      “As a result, we have designed Neuroscout and its underlying feature extraction framework pliers to facilitate community-led expansion to novel extractors— made possible by the rapid increase in public repositories of pre-trained deep learning models such as HuggingFace and TensorFlow Hub” (pg. 10)
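As a schematic of what a community-contributed extractor might look like, the sketch below follows pliers' documented subclassing pattern (Extractor, _extract, ExtractorResult); exact hook names and signatures should be checked against the current pliers documentation, and the feature itself is invented:

```python
# Schematic custom extractor following pliers' documented convention.
# The feature is a toy example; verify hook names against pliers docs.
import numpy as np
from pliers.extractors.base import Extractor, ExtractorResult
from pliers.stimuli import ImageStim

class MeanRedExtractor(Extractor):
    """Toy feature: mean red-channel intensity of an image frame."""
    _input_type = ImageStim

    def _extract(self, stim):
        value = np.mean(stim.data[..., 0])  # stim.data is the pixel array
        return ExtractorResult([[value]], stim, self,
                               features=['mean_red'])

# Hypothetical usage:
# df = MeanRedExtractor().transform(ImageStim('frame.png')).to_df()
```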

      As to the point of a potential extension to Neuroscout for easily collecting crowd source stimuli annotations, we are in full agreement that this would be very useful. In fact, this feature was part of the original plan for Neuroscout, but fell out of scope as other features took priority. Although we are unsure if this extension is a short term priority for the Neuroscout team (as it likely would take substantial effort to develop a general purpose extension), the ability to submit user-generated features to the Neuroscout API should make it possible to design a modular extension to Neuroscout to collect such features.

      We mention this possibility briefly in the future directions section:

      “Other important expansions include facilitating analysis execution by directly integrating with cloud-based neuroscience analysis platforms (such as Brainlife.io) and facilitating the collection of higher-level stimulus features by integrating with crowdsourcing platforms such as MechanicalTurk or Prolific.” (pg. 11)

      4) Can the authors talk a bit more about the choice to demean and rescale certain predictors, namely the word-level features for speech analysis? This makes sense as a default step, but I wonder if there are situations in which the authors would not recommend normalizing features prior to computing the GLM (e.g., if sign is meaningful, if the distribution of values is highly skewed if the units reflect absolute real-world measurements, etc). Does Neuroscout do any normalization automatically under the hood for features computed using the software itself and/or features that have been calculated offline and uploaded by the user?

      In keeping with Neuroscout’s philosophy to be a general purpose platform, we have not performed any standardization of features. Instead, users can choose to modify raw predictor values by applying transformations on a model-by-model basis. Currently available transformations through the web interface include: scale, orthogonalize and threshold. Note that there is a wider range of transformations available in the BIDS Stats Model, but we are hesitant to advertise these yet, as they are more difficult to use.

We revised our description of transformations in the Results section to clarify that these transformations are model-specific:

      “Raw predictor values can be modified by applying model-specific transformations such as scaling, thresholding, orthogonalization, and hemodynamic convolution.” (pg. 3)

We also clarify in the Methods section that variables are ingested without any in-place modifications. The only exception is that we down-sample highly dense variables (such as those from auditory files, which can produce thousands of values per second) to save disk space:

“Feature values are ingested directly with no in-place modifications, with the exception of down-sampling of temporally dense variables to 3 Hz to reduce storage on the server.” (pg. 13)

      With respect to the word frequency analysis, the primary reason we scaled variables was to facilitate imputing missing values for words not found in the look-up dictionary. By scaling the variable, we were able to replace missing values with zero, effectively assigning them the average word frequency value. We clarified this strategy in the Methods section:

      “In all analyses, this variable was demeaned and rescaled prior to HRF convolution. For a small percentage of words not found in the dictionary, a value of zero was applied after rescaling, effectively imputing the value as the mean word frequency.” (pg. 17)
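A minimal sketch of this demean/rescale-then-impute strategy (with invented data): after z-scoring, a value of zero sits exactly at the mean, so missing words are effectively imputed at the average frequency.

```python
# Sketch: z-score the predictor, then impute missing entries as 0,
# which equals the mean by construction. Data are invented.
import numpy as np

freq = np.array([5.1, 3.2, np.nan, 4.8, np.nan, 6.0])  # log word frequency
mask = np.isnan(freq)
z = (freq - np.nanmean(freq)) / np.nanstd(freq)  # demean and rescale
z[mask] = 0.0  # missing words get exactly the mean frequency
```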

On a more general note, when interpreting a single variable with a dummy-coded contrast (i.e., 1 for the predictor of interest and 0 for all other variables), it is not necessary to normalize features prior to modeling, as fMRI t-stat maps are scale-invariant (although the parameter estimates will be affected).
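A quick numerical check of this scale-invariance point (a generic OLS sketch, not Neuroscout code): rescaling a regressor changes its parameter estimate but leaves its t-statistic unchanged.

```python
# Rescaling a regressor by a factor divides its beta by that factor
# but leaves the t-statistic identical.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)

def beta_and_t(x, y):
    X = np.column_stack([x, np.ones_like(x)])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma2 = res[0] / (len(y) - 2)                    # residual variance
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0], beta[0] / se

print(beta_and_t(x, y))       # beta ~ 2.0, some t value
print(beta_and_t(10 * x, y))  # beta ~ 0.2, identical t value
```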

      We added a note with our recommendations in the Neuroscout Documentation: https://neuroscout.github.io/neuroscout-docs//web/builder/transformations.html#scale

      Reviewer #2 (Public Review):

      The authors present a new platform for constructing and sharing fMRI analyses, specifically geared toward analyzing publicly-available naturalistic datasets using automatically-extracted features. Using a web interface, users can design their analysis and produce an executable package, which they can then execute on their local hardware. After execution, the results are automatically uploaded to NeuroVault. The paper also describes several examples of analyses that can be run using this system, showing how some classical feature-sensitive ROIs can be derived from a meta-analysis of naturalistic datasets.

The Neuroscout system is impressive in a number of ways. It provides easy access to a number of publicly-available datasets (though I would like to see the current set of 13 datasets increase in the future), has a wide variety of machine-learning features precomputed on the video and audio tracks of these stimuli, and builds on top of established software for creating and sandboxing analysis workflows. Performing meta-analyses across multiple datasets is challenging both practically and statistically, but this kind of multi-dataset analysis is easy to specify using Neuroscout. It also allows researchers to easily share a reproducible version of their pipeline simply by pointing to the publicly-available analysis package hosted on Neuroscout. The platform also provides a way for researchers to upload their own custom models/predictors to extend those available by default.

The case studies described in the paper are also quite interesting, showing that traditional functional ROIs such as PPA and VWFA can be defined without using controlled stimuli. They also show that running a contrast for faces does not produce FFA until speech (and optionally adaptation) is properly controlled for, and that VWFA shows relationships to lexical processing even for speech stimuli.

      I have some questions about the intended workflow for this tool: is Neuroscout meant to be used for analysis development in addition to sharing a final pipeline? The fact that the whole analysis is packaged into a single command is excellent for reproducibility but seems challenging to use when iterating on a project. For example, if we wanted to add another contrast to a model, it appears that this would require cloning the analysis and re-starting the process from scratch.

An important principle of Neuroscout from the outset of the project was to minimize undocumented researcher degrees of freedom and maximize transparency, in order to reduce the file-drawer effect, which can contribute to biased results in the published literature. As such, we require analyses to be registered and locked as the modal usage of our application. In the case of adding a contrast, it is true that this would require a user to clone the analysis. Although all of the information from the previous model would be encoded in the new model, this would require re-estimating the design matrix, which could be time-consuming. However, in our experience, users almost always add new variables to the design matrix when a study is cloned, which would in any case require re-estimating the design matrix for all runs and subjects. We believe this trade-off is worthwhile to ensure maximal reproducibility, but we also point out that since Neuroscout’s data is freely available via our API, power users can directly access the data if they need to use it in a less constrained manner.

      We believe that these important distinctions are best addressed in the newly developed Neuroscout documentation which we now reference throughout the text (https://neuroscout.org/docs/web/browse/clone.html).

      I'm also unsure about how versioning of the input datasets and the predictors is planned to be handled by the platform; if datasets have been processed with multiple versions of fmriprep, will all of those options be available to choose from? If the software used to compute features is updated, will there be multiple versions of the features to choose from?

The reviewer makes an astute observation regarding the versions of input data (predictors and datasets). Currently we have only pre-processed the imaging data once per dataset, and as such this has not been an issue. However, in the long run we agree it would be important to give users the ability to choose which pre-processed version of the raw dataset they want to use, as there could certainly be differing but equally valid versions. We have opened an issue in Neuroscout’s repository to track this, and plan to incorporate this ability in a future version (https://github.com/neuroscout/neuroscout/issues/1076).

      With respect to feature versions, every time a feature is re-extracted, a new predictor_id is generated, and the accompanying meta-data such as time of extraction is tracked for that specific version. As such, if a feature is updated and re-extracted, this will not change existing analyses. By default, we have chosen to obscure this from the user to make the user experience simpler. However, there is an open issue to expand the frontend’s ability to explicitly display different versions, and allow users to update older analyses with newer versions of features. Advanced users already have access to this functionality by using the Python API (PyNS) to directly access all features, and create analyses with more precision.

      We have made a note regarding this behavior in the Neuroscout Documentation: https://neuroscout.github.io/neuroscout-docs/web/builder/predictors.html

      I also had some difficulty attempting to test out the platform, so additional user testing may be necessary to ensure that novice users are able to successfully run analyses.

We thank the reviewer for this bug report, which allowed us to fix a previously unnoticed issue with a subset of Neuroscout datasets. We have been in contact with the reviewer to ensure that this issue was successfully addressed.

    1. Three arguments for phenomenology as the most fundamental of all sciences, and how to refute them.

Apples-and-Oranges

Dennett accuses phenomenology of being structuralist psychology, and thus of suffering from the same problems.

      The major tool of structuralist psychology was introspection (a careful set of observations made under controlled conditions by trained observers using a stringently defined descriptive vocabulary). Titchener held that an experience should be evaluated as a fact, as it exists without analyzing the significance or value of that experience.

Zahavi replies: phenomenology is not structuralist psychology, but transcendental philosophy of consciousness. It studies the ‘nonpsychological dimension of consciousness,’ those structures that make experience possible.

      Consequently, it is transcendental, and immune to any empirical science, even though it has applications for empirical science.

      Phenomenology is not concerned with establishing what a given individual might currently be experiencing. Phenomenology is not interested in qualia in the sense of purely individual data that are incorrigible, ineffable, and incomparable. Phenomenology is not interested in psychological processes (in contrast to behavioral processes or physical processes).

      Phenomenology is interested in the very dimension of givenness or appearance and seeks to explore its essential structures and conditions of possibility. Such an investigation of the field of presence is beyond any divide between psychical interiority and physical exteriority, since it is an investigation of the dimension in which any object—be it external or internal—manifests itself. Phenomenology aims to disclose structures that are intersubjectively accessible...

Bakker replies: you can't do phenomenology except by thinking about your first-person experience, so phenomenology looks the same as structuralist psychology. Sure, phenomenologists would disagree, but no one outside their circle is convinced. Just standing tall and saying "but we take the phenomenological attitude!" is not going to cut it.

      first-person phenomena remain the evidential foundation of both. If empirical psychology couldn’t generalize from phenomena, then why should we think phenomenology can reason to their origins, particularly given the way it so discursively resembles introspectionism? Why should a phenomenological attitude adjustment make any difference at all?

      Ontological Pre-emption

      Zahavi: to do science, you need to assume intuition. Zombies can't do science.

      As Zahavi writes, “the one-sided focus of science on what is available from a third person perspective is both naive and dishonest, since the scientific practice constantly presupposes the scientist’s first-personal and pre-scientific experience of the world.”

      Reply: dark phenomenology shows that phenomenologists have problems that they can't solve unless they resort to third-person science -- they are not so pure and independent as they claim.

Reply: human metacognitive ability is a messy, inconsistent hack "acquired through individual and cultural learning", made of "whatever cognitive resources are available to serve monitoring-and-control functions":

      people exhibit widely varied abilities to manage their own decision-making, employing a range of idiosyncratic techniques. These data count powerfully against the claim that humans possess anything resembling a system designed for reflecting on their own reasoning and decision-making. Instead, they support a view of meta-reasoning abilities as a diverse hodge-podge of self-management strategies acquired through individual and cultural learning, which co-opt whatever cognitive resources are available to serve monitoring-and-control functions.

      Abductive

      Phenomenology is a wide variety of metacognitive illusions, all turning in predictable ways on neglect.

      If phenomenology is bunk, why do phenomenologists arrive independently on the same answers for many questions? Surely, it's because they are in touch with a transcendental truth, the truth about consciousness!

with a tremendous amount of specialized training, you can actually anticipate the kinds of things Husserl or Heidegger or Merleau-Ponty or Sartre might say on this or that subject. Something more than introspective whimsy is being tracked—surely!

      If the structures revealed by the phenomenological attitude aren’t ontological, then what else could they be?

Response: Phenomenology is psychology done by introspection. The discoveries of phenomenology are not ontological, transcendental, or prior to all sciences, but psychological, and can be reduced to other sciences like neuroscience and physics. Phenomenologists agree not because they are in touch with transcendental truth, but because they have similarly structured human brains.

      The Transcendental Interpretation is no longer the only game in town.

      Response: we can use science to predict what phenomenologists predict.

Source neglect: we can't perceive where conscious perception came from -- things simply "appear in the mind" without showing what caused them to appear, or in what brain region they were made. This is because the brain doesn't have time to represent sources for the fundamental perceptions, which must serve as a secure, undoubtable bedrock for all other perceptions. If they were not represented like bedrock, people would waste too much time considering alternative interpretations of them, and thus fail to reproduce well.

Scope neglect: we can't perceive the boundaries of perception. The visual scene looks complete, without a black boundary. As with source neglect, if the brain represented the boundary, then the boundary's boundary would also need to be represented, and so on; the infinite regress is cut off as soon as possible to save time.

      We should expect to be baffled by our immediate sources and by our immediate scope, not because they comprise our transcendental limitations, but because such blind-spots are an inevitable by-product of the radical neurophysiological limits

    1. Looking for books with wider margins for annotations and notes

      https://www.reddit.com/r/books/comments/wue2ex/looking_for_books_with_wider_margins_for/

      Not long after I posted this it had about 3 upvotes, including my automatic 1. It's now at 0, and there are several responses about not writing in books at all. It seems like this particular book community is morally opposed to writing in one's books! 🤣

      Why though? There's a tremendously long tradition of writing in books, and probably more so when they were far more expensive! Now they're incredibly inexpensive commodities, so why should we be less inclined to write in them, particularly when there's reasonable evidence of the value of doing so?

      I might understand not writing in library books as part of their value within the commons, but https://booktraces.org/ indicates that almost 12% or more of the books they've tracked prior to 1924 have some sort of mark, writing, or evidence that it was actively read.

      Given what I know of the second hand markets, it's highly unlikely that my books (marked up or not) will ever be read by another person.

      There's so much more to say here, but I just haven't the time today...

    1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



      Reply to the reviewers

      Manuscript number: RC-2022-01541

      Corresponding author(s): Hubert Hilbi

      1. General Statements

      Upon infection of eukaryotic host cells, Legionella pneumophila forms a unique compartment, the Legionella-containing vacuole (LCV). While the role of vesicle trafficking pathways for LCV formation has been quite extensively studied, the role of putative membrane contact sites (MCS) between the LCV and the ER has been barely addressed. In our study, we provide a comprehensive analysis of the localization and function of protein and lipid components of LCV-ER MCS in the genetically tractable amoeba Dictyostelium discoideum.

We would like to thank the three reviewers for their thorough and constructive reviews. Overall, the reviewers state that the study is of interest to researchers in the field of Legionella and other intracellular pathogens (Reviewer 2), as well as to cell biologists (Reviewer 3). Reviewer 1 does not ask for additional experiments but is critical of the overall structure of the manuscript and the proteomics approach. As requested by the reviewer, we have substantially restructured the revised manuscript, now clearly outline the hypotheses put forward in the study, and have streamlined the proteomics data. Reviewer 2 asks for additional experiments to support our model of LCV-ER MCS. In the revised manuscript, we have included additional experiments addressing lipid exchange at the MCS, and we plan to perform further co-localization experiments. Reviewer 3 appreciates the comprehensive LCV proteomics and asks for only minor revisions, which we have incorporated in the revised version of the manuscript. Below we include a point-by-point response to all the comments made by the reviewers.

      2. Description of the planned revisions

      Reviewer #2

      Major comment

      1) MCS contain protein complexes or a group of proteins, but the proteins here are studied in isolation and do not support the model shown in Figure 7. Co-localization studies of the putative LCV-ER MCS proteins are critical, especially given that the authors hypothesize the proteins are working together to modulate PI(4)P levels.

Response: As suggested by the reviewer, we will perform additional co-localization experiments with MCS components. To this end, we will construct mCherry-Vap, and we will co-transfect the parental D. discoideum strain Ax3 with plasmids producing mCherry-Vap and OSBP8-GFP or GFP-OSBP11. Using these dually fluorescence-labelled D. discoideum strains, the co-localization of Vap with the OSBPs will be assessed at 1, 2, and 8 h post infection. The data will be presented as fluorescence micrographs, and co-localization of Vap with the OSBPs will be quantified using Pearson’s correlation coefficient and fluorescence intensity profiles. The data will be outlined in the text (l. 258 ff.) and shown in the new Fig. 2 and Fig. S4.

      3. Description of the revisions that have already been incorporated in the transferred manuscript

      Reviewer #1 (Evidence, reproducibility and clarity):

In the manuscript by Vormittag, et al., the authors perform proteomics identification of proteins associated with the Legionella-containing vacuole (LCV) in the model amoeba Dictyostelium discoideum, comparing WT to atlastin knockout mutants. The authors find approximately half the D. discoideum proteome associated with the LCV, but there was enrichment of some proteins on the WT relative to the mutant. They focus on proteins involved in forming membrane contact sites (MCS) that previously were shown to be important for expansion of the Chlamydia-containing vacuole. Most significant are the oxysterol binding proteins (OSBP) and VapA (similar to that seen in Chlamydia). The authors show differential association of these proteins with either the LCV or presumably the ER associated with the LCV. Using a linear scale over 8 days, they show that mutations in some of the MCS components reduce yields in two of the OSBP knockout mutants, and the growth rate of the vap mutant is slowed but its ultimate yield is increased. Using some nice microscopy techniques, they measure LCV size, and the osbK mutant appears particularly small relative to other strains, whereas the osbH mutant generates large vacuoles. This doesn't necessarily correlate with the PI4P quantities on the vacuoles (which are higher in all of them), but I am not totally sure how this is measured, and whether it is PI4P/pixel or PI4P/LCV. In all cases, this was reduced by Sac1 mutation. Surprisingly, even though there was a uniform increase in PI4P in each of the mutants, loss of PI4P only affects localization of some of the proteins. Finally, in what seems to be a peripherally related experiment, the authors show that a pair of Legionella translocated effectors are required to maintain PI4P levels, although it is not clear how this is related to the other data in the manuscript.

It is not clear from the manuscript if the authors are just cataloging things or trying to test a hypothesis. This is an extremely difficult manuscript to read, and it is hard to reconstruct what the authors showed. I really think that the only people who will understand what is written are people who are familiar with the work in Chlamydia starting in 2011 in Engel's and Derre's laboratories, which clearly showed that MCS, and most specifically Vap/OSBPs, are involved in vacuole expansion. If the authors could rewrite the manuscript along these lines, perhaps comparing their data to the Chlamydia data, it would help a lot. Otherwise, I don't think anyone else will understand why they are focusing on these things. I don't recommend new experiments (although re-analyzing data is necessary), but the manuscript has to be taken apart, claims removed, and data interpreted properly. Otherwise, the manuscript seems like just a clearing house for data.

      Response: Thank you for the concise summary of our data and pointing out the need to restructure the manuscript and to clearly outline the hypotheses underlying the study. According to the reviewer’s suggestions, we have now re-structured the manuscript. In the revised manuscript the story unfolds from the observation that the ER tightly associates with (isolated) LCVs, and the proteomics approach is used as a validation of the presence of MCS proteins at the LCV-ER MCS.

      As suggested by the reviewer, we now highlight the seminal work on Chlamydia by the Engel and Derré laboratories not in the Discussion section (as in the original version of the manuscript) but already in the Introduction section (l. 142-148). We believe that it makes a stronger case to start out an analysis of LCV-ER MCS with a Legionella-specific cell biological finding (LCV-ER association) and an unbiased proteomics approach, as compared to a more derivative and defensive approach starting out with what is known about Chlamydia.

      The reviewer’s comment “This is an extremely difficult manuscript to read” appears overly harsh and conflicts with the positive evaluation of Reviewer #2 and Reviewer #3. Finally, we respectfully disagree with the reviewer’s statement that experiments characterizing L. pneumophila effectors implicated in the formation and function of LCV-ER MCS are peripheral. These experiments significantly contribute to a mechanistic understanding of how L. pneumophila forms and exploits LCV-ER MCS, and they are central for studies on pathogen-host interactions. The studies are analogous to the work on Chlamydia effectors by the Engel and Derré laboratories, but the mode of action of Legionella and Chlamydia effectors is obviously different. Another important distinction of our work to the studies on Chlamydia is the use of the genetically tractable amoeba, D. discoideum, which allows an analysis of LCV-ER MCS by fluorescence microscopy at high spatial resolution.

      Specific comments

1. The problems start with the first figure, in which the authors state that almost half the D. discoideum proteome is LCV-associated. I doubt that this is correct, and they should base this on some selective criterion. Furthermore, in Fig. 1A they show Venn diagrams for how they whittled this down, but the Supplemental Dataset gives us no clue on how this was done. I can only sit down myself with the dataset and try to figure that out, but that is an unreasonable expectation for the reader. The dataset provided should have a series of sheets describing how the large protein set was whittled down and how the proteins were sorted, so the reader can evaluate how robust the final results are. To me (at least), if they said: "look, we got this surprising result that suggests MCS are involved in promoting LCV formation, and this is well recognized in Chlamydia but poorly recognized in Legionella", that would be satisfactory.

      Response: According to the reviewer’s suggestions, we have now thoroughly re-structured the manuscript. In the revised manuscript the story unfolds from the observation that the ER tightly associates with LCVs in infected cells and with isolated LCVs. The proteomics approach is now used as a validation of the presence of MCS proteins at the LCV-ER MCS and relegated to the Supplementary Information section (former Fig. 1, now Fig. S3).

For the proteomics analysis, all protein identifications were filtered for robustness by applying a constant FDR (false discovery rate) of 0.01 at both the protein and PSM (peptide spectrum match) levels, which is a commonly accepted threshold in the field. Moreover, two identified unique peptides were required for protein identification. The parallel application of both filter criteria results in very robust and reliable data sets. This is outlined in the Material and Methods section (l. 683-693).

      In the data set of LCV-associated proteins, 2,434 D. discoideum proteins have been identified (Table S1). This is 18.5% of the total of 13,126 predicted D. discoideum proteins (UniprotKB) and considerably less than “almost half the D. discoideum proteome”, as stated by the reviewer. Moreover, 1,224 L. pneumophila proteins have been identified (among 3,024 predicted L. pneumophila proteins in the database). This is a reasonable number of proteins identified from an intracellular vacuolar pathogen, given the LCV isolation and proteomics methods applied. We now outline these findings more extensively in the Results section (l. 207-213). Moreover, to render Table S1 more reader-friendly, we added to the datasheet “All data” the datasheets “Dictyostelium”, “Legionella” and “Info”.

      The Venn diagram in Fig. S3A (previously Fig. 1A) does not show a subset of proteins “whittled down” from the entire proteomes, but simply summarizes LCV-associated proteins, which were either identified exclusively in the parental strain Ax3 but not in the Δsey1 mutant strain, or only in Δsey1 but not in Ax3, thus identifying possible candidates relevant for the LCV-ER MCS. This information is now outlined more clearly in the text (l. 238-241). Moreover, we now explicitly define in the Material and Methods section (l. 697-704) the “on” and “off” proteins shown in Fig. S3A.

The overall rationale for the comparative proteomics approach was our previous finding that, compared to the D. discoideum parental strain Ax3, the Δsey1 mutant strain accumulates less ER around LCVs (PMID: 28835546, 33583106). This finding suggests that formation of the LCV-ER MCS might be compromised in the Δsey1 mutant strain. This hypothesis is now outlined at the beginning of the Results paragraph (l. 204-207).

      I am clueless regarding how Fig. 6 fits with the rest of the manuscript. If this is about MCS, there is no demonstration these effectors are directly involved in MCS other than the somewhat diffuse argument that there is some correlative connection to PI4P levels, that I am not particularly convinced by.

Response: The PtdIns(4)P gradient between two different cellular membranes is an intrinsic feature of MCS. To date, a quantification of PtdIns(4)P levels on LCVs in response to the presence or absence of specific L. pneumophila effectors has been lacking. Accordingly, we opted to quantify the PtdIns(4)P levels on LCVs in the presence and absence of an L. pneumophila effector putatively generating PtdIns(4)P on LCVs (the phosphoinositide 4-kinase LepB) or titrating PtdIns(4)P on LCVs (the PtdIns(4)P-binding ubiquitin ligase SidC). To address the concerns of Reviewer 1 and Reviewer 3 (see below), we now outline in detail the rationale for assessing the role of LepB and SidC for MCS function (l. 385-387). Importantly, we now also provide data showing that PtdIns(4)P/cholesterol lipid exchange at LCV-ER MCS is functionally important (new Fig. 6 and Fig. S10). In the revised version of the manuscript, this new data precedes the experiments with the L. pneumophila effectors, which should render our choice of effectors more comprehensible to the reader and improve the flow of the manuscript.

      Line 146 and associated paragraph. We don't need a catalog of proteins in narrative. There is more detail in the narrative than there is in the tables and figures, which would be a more appropriate way to present the data.

      Response: As suggested by the reviewer, we summarized the LCV-associated D. discoideum proteins and considerably reduced the list in the text (l. 214-230).

      Line 186. There is nothing wrong with pursuing MCS based on the idea that this was seen before with Chlamydia and you wanted to test if this was a previously unappreciated aspect of Legionella biology. I don't see the rationale based on the proteomics, partly because I don't understand how the proteomics dataset was parsed.

      Response: As suggested by the reviewer, we thoroughly re-structured the manuscript and now highlight the seminal work on Chlamydia by the Engel and Derré laboratories already in the Introduction section (not in the Discussion section as in the original version of the manuscript). We believe that it makes a stronger case to start out an analysis of LCV-ER MCS with a Legionella-specific cell biological finding (LCV-ER association) and an unbiased proteomics approach, as compared to a more derivative and defensive approach starting out with what is known about Chlamydia.

Figure 3: These growth curves are super-weird. I am not used to looking at 8 days of logarithmic growth on a linear scale and seeing no (apparent) growth for 4 days. Considering all the microscopy data are from the first 18 hrs of infection, it's hard to see how this is related to data at 8 days post infection. If this were plotted on a logarithmic scale, as microbiologists are used to doing, then perhaps we could see a connection. Also, in some cases it might be helpful to calculate a growth rate, because the authors may then see some effects by comparing logarithmic growth rates.

Response: We have been characterizing growth of L. pneumophila in D. discoideum in several studies using growth curves with RFU vs. time plotted on a linear scale (e.g., Finsel et al., 2013, Cell Host Microbe 14:38; Rothmeier et al., 2013, PLoS Pathog 9: e1003598; Swart et al., 2020, mBio 11: e00405-20). The D. discoideum-L. pneumophila infection model is peculiar, since the amoebae do not survive temperatures above 26°C. This is substantially below the optimal growth temperature of L. pneumophila (35-40°C). This means that, due to the many genetic tools available, D. discoideum is an excellent model to investigate cell biological aspects of the infection at early time points (ca. 1-18 h p.i.), but the amoebae are not an optimal system to quantify several rounds of intracellular growth.

Figure 2: The images don't necessarily show what the bar graphs show. In particular, look at OSBP8. That image doesn't make sense to me.

Response: The individual channels of the merged images in Fig. 1 (formerly Fig. 2) are shown in Fig. S2. By looking at the individual channels, it becomes clear that OSBP8-GFP co-localizes with calnexin-mCherry (overlapping signals), but not with P4C-mCherry or AmtA-mCherry (adjacent signals). Co-localization was quantified in an unbiased manner using Pearson’s correlation coefficient. To further visualize co-localization, we now also provide fluorescence intensity profiles for all confocal micrographs (amended Fig. 1).

In summary, I think the authors hit on something that is probably important for Legionella biology, but it's not clear what they want to show. They are very invested in connecting everything to PI4P levels, which may or may not be correct, but it seems to me that taking more care to show the importance of the Vap/OSBP nexus in supporting Legionella growth should be the first priority.

      Response: Given the importance of the PtdIns(4)P gradient for lipid exchange at MCS, we believe it is justified to put considerable emphasis on this lipid. To further substantiate a functional role of PtdIns(4)P at LCV-ER MCS, we now also show that an increase in PtdIns(4)P at the LCV correlates with a decrease of cholesterol (new Fig. 6 and Fig. S10). The inverse correlation of these two lipids is in agreement with the notion that cholesterol is a counter lipid of PtdIns(4)P at LCV-ER MCS.

      It is not clear from the manuscript if the authors are just cataloging things or trying to test a hypothesis.

      Response: In the revised version of the manuscript, we put forward several specific hypotheses, which we then tested in our study (l. 152-155).

      If I understand Fig. 1, only one of the candidates (VapA) was verified as being more enriched in WT relative to atlastin mutants. This argues even more strongly that the authors have to describe their criteria for choosing these candidates.

      Response: As outlined above (specific point 1), we have now re-structured the manuscript according to the reviewer’s suggestions. In the revised manuscript the story unfolds from the observation that the ER tightly associates with LCVs in infected cells and with isolated LCVs. The proteomics approach is now used as a validation of the presence of MCS proteins at the LCV-ER MCS and relegated to the Supplementary Information section (formerly Fig. 1, now Fig. S3). We consider the proteomics approach a powerful hypothesis generator, and the experimental identification of several MCS proteins by proteomics validated the cell biological and bioinformatics insights.

      Reviewer #1 (Significance (Required)):

As stated above, the manuscript can't decide if it's about MCS or PI4P, and I would argue strongly that the emphasis on PI4P detracts from the manuscript, as does its failure to draw connections to previous work that is likely to be important.

Response: We respectfully disagree with the reviewer on this important point and hold that proteins as well as lipids are crucial functional determinants of MCS. The PtdIns(4)P gradient is pivotal for lipid exchange at MCS. Therefore, we believe it is justified to put considerable emphasis on this lipid. In the Introduction section, we now specify several hypotheses on the localization and function of lipids and proteins at LCV-ER MCS (l. 152-155). Moreover, we now also refer to the previous work on Chlamydia MCS in the Introduction section (l. 142-148).

      Reviewer #2 (Evidence, reproducibility and clarity):

      Summary of paper and major findings

Membrane contact sites (MCS) are locations where two membranes are in close proximity (10-80 nm). MCS have a defined protein composition, which tethers the membranes together and functions in small molecule and lipid exchange. Typically, MCS contain structural (e.g., tethers) and functional (e.g., lipid exchangers) proteins, in addition to proteins which regulate the structure and function of the MCS. In this manuscript, Vormittag et al describe protein components of MCS between the Legionella-containing vacuole (LCV) and the host endoplasmic reticulum (ER) in the amoeba Dictyostelium. Proteomics of isolated LCVs followed by microscopy analysis identified several proteins which localize to either the LCV-associated ER (OSBP8), the LCV (OSBP11), or both (VAP and Sac1). The mammalian homologs of these proteins have been shown to play important roles in ER MCS, with VAP serving a structural role, Sac1 a PI(4)P phosphatase regulating PI(4)P levels, and OSBP8 and OSBP11 lipid-transferring proteins. Given the importance of PI(4)P in formation and maintenance of the Legionella-containing vacuole, the authors used dicty mutants to determine the importance of these proteins in bacterial growth, LCV size, and PI(4)P levels on the LCV. While VAP and OSBP11 appear to promote Legionella infection, OSBP8 appears to restrict infection, although all identified MCS components appear to play a role in decreasing PI(4)P shortly after infection. Finally, VAP and OSBP8 localization to the LCV is PI(4)P-dependent. Overall, the authors conclude that these MCS components play a role in modulating PI(4)P levels on the LCV.

      Overall, this is an interesting study further exploring the role of PI(4)P in LCV-ER interactions, and how PI(4)P levels are regulated. The figures are clearly presented, there is an impressive amount of data, and rigor appears to be strong with appropriate replicates and statistical analysis. The phenotypes are often mild, but the authors are careful to not overinterpret the data. While this is an interesting study, additional experiments are necessary to support the overall model and the text needs to put the findings into the larger context.

      Response: We would like to thank the reviewer for this positive and constructive assessment. We performed and planned additional experiments to further strengthen the study and support our model.

      Major comments

      1) MCS contain protein complexes or a group of proteins, but the proteins here are studied in isolation and do not support the model shown in Figure 7. Co-localization studies of the putative LCV-ER MCS proteins are critical, especially given that the authors hypothesize the proteins are working together to modulate PI(4)P levels.

Response: To further explore the possible interactions between Vap and OSBP proteins, we plan co-localization experiments using D. discoideum strains producing mCherry-Vap and either OSBP8-GFP or GFP-OSBP11, as outlined above (Section 2, new Fig. 2 and Fig. S4).

Moreover, we included additional data on PtdIns(4)P/cholesterol lipid exchange (Fig. 6 and Fig. S10), which have been incorporated into the model (amended Fig. 8). Based on the available data, we do not postulate direct interactions between Vap and OSBP proteins. The previous model, which has now been amended, might have been misleading in that respect.

      2) The phenotypes are relatively mild, suggesting functional redundancy. Double knockouts, particularly in VAP and OSBP11, may generate a stronger phenotype that better supports the hypothesis and demonstrate the importance during infection.

      Response: Thank you for this interesting suggestion. Please see Section 4 below for our arguments, why we believe that this intriguing approach is beyond the scope of the current study.

3) The timing of PI(4)P and MCS protein localization during infection is critical to understanding how MCS might be functioning. Based on Figure 6C, PI(4)P levels decrease on the LCV during infection, but this is not fully explained in the context of what's known in the literature and what is observed in the previous figures. How does localization of different MCS components change during infection, and does this correlate with the changes in growth or LCV size? A better description in the Introduction of LCV-associated PI(4)P levels would help orient the reader to why PI(4)P levels are modulated.

      Response: As suggested by the reviewer, we added to the Introduction section more detail about the kinetics of PtdIns(4)P accumulation on LCVs (l. 65-71), and we discuss the limited spatial resolution of the IFC approach (formerly Fig. 6C, now Fig. 7C; l. 407-408). Importantly, we also provide new data showing that within 2 h p.i. an increase in PtdIns(4)P at the LCV coincides with a decrease of cholesterol (new Fig. 6 and Fig. S10). The new data is put into this context in the Discussion section (l. 449-454).

      4) OSW-1 has other targets besides OSBPs, and depleting Sac1 and Arf1 in A549 cells is not specifically targeting the MCS, as these proteins have other functions. The data in mammalian cells is not convincing and should be removed.

Response: As suggested by the reviewer, we removed the data on depleting Sac1 in A549 cells (Fig. 3D and Fig. S6BC). We propose to leave the pharmacological data on inhibition of L. pneumophila replication by OSW-1 in the manuscript, but to clearly point out that OSW-1 has other targets besides OSBPs (l. 297-299).

      Minor comments

      1) Figure 2 is missing details on number of experiments/replicates and statistical analysis.

      Response: Thank you for having noted this oversight. The number of independent experiments and statistical analysis have now been added to Fig. 1 (formerly Fig. 2) (l. 1009-1010).

      2) Can the authors hypothesize why VAP promotes growth early during infection, but appears to restrict growth at later timepoints (Figure 3A)?

Response: Thank you for raising this intriguing point. The opposite effects of Vap on growth at early and later timepoints during infection might be explained by interactions with antagonistic OSBPs. Vap likely co-localizes with OSBP8 as well as with OSBP11 on the limiting LCV membrane or the ER, respectively (experiment to be performed; Fig. 2 and Fig. S4). The absence of OSBP8 (ΔosbH) or OSBP11 (ΔosbK) causes larger or smaller LCVs, and increased or reduced intracellular replication of L. pneumophila, respectively. Thus, OSBP8 seems to restrict and OSBP11 seems to promote intracellular replication. Accordingly, if Vap affects or interacts with OSBP11 early and with OSBP8 later during infection, the opposite effects of Vap on growth might be explained. These considerations are now outlined in the Discussion section (l. 431-441).

3) There is a large amount of data, which makes it difficult at times to follow. I suggest adding additional information to Table 1, including LCV size and whether or not each protein's localization is PI(4)P-dependent.

      Response: Thank you for this suggestion. As proposed by the reviewer, we added the additional information to Table 1 (PtdIns(4)P-dependency of protein localization, LCV size).

      Reviewer #2 (Significance (Required)):

      Membrane contact sites during bacterial infection are a growing area of research. In Legionella, several papers point to the presence of MCS. Further, PI(4)P is known to be an important component on the LCV. This paper shows that MCS protein members are important in modulating LCV PI(4)P levels. The model as presented is not completely supported by the data as co-localization experiments are needed, along with more detailed analysis of how PI(4)P levels change over infection and the role of these MCS proteins in that process. This study will be of interest to those studying Legionella and other vacuolar pathogens. Area of expertise is on membrane contact sites and lipid biology.

      Response: Thank you very much for the overall positive and constructive evaluation.

      Reviewer #3 (Evidence, reproducibility and clarity):

The authors perform proteomic analysis of Legionella-containing vacuoles. They observe association of membrane contact site (MCS) proteins including VAP, OSBPs, and Sac1. Functional data indicate that these proteins contribute to PI4P levels on LCVs and their ability to acquire lipid from the ER to enable LCV expansion/stability. Overall, the paper is an important contribution to the field and builds upon a growing appreciation for MCS in the establishment of intracellular niches by microbial pathogens. I have only minor comments for the authors' consideration.

      Response: We would like to thank the reviewer for this enthusiastic assessment.

      Minor comments:

• line 145, "This approach revealed 3658 host or bacterial proteins identified on LCVs...". This number seems high... how does it compare to prior proteomic studies of pathogen-containing vacuoles?

      Response: As outlined above (reviewer 1, point 1), we have now changed the text (l. 207-213): “This approach revealed 2,434 LCV-associated D. discoideum proteins (Table S1), of a total of 13,126 predicted D. discoideum proteins (UniprotKB). Moreover, 1,224 L. pneumophila proteins were identified (among 3,024 predicted L. pneumophila proteins), which is a reasonable number of proteins identified from an intracellular bacterial pathogen within its vacuole with the proteomics methods applied (Herweg et al, 2015; Schmölders et al., 2017).”

      • line 160. Can the authors comment on why mitochondrial proteins are observed in their proteomic analysis? Are these non-specific background signals or reflecting relevant organelle contact?

      Response: The dynamics of mitochondrial interactions with LCVs and the effects of L. pneumophila infection on mitochondrial functions have been thoroughly analyzed (PMID: 28867389). This seminal work is now cited in the text (l. 227-230).

• line 268. It is reported that LCVs are smaller with MCS disruption at 2 and 8 h p.i. Does this also lead to instability or rupture of LCVs? And related to this, why would LCVs be bigger at 16 h with MCS disruption?

      Response: MCS components affect LCV size positively or negatively. E.g., the absence of OSBP8 (ΔosbH) or OSBP11 (ΔosbK) causes larger or smaller LCVs, and increased or reduced intracellular replication of L. pneumophila, respectively. However, as outlined in the Discussion section (l. 442-454), we believe that the relatively small size likely reflects a structural remodeling of the pathogen vacuole rather than a substantial LCV expansion. LCV rupture takes place only very late in the infection cycle (beyond 48 h) and is followed by lysis of the host amoeba (PMID: 34314090).

• lines 288 and 299, "data not shown": these data should be included in a supplemental figure.

Response: The data on the localization of GFP-Sac1 and GFP-Sac1_ΔTMD are included in Figs. 1A, 4A, 5AD, S2A, S7A, and S9 (l. 328, l. 339).

• line 327. The authors choose to focus on the role of LepB and SidC in MCS modulation. The rationale for choosing these two amongst the ca. 330 effectors was not given. Were other effectors also examined?

Response: LepB and SidC were chosen due to their activities producing or titrating PtdIns(4)P, respectively, and their LCV localization. This rationale is now given in the text (l. 385-387). No other effectors have been examined up to this point.

      Reviewer #3 (Significance (Required)):

      Comprehensive LCV proteomics of interest to field of cellular microbiology. Studies of MCS broadly relevant to cell biologists.

      Response: Thank you very much for the overall very positive evaluation.

      4. Description of analyses that authors prefer not to carry out

      Reviewer #2

      Major comment

      2) The phenotypes are relatively mild, suggesting functional redundancy. Double knockouts, particularly in VAP and OSBP11, may generate a stronger phenotype that better supports the hypothesis and demonstrate the importance during infection.

Response: Thank you for raising the important question of functional redundancy. We now outline this concept in the Discussion section (l. 427-429). A further analysis of the genetic and biochemical relationship between Vap and OSBP11 or OSBP8 is without doubt one of the most interesting directions for further studies on the topic of LCV-ER MCS.

The construction of a D. discoideum double mutant strain is time-consuming and usually takes 1-2 months. Provided that a Vap/OSBP11 double deletion mutant strain is viable and can be generated, it takes another 1-2 months to thoroughly characterize the strain regarding intracellular replication of L. pneumophila (Fig. 3), LCV size (Fig. 4), and PtdIns(4)P score (Fig. 5). Moreover, there is already a large amount of data in the paper (to quote Reviewer #2), and therefore adding new data might make it even harder to follow the story and focus on the key points. Finally, we believe that the planned co-localization experiments (Reviewer #2, point 1) and the new data on lipid exchange kinetics (new Fig. 6 and Fig. S10) fit the current story more coherently, and thus are more straightforward and informative than the generation and characterization of double mutant strains. For these reasons, we believe that the generation and characterization of D. discoideum double mutant strains is beyond the scope of the current study.

1. One can't help but notice that Dutcher's essay, laid out as it is in numbered sections of one or two paragraphs each, may stem from the fact that he used his own note-taking method to write it.

Each section seems to have its own headword followed by pre-written notes, in much the same way he indicates one should take notes in part 18.

It could be illustrative to count the number of paragraphs in each numbered section. Skimming, most are just a paragraph or two at most, while a few go as high as 5 or 6, though these are rarer. A preponderance of short one- or two-paragraph sections that would fill a single 3x5" card would tend to more directly support the claim, though it is also the case that one could have multiple attached cards on a single idea. In Dutcher's case it's possible that these were paperclipped or stapled together (does he mention using one side of the slip only, which is somewhat common in this area of the literature on note making?). It seems reasonably obvious that he's not doing more complex numbering or ordering the way Luhmann did, but he does seem to be actively using his method to create and guide his output directly (and even publishing it as such) in a way that supports his method.

Is this then evidence of his own practice? He does actively mention in several places links to section numbers, where he cross-references ideas on one card to ideas on another, thereby creating a network of related ideas even within the subject heading of his overall essay title.

      Here it would be very valuable to see his note collection directly or be able to compare this 1927 version to an earlier 1908 version which he mentions.

    1. Your guidelines for private and public gout discussions are here on my contact page.

Is it best to put private vs. public in the Contact Page? Or should it be separate?

I think it is best to include it in the Contact Page, because my perception is that it's all about contacting me. But I should do a poll on this.

      Also, this isn't just a Gout issue, so it needs to be addressed in other subject sites. And as it is a fundamental part of successful collaboration, I will repeat it in Food and Learning.

    1. I can't get behind the call to anger here, even if I don't approve of Apple's stance on being the gatekeeper for the software that runs on your phone.

      Elsewhere (in the comments by the author on HN), he or she writes:

      The biggest problem I try to convey is that you have no way of knowing you'll get the rejection

No, I think there were pretty good odds, even before the first iteration was submitted, that it would be rejected, based purely on the concept alone. This is not an app. It's a set of pages—only implemented with the iOS SDK (and without any of the affordances, therefore, that you'd get if you were visiting in a Web browser). For whatever reason, the author both thought this was a good idea and didn't review the App Store guidelines, and decided to proceed anyway.

      Then comes the part where Apple sends the rejection and tells the author that it's no different from a Web page and doesn't belong on the App Store.

Here's where the problem lies: at the point where you're

- getting rejections, and then
- trying to add arbitrary complexity to the non-app for no reason other than to try to get around the rejection

      ... that's the point where you know you're wasting your time, if it wasn't already clear before—and, once again, it should have been. This is a series of Web pages. It belongs on the Web. (Or dumped into a ZIP and passed around via email.) It is not an app.

      The author in the same HN comment says to another user:

      So you, like me, wasted probably days (if not weeks) to create a fully functional app, spent much of that time on user-facing functions that you would have probably not needed

      In other words, the author is solely responsible for wasting his or her own time.

      To top it off, they finish their HN comment with this lament:

      It's not like on Android where you can just share an APK with your friends.

      Yeah. Know what else allows you to "just" share your work...? (No APK required, even!)

      Suppose you were taking classes and wanted to know the rubric and midterm schedule. Only rather than pointing you to the appropriate course Web page or sharing a PDF or Word document with that information, the professor tells you to download an executable which you are expected to run on your computer and which will paint that information on the screen. You (and everyone else) would hate them—and you wouldn't be wrong to.

      I'm actually baffled why an experienced iOS developer is surprised by any of the events that unfolded here.

    1. Are we wrong about what it means to be human? Can we be wrong about basic facts of being human? It appears obviously true that to be human means to think, to feel, to have qualia, etc. But is that true?

      Think back to a time before you started reflecting about yourself. Call it "square one". Are we still in square one?

      Let’s refer to this age of theoretical metacognitive innocence as “Square One,” the point where we had no explicit, systematic understanding of what we were. In terms of metacognition, you could say we were stranded in the dark, both as a child and as a pre-philosophical species.

Sure, we have a special way of knowing about humans: an "inside view". I don't need to bring out a microscope or even open my eyes; I just need to attend to how I feel. However, this "inside view" method is not scientific, and it is not necessarily better than scientific methods. It's quite possible that the inside view has provided a lot of false knowledge about being human, which science would overthrow.

      our relation to the domain of the human, though epistemically distinct, is not epistemically privileged, at least not in any way that precludes the possibility of Fodor’s semantic doomsday.

So let's take a cold, hard look at intentional theories of mind. Can they stand up to scientific scrutiny? I judge them based on 4 criteria.

      Consensus: no. Wherever you find theories of intentionality, you find endless controversy.

Practical utility: maybe. Talking about people as if they have intentions is practically useful. However, talk of intentionality does not translate into any firm theory of intentions. In any case, people seem to be born spiritualists rather than mentalists; does that mean we should expect spirits to stand up to scientific scrutiny? No.

      Problem ecology: no. Fundamentally, intentional thinking is how we think when faced with a complex system with too many moving parts for us to think about it causally (mechanically). In that case, using intentional thinking to understand human thinking is inherently limiting -- intentional thinking cannot handle detailed causal information. There's no way to think about a human intentionally if we use details about its physiology. We can only think intentionally about a human if we turn our eyes away from its mechanical details.

      Agreement with cognitive science: no.

Stanislas Dehaene goes so far as to state it as a law: “We constantly overestimate our awareness — even when we are aware of glaring gaps in our awareness”

      Slowly, the blinds on the dark room of our [intentional-]theoretical innocence are being drawn, and so far at least, it looks nothing at all like the room described by traditional [intentional-]theoretical accounts.

In summary, out of 4 criteria, we find 3 against and 1 ambiguous. Intentional theories of mind don't look good!

    1. media mentors for their children

I like this term "media mentor." It's hard for me to wrap my head around the idea that most of my students don't remember life without smartphones. I think it's important to remind them that these tools are just that: tools. They are incredible when used in a purpose-driven way, but they don't need to be in our hands and readily available every second of every day.

    1. As well as a widespread sense of a cancelled future, young people are for the most part much more left-wing than their forebears. Keir Milburn has labelled this trend “Generation Left,” and argues that what is occurring is not just a culture war played out across generations, as has occurred in the past, but a fundamental recomposition of class largely along age lines in Britain, America, and the many Western European countries that saw left populist insurgencies.

This doesn't seem necessarily true, looking at the rise of right-wing politics in Europe. Though it's possible that this is fuelled by older generations.

    1. There are, fundamentally, two different methods of determining whether a product meets its specification.

If the specification is perfect, then the specification is the product; it's just as prone to bugs as the product itself. We necessarily either allow undefined behavior or claim that the specification is a first implementation of the product, which is then re-implemented as the product itself.

    1. who is invited to decision-making tables

This is a huge point, in that it touches on what feels like a true problem to solve within the design space: it's not just that community members' voices, especially those of marginalized community members, go unheard; they often aren't even present when decisions about the design are being made (e.g., facial recognition software that can't read non-white faces).

    1. Another consequence of restricted templates is that SSG documentation will then tell you to work in a third place: not in content, not in templates, but in “config code”. This is a problem because A) it’s probably one place too many and B) it’s often unclear at what time this config code is running or how often: once for every page generation or template run, or just once at the start of a build?

      I agree with this. I hate the config code and I get away with using almost none of it.

    2. Sometimes it’s just not possible to work with content hierarchies and partial content. (Most JavaScript static site generators fall into that category.)

      ???

      I was literally doing this in 11ty the other day. It definitely required a developer-like approach, but "not possible" is a big claim.

    1. And the good news about it is that you can actually train your attention, and it’s not that difficult. In fact, almost every contemplative meditation discipline has to do with just sitting down and paying attention to your breath and noticing how your attention changes. There is a saying that comes from the neuroscientists that neurons that fire together are wired together. When you begin paying attention to your attention, you are developing a capability that enables you to have more control over what’s occupying your mind space.

      attention as mindfulness, and as a muscle to train.

1. I'll just say that DALL-E shows how racing against the machines is hardly our only option. We can dance with them too, using AI collaboratively and synergistically, in ways that radically amplify and extend our human skills and capabilities.

Love this framing, though I think it's naive given the centralization of power that these systems are a result and product of. See The Age of Surveillance Capitalism.

    1. Kimberlé Crenshaw, “The Urgency of Intersectionality,” TED Talk

It is unfortunate that the ones who are supposed to be helping are now becoming corrupted, such as the lawyer who dealt with Emma. How is it fair that her case got dismissed just because of the color of her skin and her gender? She wanted to take a stand for herself and fight for what's right, and it's so disappointing that someone of the law was so ignorant about this.

    1. Reviewer #3 (Public Review):

      This study aims to determine whether the chromosome defects induced by a bacterial endosymbiont in insects' developing embryos are a direct result of paternal chromosome defects from early embryogenesis or due to a second, independent set of defects that arise later: "we addressed whether defects observed in late CI embryos such as chromosome segregation errors and nuclear fallout are the result of first division errors or a second, distinct CI-induced defect."

      Using crosses, genetics, and fluorescent microscopy, the study claims that the defects at different embryonic stages are due to independent processes, and this work thus has mechanistic relevance to how bacteria inflict developmental harm on insect embryogenesis. The claim is not well supported by the weight of the evidence in this paper and the literature.

      The work is technically sound and proficiently completed to an expert level with appropriate statistics, but it does not provide straight-line evidence to substantiate the primary claim of the paper that later-stage embryos die for different reasons than early-stage embryos. That is no fault of the experimental rigor but rather due to the difficulty of directly answering this question. It appears the field has insufficient information on the reductionist, bacterial mechanism that induces embryonic death, namely, what exactly is modified by the bacteria to cause embryonic death? As such, the authors hedge that by studying different developmental stages of the embryonic defects, the answer can be surmised. However, a simple explanation for how late- and early-stage embryos could die from similar mechanisms is that host cellular conditions are more or less susceptible to the same bacterial-induced change of the insect chromosomes (e.g., new chemical marks on the DNA). It's just not possible to rule this out until the acute mechanism of killing is known. For instance, some embryos may vary in their transcriptomes, proteomes, physiology, etc., within a single family of fly offspring, and as such these varying embryos may be more or less susceptible to the same proximal cause of the bacteria-mediated defects; the difference is just when the defects take place in development. Without knowing the bacterial mechanism of death (e.g., changes in chemical marks of the fly DNA), the study here can characterize broad strokes of chromatin biology while speculating on the weight of the evidence for whether or not different mechanisms are at play.

      To evaluate the primary question of whether or not there are completely separate defects across development, the study shows several pieces of data that offer a finer resolution of the broad defects of embryos that were previously characterized by the literature. The new follow-up details are robustly supported and include percentages of embryos experiencing a defect, nuclear fallout, determination of haploidy/diploidy, sequencing depths, Y chromosome tracking, and developmental-staged characterizations of the chromatin defects. However, according to the text, there is effectively a single type of data that speaks to the main question of the paper - whether or not viable embryos that escaped the first mitosis had increased mitotic errors during later developmental stages.

      "Therefore, the significant increase in mitotic errors observed in diploid CI-derived embryos relative to wild-type derived embryos demonstrates the existence of a second, CI-induced defect, completely separate from the first division defect." This was already known; later-stage, chromatin defects do occur in a variety of insect species cited in the paper. In effect, the question answers itself because, in order to traverse an early lethal state that does not occur, there must be defects that ensue later in development, several of which have already been characterized, though to a lesser resolution than this study.

      Moreover, the study does not link the staged chromatin errors to the CI genes using transgenic tools that are now customary in this field. That work is quite relevant to the conclusion of the paper because the authors speculate in the discussion that additional CI genes may be necessary to explain the later defects in embryogenesis versus the initial defects. This work has been completed to a degree by the papers reporting the initial discovery of the CI genes. CI transgene expression in males causes both first-mitosis and later chromatin defects, suggesting additional genes are not necessary to explain lethality after the first mitosis. This, to me, is perhaps the most significant counterpoint to the narrative of the paper's claim, because the acute genetic cause of CI can lead to differently timed chromatin errors.

      This is solid work and a strong effort to refine the stages and types of embryonic lethality induced by bacteria; however, the claim that there are different acute mechanisms of death during embryogenesis is not well supported.

    1. David Cournape made the interesting observation that no technology less than 20 years old is better than Python in terms of stability.

      Not true. The standardized Web platform is not just more stable than Python; it's probably one of the most stable platforms in existence. It just moves (read: improves) slowly.*

      Most people don't recognize this, because they pack the stable, standardized parts all in the same mental bag as the shiny, non-standardized parts (like Flash, for an example of decades past, or experimental vendor-specific stuff for a more evergreen problem), then end up getting bit in a similar way and blaming the environment rather than communal tendencies towards recklessness.

      It also turns out that the Web's original academic underpinnings are deeply aligned with the goals of researchers looking for a way to work together—even if the final puzzle piece (ECMA-262) didn't come into existence until later in the decade when the Web first appeared. It's still old enough that it passes the litmus test proposed here.

      * for good reason; after all, look at the evidence

    1. “In the beginning, your skills are raw, your knowledge is sparse, and you lack experience. At best, you will be able to produce work that is “just okay.” And even then, you'll only manage to reach “just okay” by giving your best effort. Nobody wants to produce something that is “just okay.” You'll feel like it's beneath your standards. You'll worry about what others think of you. You'll wonder whether you would be better off taking a different path. But it is impossible to reach that stage unless you are willing to work through your current stage. And so, one of the main obstacles between who you are and who you could be is courage. The courage to keep trying even if you're not yet as good as you hope. The courage to keep trying despite your fears of what others may think. The courage to keep trying without knowing how the future will unfold. Your great work is on the other side of your early work. The only way to be exceptional later on is to have the courage to be “just okay” right now. This is how it is for everyone.”

      The best description of how it feels to be a PhD student that I've ever read!

    1. Therefore, it is intriguing to realize what I am doing is, in fact, prompt engineering.Prompt engineering is the term AI researchers use for the art of writing prompts that make a large language model output what you want. Instead of directly formulating what you want the program to do, you input a string of words to tickle the program in such a way it outputs what you are looking for. You ask a question, or you start an essay, and then you prompt the program to react, to finish what you started.

      I take to the term prompt engineering. Designing prompts is important in narrative research, just as much as in AI, and in e.g. workshop settings. It's definitely a skill. "Conversational prompts" describes blog posts too.

    1. Remember, there were so many gays in the city, they were so visible, and some of the men were so outrageously gay--the gay parade, for instance, with its transvestites and so on--that it turned off an awful lot of the heterosexual community that wouldn't have been too bothered by the presence of gays if there hadn't been so many and they hadn't been so aggressively "out."

      all the way back in the 70s. i don't know that i've consciously thought about this, but capital P "Pride" is not a civil rights effort or whatever, it's just the apex of gay flamboyance and aesthetic imperialism

    1. why wage inflation remained persistently low in most developed countries before the pandemic, no one really knows for sure, including me, of course. However, my best guess is that the globalisation of the workforce and technology are two of the biggest contributors. Workforce globalisation means that Australian businesses, large and small, can hire offshore staff at a significantly lower cost which creates downward pressure on wages here in Australia (for some jobs). Technology assists by achieving greater efficiency, especially through automation. This means businesses need fewer staff. If the pandemic has taught us one thing, it’s that lots of people can do their job just as well, remotely. I think this will exacerbate workforce globalisation and continue to weigh on wage inflation. In addition, if the RBA’s aggressive rate hikes send Australia into a recession, which is entirely possible, it will risk undoing all its pre-pandemic efforts to generate sustainable wage inflation.

      .c2

    1. I wanted to build something (a runtime), so the projects were mostly impervious to reality. Any corrective feedback from the world had to get past my preconception that, what ever the solution, it had to involve the runtime I was building.

      This isn't the way to build products! Identify the actual user and be honest about it. If it's just me then build something I will use; otherwise you'll pretend that the service is for people it's not actually for, then feel frustrated when the tool doesn't seem to align with those fake standards. Tools with specific technical stipulations that don't impact the end user in a measurable way are built for you and/or for learning.

    1. Author Response

      Reviewer #1 (Public Review):

      Edmondson et al. develop an efficient coding approach to study resource allocation in resource constrained sensory systems, with a particular focus on somatosensory representations. Their approach is based on a simple, yet novel insight. Namely - to achieve output decorrelation when encoding stimuli from regions with different input statistics, neurons in the sensory bottleneck should be allocated to these regions according to jointly sorted eigenvalues of the input covariance matrix. The authors demonstrate that, even in a simple scenario, this allocation scheme leads to a complex, non-monotonic relationship between the number of neurons representing each region, receptor density and input statistics. To demonstrate the utility of their approach, the authors generate predictions about cortical representations in the star-nosed mole, and observe a close match between theory and data.
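
      To make the allocation rule just described concrete, here is a minimal numerical sketch of jointly sorted eigenvalue allocation (our illustration, not the authors' code; the exponential covariance, the 100-receptor grids, and all parameter values are assumptions chosen for demonstration):

      ```python
      import numpy as np

      def region_eigenvalues(n_receptors, length_scale, variance):
          # Covariance K(x, x') = variance * exp(-|x - x'| / length_scale),
          # sampled at n_receptors evenly spaced points on [0, 1].
          x = np.linspace(0.0, 1.0, n_receptors)
          K = variance * np.exp(-np.abs(x[:, None] - x[None, :]) / length_scale)
          return np.linalg.eigvalsh(K)[::-1]  # eigenvalues in descending order

      def allocate(eigs_a, eigs_b, bottleneck_size):
          # Jointly sort both regions' eigenvalues and count how many of the
          # top `bottleneck_size` eigenvalues come from each region.
          region = np.concatenate([np.zeros_like(eigs_a), np.ones_like(eigs_b)])
          top = np.argsort(np.concatenate([eigs_a, eigs_b]))[::-1][:bottleneck_size]
          n_b = int(region[top].sum())
          return bottleneck_size - n_b, n_b

      # Region B is twice as active (variance ratio 1:2); equal receptor density.
      eigs_a = region_eigenvalues(100, length_scale=0.1, variance=1.0)
      eigs_b = region_eigenvalues(100, length_scale=0.1, variance=2.0)
      for m in (20, 100, 180):
          print(m, allocate(eigs_a, eigs_b, m))
      ```

      Even in this toy setting, the split between regions is not proportional to either the variances or the receptor counts, and it shifts as the bottleneck size m grows, which mirrors the non-trivial dependence the review describes.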

      Strengths:

      These results are certainly interesting and address an issue which, to my knowledge, has not been studied in depth before. Touch is a sensory modality rarely mentioned in theoretical studies of sensory coding, and this work contributes to this direction of research.

      A clear strength of the paper is that it demonstrates the existence of non-trivial dependence between resource allocation, bottleneck size and input statistics. Discussion of this relationship highlights the importance of nuance and subtlety in theoretical predictions in neuroscience.

      The proposed theory can be applied to interpret experimental observations - as demonstrated with the example of the star-nosed mole. The prediction of cortical resource allocation is a close match to experimental data.

      We thank the reviewer for the feedback. Indeed, demonstrating an ‘interesting’ effect in even such a simple model was one of the main aims.

      Weaknesses:

      The central weakness of this work is the set of strong assumptions which are not clearly stated. As a result, the consequences of these assumptions are not discussed in sufficient depth, which may limit the generality of the proposed approach. In particular:

      1) The paper focuses on a setting with vanishing input noise, where the efficient coding strategy is to reduce the redundancy of the output (for example, through decorrelation). This is fine; however, it is not a general efficient coding solution as indicated in the introduction - it is a specific scenario with concrete assumptions, which should be clearly discussed from the beginning.

      2) The model assumes that the goal of the system is to generate outputs whose covariance structure is an identity matrix (Eq. 1). This corresponds to three assumptions: a) variances of output neurons are equalized, b) the total amount of output variance is equal to M (i.e. the number of output neurons), c) the activity of output neurons is decorrelated. The paper focuses only on assumption c), and does not discuss the consequences or biological plausibility of assumptions a) and b).

      We have clarified the assumptions in the revised version. The original version did not distinguish clearly between assumptions that were necessary to allow study of the main effect, and assumptions that were included to present a full model but that could have been chosen otherwise without affecting the results.

      This has now been made much clearer. Regarding the noise issue (point 1), we have clarified the main strategy pursued by the model, namely decorrelation; we acknowledge other possible strategies, and we make clear whether and how noise could be incorporated into the model. Regarding the biological plausibility of our assumptions (point 2),

      Reviewer #2 (Public Review):

      The authors propose a new way of looking at the amount of cortical resources (neurons, synapses, and surface area) allocated to process information coming from multiple sensory areas. This is the first theoretical treatment attempting to answer this question within the framework of efficient coding, which states that information should be preserved as much as possible throughout the early sensory stages. This is especially important when there is an explicit bottleneck such that some information has to be discarded. In the current paper, the bottleneck is quantified as the number of dimensions in a continuous space. Using only the second-order statistics of the stimulus, and assuming that only the second-order statistics carry information, the authors use variance instead of Shannon information. The result is a non-trivial analysis of ordering in the eigenvalues of the corresponding representations. Using clever mathematical approximations, the authors arrive at an analytical expression -- advantageous since numerical evaluation of this problem is tricky due to the long, thin tails of the eigenvalues of the chosen covariance function (common in decaying translation-invariant covariances). By changing the relative stimulus power (activity ratio), receptor density (effectively the width of the covariance function), and the truncation of dimensions (bottleneck width), they show that the cortical allocation ratio, surprisingly, is a non-trivial function of such variables. There are a number of weaknesses in this approach; however, it produces valuable insights that have the potential to start a new field of studying such resource allocation problems across different sensory systems in different animals.

      Strengths

      • A new application of the efficient coding framework to a neural resource allocation problem given a common bottleneck for multiple independent input regions. It's an innovation (initial results presented at NeurIPS 2019) that brings normative theory with qualitative predictions that may shed new light on seemingly disproportionate cortical allocations. This problem did not have a normative treatment prior to this paper.

      • New insights into the allocation of encoding resources as a function of bottleneck, stimulus distribution, and receptor density. The cortical allocation ratios have nontrivial relations that were not shown before.

      • An analytical method for approximating ordered eigenvalues for a specific stimulus distribution.

      Weaknesses

      The analysis is limited to noiseless systems. This may be a good approximation in the high signal-to-noise ratio regime. However, since the analysis of the allocation ratio is very sensitive to the tail of the eigenvalue distribution (and the eigenvalues' relative rank order), not all conclusions from the current analysis may be robust. Supplemental figure S5 perhaps paints a better picture, since it defines the bottleneck as a function of total variance explained instead of the number of dimensions. The non-monotonic nonlinear effects are indeed mostly in the last 10% or so of the total variance.

      We agree that the model is most likely to apply in the low-noise regime, as stated in the Discussion. The robustness of the results is indeed a worry: we encountered some difficulties when calculating model results numerically due to the issue pointed out by the reviewer, and this led us to focus on an analytical approach in the first place. However, to test model robustness we have now included numerical results for several other covariance functions to demonstrate that, at least qualitatively, the results presented in the paper are not simply a consequence of the particular correlation structure we investigated.
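
      To illustrate the alternative bottleneck definition raised in this exchange (a fraction of total variance explained rather than a fixed number of dimensions), the earlier allocation sketch could be adapted as follows; again, this is our illustration under the same toy assumptions, not the authors' code:

      ```python
      import numpy as np

      def allocate_by_variance(eigs_a, eigs_b, fraction):
          # Keep jointly sorted eigenvalues until `fraction` of the total
          # variance is explained, then count retained dimensions per region.
          region = np.concatenate([np.zeros_like(eigs_a), np.ones_like(eigs_b)])
          eigs = np.concatenate([eigs_a, eigs_b])
          order = np.argsort(eigs)[::-1]
          cum = np.cumsum(eigs[order]) / eigs.sum()
          kept = order[: int(np.searchsorted(cum, fraction)) + 1]
          n_b = int(region[kept].sum())
          return len(kept) - n_b, n_b
      ```

      Defining the bottleneck this way (as in figure S5) makes the comparison less sensitive to the long, thin eigenvalue tail, since the last few percent of variance contribute many dimensions but little signal.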

      In the case where the stimulus distribution is Gaussian, the proposed covariance implies that the stimulus distribution is limited to spatial Gaussian processes with an Ornstein-Uhlenbeck prior with two parameters: (inverse) length-scale and variance. While this special case allowed the authors to approach the problem analytically, it is not a widely used natural stimulus distribution as far as I know. This assumed covariance in the stimulus space is quite rough, i.e., each realization of the stimulus is spatially continuous but isn't differentiable. In terms of texture, this corresponds to rough surfaces. Of course, if the stimulus distribution is not Gaussian, this may not be the case. However, the authors only described the distribution in terms of the covariance function, and the manuscript lacks the additional detail needed to fill in this gap.

      We would argue that a somewhat 'rough' covariance structure might be relatively common: for example, in vision, objects have clear borders, leading to a power-law relation, and similarly in touch, objects are either in contact with the skin or they are not. In either case, we have now extended the analysis to test several other covariance functions numerically. We found that, qualitatively, the main effects described in the paper were still present, though they could differ quantitatively. Interestingly, the convergence limit appeared to depend on the roughness/smoothness of the covariance function, indicating that this might be an important factor.
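
      As an aside, the roughness distinction discussed above is easy to see by sampling Gaussian processes with an exponential (Ornstein-Uhlenbeck) kernel versus a squared-exponential one. The sketch below is purely illustrative (arbitrary length scale, grid, and seed, not taken from the paper):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 1.0, 200)
      d = np.abs(x[:, None] - x[None, :])

      K_ou = np.exp(-d / 0.1)         # exponential (OU) kernel: rough samples
      K_se = np.exp(-(d / 0.1) ** 2)  # squared-exponential kernel: smooth samples

      jitter = 1e-6 * np.eye(len(x))  # stabilises the Cholesky factorisation
      z = rng.standard_normal(len(x))
      sample_ou = np.linalg.cholesky(K_ou + jitter) @ z
      sample_se = np.linalg.cholesky(K_se + jitter) @ z

      # Mean absolute increment as a crude roughness measure: the OU sample's
      # increments scale like sqrt(dx), the smooth sample's like dx.
      print(np.abs(np.diff(sample_ou)).mean(), np.abs(np.diff(sample_se)).mean())
      ```

      The OU trace is continuous but visibly jagged at every scale, which is the "rough surface" intuition in the exchange above.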

      The neural response model is unrealistic: neuronal responses are assumed to be continuous with arbitrary variance. Since the signal is carried by the variance in this manuscript, the resource allocation counts the linear dimensions in which this arbitrary variance can be encoded. Suppose there are 100 neurons that encode a single external variable, for example, a uniform pressure-plate stimulus that matches the full range of each sensory receptor. For these stimulus statistics, the variance of all neurons can be combined into a single cortical neuron with 100 times the variance of a single receptor neuron. In this contrived example, the problem is that the cortical neuron can't physiologically have 100 times the variance of the sensory neuron. This study lacks the power constraint that most efficient coding frameworks have (e.g. Atick & Redlich 1990).

      We agree that the response model, as presented, is very simplistic. However, the model can easily be extended to include a variety of constraints, including power constraints, without affecting the results at all. Unfortunately, we did not make this clear enough in the original version. The underlying reason is that decorrelation does not uniquely specify a linear transform and the remaining degrees of freedom can be used to enforce other constraints. As the allocation depends only on the decorrelation process (via PCA), we do not explicitly calculate receptive fields in the paper and any additional constraints (power, sparsity) would affect the receptive fields only and so were left out in the original specification. We have now added clearer pointers for how these could be included and why their inclusion would not affect the present results.
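
      The non-uniqueness point in this response can be checked in a few lines: if a transform W whitens the input covariance, then so does QW for any orthogonal Q, so constraints such as per-neuron power could then be enforced through the choice of Q without changing the decorrelation. A minimal sketch (our illustration, not the authors' code):

      ```python
      import numpy as np

      rng = np.random.default_rng(1)
      X = rng.standard_normal((5, 5)) @ rng.standard_normal((5, 10000))  # correlated inputs
      C = np.cov(X)

      evals, evecs = np.linalg.eigh(C)
      W = np.diag(evals ** -0.5) @ evecs.T              # one whitening transform

      Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))  # a random orthogonal rotation
      for name, M in (("W", W), ("QW", Q @ W)):
          out = M @ C @ M.T                             # output covariance under M
          print(name, "whitens:", np.allclose(out, np.eye(5), atol=1e-6))
      ```

      Both transforms yield identity output covariance, so the allocation (which depends only on the PCA step) is unchanged; the rotation Q is exactly the remaining degree of freedom that could absorb a power or sparsity constraint on the receptive fields.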

      The star-nosed mole analysis shows that the usage statistics (translated to activity ratio) explain the cortical allocation better than the receptor density does. However, the evidence presented for the full model being better than either factor alone is weak.

      We agree that the results do not present definitive evidence that the model directly accounts for cortical allocations and as we state in the paper, much stronger tests would be needed. Our idea here was to test whether, in principle, the model predictions are compatible with empirical evidence and therefore whether such models could become plausible candidates for explaining neural resource allocation problems. This seems to be the case, even though the evidence in favour of the ‘full model’ versus the ‘activity only’ model is indeed not overwhelming (though this might be expected as the regional differences in activity levels are much greater than those in density). We have now added additional tests to show that the results are not trivial. We would also like to note that it is not obvious that the ‘full’ model would perform better than the ‘activity only’ model: for either we choose the best-fitting bottleneck width (as the true bottleneck width is unknown), and therefore the degrees of freedom are equal (with both activity levels and densities fixed by empirical data).

      Reviewer #3 (Public Review):

      This work follows on a large body of work on efficient coding in sensory processing, but adds a novel angle: How do non-uniform receptor densities and non-uniform stimulus statistics affect the optimal sensory representation?

      The authors start with the motivating example of fingers and tactile receptors, which is well chosen, as it is not overstudied in the efficient coding literature. However, the connection between their model and the example seems to break down after a few lines, when the authors state that they treat individual regions as independent and set the covariance terms to zero. For fingers, e.g., that would seem highly implausible, because we typically grasp objects with more than one finger, so that they are frequently coactivated.

      Our aim was to take a first stab at a model that could theoretically account for neural resource allocation under changes in receptor density and activity levels, and by necessity this initial model is rather simple. Choosing a monotonically decreasing covariance function along with some other simplifications allowed us to quantify the most basic effects, and do so analytically. Any future work should take more complex scenarios into account. Regarding the sense of touch, we agree that the correlational structure of the receptor inputs will be more complex than assumed here, however, whether and how this would affect the results is less clear: Across all tactile experiences (not just grasps, but also single finger activities like typing), cross-finger correlations might not be large compared to intra-finger ones. Unfortunately, there is currently relatively little empirical data on this. That said, we agree with the broader point that complex correlational structure can be found in sensory systems and would need to be taken into account when efficiently representing this information.

      The bottleneck model posited by the authors requires global connectivity as they implement the bottleneck simply by limiting the number of eigenvectors that are used. Thus, in their model, every receptor potentially needs to be connected with every bottleneck neuron. One could also imagine more localized connectivity schemes that would seem more physiologically plausible given the observed connectivity patterns between receptors and relay neurons (e.g. in LGN in the visual system). It would be very interesting to know how this affects the predictions of the theory.

      We agree that the model in its current form is not biologically plausible. While individual receptive fields can be extremely localised, the initial allocation of neurons to regions we describe in the paper relies on a global PCA, and it is not clear how this might be arrived at in practice under biological constraints. However, our aim here was to specify a normative model that generates the optimal allocation and thereby answer what the brain should be doing under ideal circumstances. Future work should definitely ask whether and how these allocations might be worked out in practice and how biological constraints would affect the solutions.

      The representation of the results in the figures is very dense and due to the complex interplay between various factors not easy to digest. This paper would benefit tremendously from an interactive component, where parameters of the model can be changed, and the resulting surfaces and curves are updated.

      We have aimed to make the figures as clear as possible, but do appreciate that the results are relatively complex as they depend on multiple parameters. The code for re-creating the figures is available on Github (https://github.com/lauraredmondson/expansion_contraction_sensory_bottlenecks), making it easy to explore scenarios not described in the paper.

      For parts of the manuscript, not all conclusions made by the authors seem to follow directly from the figures. For example, the authors interpret Fig. 3 as showing that the activation ratio determines more strongly than the density ratio whether a sensory representation expands or contracts. This is true for small bottlenecks, but for relatively generous ones it seems to be the other way around. The authors' interpretation, however, fits better with the next paragraph, where they argue that sensory resources should be relatively constant across the lifespan of an animal, and only stimulus statistics adapt. However, there are notable exceptions: in a drastic example, zebrafish completely change the sensory layout of their retina between larval and adult stages.

      We have amended the text for this section in the paper to more closely reflect the conclusions that can be drawn from the figure. These are summarised below.

      The purpose of Fig. 3B is to show that knowledge of the activation ratio provides more information about the possible regime of the bottleneck allocations. We cannot tell the magnitude of the expansion or contraction from this information alone, or where in the bottleneck the expansion or contraction would occur. Typically, when we know the activation ratio only, we can tell whether regions will be expanded or contracted, or whether both occur, over all bottleneck sizes. For a given activation ratio (for example, a = 1:2, as shown in Fig. 3B), we know that the lower activation region can be either contracted only, or both expanded and contracted over the course of the bottleneck. In this case, regardless of the density ratio, the lower activation region cannot be expanded only. Conversely, for any density ratio (see dashed horizontal line in Fig. 3B), allocations can be in any regime.

      In the final part of the manuscript, the authors apply their framework to the star nosed mole model system, which has some interesting properties; in particular, relevant parameters seem to be known. Fitting to their interpretation of the modeling outcomes, they conclude that a model that only captures stimulus statistics suffices to model the observed cortical allocations. However, additional work is necessary to make this point convincingly.

      We have now included a further supplementary figure panel providing more details on the fitting procedure and results for each model. Given that we fit over a wide range of bottleneck sizes, where allocations for each ray can vary widely (see Figure 6, supplement 1A), we tested an additional model to confirm that the model requires accurate empirical density and/or activation values for each ray to provide a good fit to the cortical data. Here we randomise the values for the density and activation of each ray within the possible range of values for each. We find that with this randomisation the model fits poorly, even over a range of bottleneck sizes. This suggests that the model can only be fitted to the empirical cortical data when using the empirically measured values.