1. Last 7 days
    1. "I had zero ideas about what to say,"

      There is a clear difference between writing essays and actually writing for an audience. You really have to take into account what people might think or how they may perceive your views.

    2. Okolloh also wrote about daily life, posting pictures of her baby and discussing the joys of living in Nairobi, including cabdrivers so friendly they'd run errands. From Clive Thompson's book Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Penguin Press, 2013. This is from chapter 2, "Public Thinking."

      Although she was unsure about what content to write about in her blog at first, she included anything and everything that she felt passionate about.

    1. Other times, they show up at night, when you are trying to sleep and don't want to deal with them.

      This is so true, and this is the reason that I can't fall asleep at night.

    2. Write down up to 5 negative thoughts, worries, concerns, or ruminations

      The first step already stresses me out; I am still learning how to face stress the right way.

    3. Gain practice breaking down larger problems into smaller, more manageable ones.

      This is how I relieve my stress. When I have work and schoolwork at the same time, I feel very stressed. In that situation, I solve the easier problem first to balance my time.

    1. The school Psychiatrist must Xamine it physically and mentally and issue a full report. If X's test showed it was a boy, it would have to obey all the boys' rules. If it proved to be a girl, X would have to obey all the girls' rules, and if X turned out to be some kind of mixed-up misfit, then X should be Xpelled from the school.

      This passage shows it isn't just Baby X's external family pushing for conformity, but the wider community as well. The parents' association treats X as a "problem" and pushes for labels, showing how strongly outsiders will enforce gender roles. Their extreme reactions show that it is social resistance, not the child itself, that causes trouble.

    2. Clearly, nothing at all was wrong. Nevertheless, none of the relatives felt comfortable about buying a present for a Baby X. The cousins who sent the baby a tiny football helmet would not come and visit anymore. And the neighbours who sent a pink-flowered romper suit pulled their shades down when the Joneses passed their house.

      I think this part is pretty interesting because it perfectly shows how society clings to gender roles. It shows that the real challenges of raising baby X did not come from the child, but from social pressures.

    1. Last but certainly not least here, LLMs require a tremendous amount of energy throughout the entire lifecycle, from building data centers to training AI to processing user queries. Looking at just one company, Google’s carbon emissions have gone up by more than 50% in recent years because of its AI energy needs; by 2026, this will be about the energy demand for Japan, which is the #5 country in terms of annual energy consumption. Given that we’re facing both an energy crisis and a climate crisis, widespread use of AI will make both worse, so much so that lawsuits are being filed or contemplated related to this environmental impact.

      I hadn't previously considered the environmental side of AI, but it’s alarming that the energy demands of LLMs are so massive. Training AI and running data centers at this scale clearly has a real impact on both energy consumption and carbon emissions. Comparing Google’s AI energy use to the consumption of an entire country really puts things into perspective.

    2. but to give up on reason and whatever other intellectual powers we possess is to give up on being human.

      This is a strong point and a good one. I agree that being able to reason is the superpower of humanity. While I think that students at times rely on generative AI for school assignments, a lot of reasoning still happens outside of the classroom, in relationships, family life, commitments and jobs. It could be valuable to think of ways to make academic reasoning more "human," so that students are able to treat it like they would any other situation.

    3. Likewise, it is very surprising when ChatGPT can’t even beat a 1977 Atari gaming machine in chess. Sure, ChatGPT also isn’t designed to be a chess engine, but you would expect something that is otherwise so capable—even superhuman for many tasks—to not fail so badly at games that rely on critical thinking and planning.

      This was surprising to read! This shows me how AI can seem quite advanced in some areas but fail badly in others. ChatGPT isn’t actually reasoning; it’s predicting human language, not planning moves like a chess engine. For me, this is a reminder that even though AI presents itself as 'intelligent', like everything else in the world, it has its limits.

    4. but there are many important but invisible tradeoffs in using it, especially in the classroom.

      The mention of "invisible tradeoffs" stands out to me. Oftentimes, with students that are trying to get away with using AI when it is prohibited, it seems that they either do not care about the "tradeoff" or do not realize that it is happening. By calling it "invisible," I think that the text captures this phenomenon very well. An interesting research angle would be the psychology of plagiarism with AI. What leads a student to engage in this tradeoff and is it invisible to them?

    5. As with AI art, AI writing can look weirdly the same no matter which AI app created it.

      I've noticed that a lot of AI-generated writing has a polished but formulaic feel, almost like it’s missing the quirk and unique voice that makes human writing engaging, especially the em dashes dropped mid-sentence. For me, that sameness is one of the reasons I’m cautious about relying on AI for creative work. It can be useful for brainstorming or structuring ideas, but if everything starts to sound the same in our writing, we risk losing the diversity of voices and styles that make writing meaningful.

    6. On the other hand, universities aren’t vocational schools, merely training students for future jobs.

      This point indicates that we need to answer the question, "what are universities for?" to best understand the role of AI in courses. I think that an AI policy at any level of schooling needs to be grounded in what the course is intended for.

    7. What is the value of free time—it sounds good, but considering that we waste a lot of our free time already, is doing more of that valuable?

      We all want more free time, and it sounds great in theory. But in today’s culture, so much of it goes into doom scrolling through social media or binge watching on streaming services. Maybe the real issue isn’t how much free time we have, but how we use it. Even if AI gave us extra hours of freedom, would most people actually use them productively to grow and learn — or would it just turn into more wasted time?

    8. Worse, you might instead come away with a technology dependency or addiction, causing you to doubt your own intelligence and abilities. As a result, you may become unable or very anxious to think or write for yourself without the help of AI (and writing is thinking).

      AI should help us gain confidence in navigating digital tools and not lessen it. For example, if a student always relies on ChatGPT to create essays, they might freeze up when asked to write a response in class that is timed or on an exam. Instead of building their own meaning-making skills, they become anxious without the tool. This also weakens participatory culture. If people are always doubting their own voice, they stop contributing authentically to discussions with the real human perspective.

    9. You might not care for, say, history, but understanding geopolitics and the history of a particular region may help you develop business strategies in breaking into new markets and accounting for local preferences. You might not care for, say, biology, but understanding the environmental impact of your products can help not only your business but also the world you and your future children will want to live in. And you might not care for, say, philosophy, but identifying the possible ethical concerns about your products or services in advance can help you avoid stepping into a legal or public-relations landmine, even if you don’t care about doing what’s right.

      Understanding that this is part of a broader conversation regarding the ethical and personal risks of AI, I believe it connects more closely to the topic of participatory culture.

      When we expect AI to explain and solve all our problems for us or educate us on important global topics, we may unknowingly avoid conversations or other means of communication, which would more naturally help us understand what we are so eager to learn. Participating in important political conversations, for example, with those in our community through books, websites, and other sources, allows us to engage in a process rather than expecting all the answers at our fingertips. I feel as if isolating ourselves to a computer screen, talking to ChatGPT, removes the human element of participatory culture. In the same sense, does everyone engaging with AI begin a new participatory culture of AI use and generating ideas?

    10. If a random person feeds your work into an AI to copy your style and flood the market with similar content, that could kill demand for your work and blossoming career before you have much chance to build up a body of work and establish yourself as you would want.

      Yes, this is already happening in many markets. There’s no way to stop it, so it really pushes creators to develop more unique and distinctive styles to stand out.

    11. Besides, no company is immune from having its data hacked or leaked. Already, some users have accidentally let their private AI queries to be posted publicly. It’s also possible your psychological vulnerabilities and stressors could be deduced from your AI chats, which means a risk of being manipulated by bad actors.

      I completely agree. Technology itself isn’t the problem; the real issue is who controls and uses it. As I said before, no one can completely avoid modern technology, and it’s impossible to refuse it in today’s world. What we can do is ensure proper regulation, standards, and oversight so that AI is used responsibly and safely.

    12. But if you’re not putting in the work to receive a minimum education, then how can you know when AI is hallucinating?

      I totally agree! Just like with critical literacy, AI is a powerful tool, but it requires the user to have enough knowledge and discernment to tell when it’s wrong. I can only imagine how challenging it will be for future early childhood and elementary teachers to manage this… and yes, that sounds like a lot of work, just joking!

    13. This doesn’t necessarily mean that everyone is pushed to the same ideas, but AI “can funnel users with similar personalities and chat histories toward similar conclusions.”

      I hadn’t realized that before—it’s a really important point. I can see now how AI might guide people with similar backgrounds or habits toward the same kinds of conclusions, even without them noticing.

    14. To appreciate the difference, imagine you didn’t know English and had to rely on a translation app: how different and worse would your daily life be?

      I think people often forget the original purpose of AI tools like translation apps: to make life easier and communication more convenient. Not everyone has the time or money to learn a new language, especially just for short-term travel, so translation software provides access for the general public. It’s only a medium, not a replacement. Some may prefer to stay in their comfort zone, and that’s fine—we don’t need to force everyone to learn deeply. But for those who truly want to grow, it’s important to remember that AI is just a tool to serve us, not a substitute for our own effort.

    15. It seems so widescale that AI has been called a “mass-delusion event.” Several users have been led by AI to commit suicide.

      I think the idea of AI as a “mass-delusion event” sounds exaggerated. When I looked into “chatgpt psychosis” cases, most involved people who already had mental health challenges or were socially marginalized—these are extreme examples, not the norm. It reminds me of nuclear energy: the real danger is not the technology itself, but how people use and control it. For example, in the Windsor Castle intruder case, the key questions are not simply “AI caused this,” but rather: why did this person only listen to a machine’s encouragement? Who was truly behind that encouragement? Why would someone prefer to confide in a robot rather than a human? And why did the operators of that AI system fail to detect and report it in time? These deeper issues of responsibility and oversight are more important to examine than blaming AI for causing psychosis.

    16. You’re here at a university to develop as a human being—to become a better, more educated person and citizen of the world—and learning how to productively disagree (and to resolve that) is a critical part of education.

      I strongly agree and appreciate this perspective. This is exactly the advantage of a university environment: unlike social media, where expressing an opinion can expose personal information and invite insults or extreme reactions, universities provide a space for free, respectful expression. It’s where diverse ideas can flourish, and people can learn to disagree productively, which is essential for human progress.

    17. With the accumulated knowledge of the world now at their fingertips, students need their teachers more, not less.

      I see your point about students needing teachers more in the AI age, but I think this mainly applies to younger students. For university and graduate students, reliance on teachers is often limited. In my own experience, most of my university courses were largely self-directed; lectures and teaching methods didn’t suit classes of 50+ students. Students sometimes had to focus on navigating relationships with professors to get higher grades, rather than deeply learning content. Good professors are rare, and even when found, individual guidance depends on the match between the teacher’s style and the student’s needs. In extreme cases, like in some Asian graduate programs, students may even feel pressured to help professors with personal tasks to gain favor, which can compromise the purity of academic learning.

    18. For instance, AI prompt engineering went from the “hottest job in 2023” to “obsolete” in two years.

      This accelerates a “fast-food era” of work, where adaptability is crucial. AI is now indispensable in many workplaces, so the question is not whether we use it, but how we use it. What we need is an AI critical literacy curriculum: learning to master AI tools, apply them effectively, and develop critical thinking skills through this process.

    19. This isn’t just a psychological effect, but research is showing that relying on AI can change how your brain works in a way that can resemble brain damage. Some people have already been involuntarily hospitalized for “ChatGPT psychosis”, which even the tech industry acknowledges is a big problem.

      Out of everything I've read thus far in this article, this excerpt made me pause and truly reflect. I find it alarming that relying on AI could have such serious cognitive and psychological consequences. The idea that AI use could actually change the way our brains function, or even trigger severe mental health issues like “ChatGPT psychosis”, makes me question how we integrate these tools into daily life and education. For me, this emphasizes the importance of setting boundaries, using AI deliberately rather than excessively, and ensuring that we maintain our own reasoning and critical thinking skills. It also makes me think that conversations around AI shouldn’t just focus on convenience or productivity, but also on mental health and long-term cognitive well-being.

    20. AI can be more creative, especially if you’re a person who’s not that creative to begin with. Not everyone is, and that’s ok. With AI, you can now do things that you previously couldn’t, such as to effortlessly create art and music, even if you have no skill or training.

      I can definitely see the value in that perspective. For people who don’t feel naturally creative or haven’t had training, AI can open up possibilities that were previously out of reach, like making art, music, or written work with ease. For me, that’s exciting because it lowers barriers and allows more people to experiment, express themselves, and engage with creative processes. At the same time, I think it raises interesting questions about what creativity really means—if AI is generating something for you, is it your creativity, the AI’s, or a mix of both? I feel like the key is how we use AI: it can be a tool to enhance our ideas, explore new possibilities, and build skills we wouldn’t otherwise develop. For someone like me, who might struggle with certain creative tasks, AI could act as both a learning aid and a springboard for personal expression, as long as I remain actively involved in shaping the final product.

      However, as someone who is very creative and values creative expression as something uniquely human, I would be lying if I said that AI's capacity to write books and create art didn't concern me. Part of what makes creative work meaningful for me is the process: the struggle, the experimentation, and the personal choices that shape a piece of art, music, or writing. With AI, there’s a risk that these processes could be shortcut or devalued, producing work that is polished but lacks the nuance and personal perspective that comes from human effort. I worry that if AI becomes the default way to create, it could shift expectations and standards, making it harder for people like me, who take pride in crafting ideas from scratch, to have our work recognized or appreciated. At the same time, I also see that AI could be a tool if used intentionally, but for me, the concern is that it might overshadow the very human creativity that defines our contributions and distinguishes our work from what a machine can generate.

    21. Efficiency is only one of many goals we can have; it’s not the only goal, even if some people have a fetish or obsession with it. For instance, right away, efficiency is biased toward things that can be easily measured, i.e., activities and outcomes that can be turned into metrics to optimize. But not everything important is easily quantifiable, such as subjective features.

      In education, efficiency often prioritizes outputs—grades, test scores, word counts—while undervaluing subtler aspects of learning like curiosity, creativity, or ethical reflection. Lin is warning that not everything of value, such as the joy of learning, fits into measurable categories.

    22. Using Grammarly in school vs. using LLMs in school. Grammarly used to be for grammar and wordsmithing, but now it uses LLMs and can help students slip past AI detectors and much more. If it was ok to use Grammarly when it just corrected grammar and awkward sentences, which still used some AI but not LLMs, what exactly is the difference between that and ChatGPT?

      This raises an important question about shifting boundaries in technology use: where do we draw the line between permissible assistance and impermissible outsourcing? Grammarly functioned as a tool, polishing grammar, correcting mechanics, and smoothing awkward / lengthy sentences without replacing a student's intellectual labour.

    23. Therefore, we need to strike a balance between the two possibilities if we don’t know how the future will play out. This means an open conversation about AI’s pros and cons, since ultimately it will be your decision to use AI in your coursework or not, even when an instructor prohibits it.

      Finding a balance with using AI in school is so important. It's not just about whether or not we use it, but also understanding the risks and benefits so that we make informed, smart choices. Even if an instructor discourages or bans AI, like the author says, we still make the decision to use it or not. So, learning to decide responsibly and thoughtfully seems just as important as learning how to use the tool itself. For me, this highlights that part of being prepared for the future is developing judgment, not just technical skills.

    24. And what would be a competitive advantage in a job market filled with AI wranglers? In a word, that’s authenticity. Having a different, special perspective will separate you from the masses who are using AI to produce more or less the same content with the same, ordinary, and generic voice. Being different and authentic would best help you to contribute new ideas to your work, not old ideas that are recycled and repackaged by AI,

      If most workers rely on AI for content, then originality, voice, and unique perspective become scarce—and therefore valuable. The warning is that AI tends to reproduce “average” responses, so individuals who cultivate their own ideas and styles will stand out in a crowded job market. This places education in a new light: the classroom becomes less about mastering efficiency and more about nurturing individuality, creativity, and critical perspective. There is a true advantage in imaginative contributions.

    25. If it’s a game-changer that will be regularly used in future jobs, then students will need to know how to use it expertly; thus it may be premature and potentially a disservice to students to ban AI in the classroom. Without learning how to “wrangle” AI, you could be at a competitive disadvantage once you graduate and enter the workforce.

      I read this as a strong case for why banning AI in the classroom could actually harm students. If AI is going to be central to the workforce in the future, it is a skill that students need to learn to use responsibly, much like digital literacy.

    26. It's better to know things and have them ready to connect to other things you know, in order to generate new insights. This ability to synthesize information is often held up as one of the most important skills needed for the future.

      Memorization or familiarity with facts is not about stockpiling trivia, but about building a mental library that allows students to make unexpected connections. AI can provide quick answers, but it cannot replace the deeper cognitive skill of synthesis—linking disparate ideas into something original.

    27. In olden times, classrooms banned the use of calculators for math classes, at least at the pre-calculus levels where it was still important for students to learn how to do basic operations for themselves. The usual rationale is laughable now; we were told, “You won’t always have access to calculators!” And that was mostly true…until the rise of mobile phones in the 1990s and then smartphones in the 2000s, when calculators became a standard feature on this one device we always had on us.

      What once seemed like a reasonable restriction now looks outdated, since calculators became universally accessible. The comparison is that while banning calculators seems absurd to our generation (the average university student in 2025), the rationale for banning or limiting AI is not the same. Calculators automate mechanical processes, while AI risks automating the cognitive and creative processes that are central purposes of education. Thus, there are necessary skills that must be built before the technology is introduced.

    28. Protecting your privacy and intellectual property (IP) are also ethical and practical concerns. Your AI queries contain valuable clues about you and your dispositions, which could be exploited by marketeers or even weaponized. For instance, say you were writing about some politically charged topic, such as Gaza or abortion or immigration: the fact that you were exploring certain positions, even if you don’t believe them but just wanted to better understand them, could be used against you in any number of possible situations,

      The vulnerability of personal data and intellectual property should be a major consideration for students' use of AI. It may not feel relevant or real, and perhaps can be easily ignored, but every query leaves a digital trace that reveals sensitive information about a person's values, beliefs, and thought processes, which can then be commodified, surveilled, or even weaponized against you. These tools often profit from exploiting user data - it's a matter of digital safety and civic responsibility.

    29. Punting your work to AI, whether in school or at a job, also means depriving yourself of the personal satisfaction that comes from achievement and knowledge, such as actually drawing an artful image instead of typing in words that gets an AI to produce the same thing.

      There is an intrinsic reward inherent in the effort and mastery of a skill, reminding us that learning is not only about external outcomes, like getting a higher grade, but also about internal fulfillment. The value is not just in the final product, but in the act of honing a skill and experiencing growth. When outputs are constantly outsourced, the resulting achievements become hollow victories.

    30. Worse, LLMs sound confident in their outputs, even when they’re factually wrong, and this makes it even harder to know what claim needs to be double-checked.

      One of the most deceptive qualities of AI is its rhetorical confidence. AI's fluent prose can easily persuade readers even when the content is inaccurate or fabricated. This misplaced confidence can blur the line between trustworthy information and misinformation, making it difficult for students to exercise critical judgement about what to verify.

    31. Research is also showing that AI is homogenizing our thoughts, i.e., our ideas are “regressing to the mean” or collapsing to the most popular or average takes, which means unoriginal ideas. This doesn’t necessarily mean that everyone is pushed to the same ideas, but AI “can funnel users with similar personalities and chat histories toward similar conclusions.”

      There is a subtle but serious intellectual risk: the narrowing of thought. By design, large language models produce outputs that reflect the most statistically probable or average responses, which discourages originality and nuance. This endangers diverse perspectives and flattens them into predictable patterns shaped by prior data or algorithms.

    32. The university is a gym for your mind. For both body and mind, your abilities and skills atrophy or decline when they’re not used, like a dead limb, for efficiency and energy-savings. This deskilling is already happening with doctors and other professionals, not just students.

      This metaphor frames education as a kind of mental training ground, where the workout is, quite simply, doing the reading, writing, and critical thinking, and as a result, building intellectual strength. Just like any muscle in the body, the mind weakens, cognitively speaking, with over-reliance on AI. Extending this concern to doctors and other professionals connects to a larger debate in educational philosophy about resilience, discipline, and the long-term costs of outsourcing effort to machines. I see Lin's point as preserving the mental fitness necessary for intellectual and professional life.

    33. To begin with, why are you at a university in the first place? Maybe you’re here just to get training for a particular job or career path, but this education is generally not free in America, and it likely costs you or your parents a substantial amount of money every year. Even if you have scholarships or a full ride (which someone is paying for), you still incur opportunity costs, i.e., the loss of other things you could have been doing with your time if you weren’t here.

      University is not costless; it involves financial investment. By raising this point, Lin is highlighting that education is a scarce and valuable resource, not something to be treated lightly or bypassed with shortcuts like AI. It reframes AI misuse as not just a violation of rules, but as a kind of self-sabotage that undermines the very reason for being at university, essentially pushing students to reconsider whether leaning on AI aligns with their educational goals or wastes the financial sacrifice being made to afford this opportunity.

    34. There are other possible benefits, of course. AI can help tackle hard problems facing the world, such as climate change, energy sustainability, diseases, and other serious challenges, even aging and death itself. Some predict the end of scarcity (of food, energy, etc.) because of AI and therefore the end of wars once there’s radical abundance.

      This gestures toward the almost utopian promises often associated with AI, listing global crises like climate change, disease, and scarcity. The scale of these claims contrasts sharply with the classroom context: while AI may one day help solve humanity's hardest problems, it does not necessarily follow that it should solve a student's homework struggles. Framing AI as some saviour of civilization is inappropriate because it obscures AI's limitations, risks, and unintended consequences. What might be lost in human intellectual development?

    35. AI can be smarter or more informed, especially if you’re a person who’s not that educated to begin with. With AI, you can now produce well-written, grammatically correct, thoughtful papers and other content, which might have been a great struggle for you before.

      This captures both the appeal and the danger of AI in education. On the one hand, it democratizes access to polished language and complex information, ultimately giving students who struggle with writing mechanics or academic conventions a powerful tool. It acknowledges the real barriers that less experienced or less confident learners face, which may feel liberating for these students. However, the bottom line is that while AI masks those struggles, it does not resolve them. Students skip over the process of learning how to write and think for themselves, leaving them entirely dependent on a system that performs the task for them - so there is disempowerment through dependence.

    1. ‘Right side,’ the blind man said. ‘I hadn’t been on a train in nearly 40 years. Not since I was a kid. With my folks. That’s been a long time. I’d nearly forgotten the sensation.

      The blind man still remembers the sensation of a train ride after 40 years of not being on one.

      This can be related to the documentary Crip Camp, as it gave him a sense of freedom doing something he hadn't been able to do for so long. https://www.youtube.com/watch?v=OFS8SpwioZ4

    1. ```python
       alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
       numbers = "0123456789"

       for letter1 in alphabet:
           for letter2 in alphabet:
               for letter3 in alphabet:
                   for digit1 in numbers:
                       for digit2 in numbers:
                           for digit3 in numbers:
                               for digit4 in numbers:
                                   code = letter1 + letter2 + letter3 + digit1 + digit2 + digit3 + digit4
                                   print(code)
       ```
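      The seven nested loops above walk the full Cartesian product of three letter slots and four digit slots. As a side note (not part of the original snippet), Python's itertools.product expresses the same enumeration more compactly, yielding codes in the same order because the rightmost slot varies fastest:

```python
from itertools import product, islice

alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
numbers = "0123456789"

def all_codes():
    # product() varies the rightmost position fastest, matching the
    # nested-loop order: AAA0000, AAA0001, AAA0002, ...
    for chars in product(alphabet, alphabet, alphabet,
                         numbers, numbers, numbers, numbers):
        yield "".join(chars)

# There are 26**3 * 10**4 = 175,760,000 codes, so printing them all
# is slow; islice lets you sample the stream instead.
print(list(islice(all_codes(), 3)))
```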

    1. Read your paper aloud to catch errors, and use spell check on your computer to correct any typos.

      Reading out loud, whether in general or when trying to memorize things, does help you catch errors or see how the writing could be better.

    2. Revising Your Body Paragraphs As you build support for your thesis in the body paragraphs, always ask yourself if you are spending your readers’ time wisely.

      We all think differently, so as you write and develop your body paragraphs, make sure every detail truly strengthens your main point and respects your readers’ time by being clear and meaningful.

    3. Make sure you draw your readers in from the beginning and follow with interesting and supportive information. If readers are not intrigued from the very beginning of the piece, they will quickly become distracted and avoid reading any further.

      You want to hook your readers from the very start and keep giving them cool and helpful info. If the beginning isn't interesting, the piece just becomes a story collecting dust. Sadly.

    4. Another helpful technique in the final revision process is to have someone read your paper aloud to you. This practice will force you to go over the material more slowly and allow you another chance to absorb the content of the paper.

      Having someone read your paper out loud helps you review it more carefully and gives you a better chance to understand and catch any mistakes.

    5. Revising and editing are two separate processes that are often used interchangeably by novice writers. Revising requires a significant alteration in a piece of writing, such as enriching the content, or giving the piece clarity; editing, however, is not as involved and includes fixing typos and grammatical errors.

      Revising means adding more details, reorganizing ideas, or making things clearer, so the message comes through better.

    1. She smoothes her hair with automatic hand,

      This line read to me as a representation of the increasing automation and lack of autonomy that is a central theme throughout The Waste Land. This woman, who has just engaged sexually with a man, is performing an “automatic” act by smoothing her hair. Sex, for this character, has become a necessity for her survival, financially, and therefore an automatic act devoid of the human feeling with which it is typically associated. This line appeared to me in conversation with The Jig of Forslin, specifically the line “women by mirrors combing out their hair,” which is positioned in a section about “thousands of secret lives” that have been revealed and laid “bare.” This idea of automation of everyday life, the transition from unique to monotony, brought me back to our discussions of industrialization, and how the overall sentiment of the poem’s moment was one where autonomy was being lost to automation. The character of the sex worker in the Fire Sermon does not suffer the terrible fate of most other women in the poem, but instead is forced to live a life where acts that should carry joy and wonder become tired and repetitive.

    2. When lovely woman stoops to folly and Paces about her room again, alone,

      This line is extracted from Goldsmith's "The Vicar of Wakefield", referring to a young woman, Olivia, who was seduced by the wicked Squire Thornhill. Tricked into a fraudulent marriage by the notorious womanizer, she is left disgraced in the eyes of society, and the stigma of her seduction taints her family's reputation by extension. Olivia thus sings a ballad of her own lament: "When lovely woman stoops to folly, / And finds, too late, that men betray, / ... / The only art her guilt to cover, / To hide her shame from ev'ry eye / To give repentance to her lover, / And wring his bosom, is-- to die" (133-4). This line encapsulates the rigid moral standards to which women are held in society: purity and virtue are held above all, thus determining their absolute value as a person. However, although Olivia is framed as a victim of male seduction, she is simultaneously blamed for her own disgrace for "stooping to folly", with the entirety of punishment ultimately falling upon the woman. Furthermore, another important message is emphasized by Olivia: "the only art" left to women who have lost their chastity is death. Dying emerges as the only feasible resolution; it acts not only as an expression of despair, but as a socially endorsed means of restoring dignity, banishing guilt, and even receiving remorse from others.

      On the other hand, Eliot echoes a similar notion of female disgrace in "The Fire Sermon". The typist, passive and disengaged during her forced sexual encounter with the clerk, offers no resistance but equally no desire to his intimate advances. As a result, her moral reputability is irrevocably tarnished; while she was once "lovely", she is now reduced to a hollow emblem of sexual exploitation and impurity. Additionally, she condemns herself to emotional isolation, "pac[ing] about her room again, alone" (line 253). In this moment, Eliot not only depicts the moral disrepute of this singular woman in the modern wasteland, but gestures towards a broader commentary on the tainted condition of womanhood, stripped of its agency and dignity.

    3. Old man with wrinkled female breasts, can see

      I find it interesting how Tiresias is between man and woman. According to Ovid and Lempriere, Tiresias lived for seven years as a woman after striking two mating snakes. This unique experience, which allowed him to settle the dispute between Jove and Juno about sexual pleasure, makes him the ultimate witness. His "wrinkled female breasts" and status as an "old man with wrinkled dugs" later in line 228 signify his union of male and female, allowing him to embody both the male "clerk" and the female "typist." We've seen Eliot experiment with characters who shift from man to woman, but never embody both. By embodying both, Tiresias unites the two genders. I'm even more intrigued by the fact that Tiresias is blind ("condemned to never-ending night," as stated by Ovid) and a prophet. Tiresias is not only between man and woman ("throbbing between two lives," line 218), but moving between the past and the present. His loss of sight almost strengthens the truth he sees by enabling him to witness the flaws of both man and woman, and of the modern day. Is Eliot suggesting that these two aspects can only unite successfully when one is blind to worldly concerns?

    1. We can use dictionaries and lists together to make lists of dictionaries, lists of lists, dictionaries of lists, or any other combination.

      Although this is a great thing to do, you should never use a list as a key of your dictionary. Lists are mutable (you can add or remove elements), so Python refuses them as keys outright: trying raises a TypeError because lists are unhashable, and a mutable key could not be looked up consistently later. Instead, please use tuples, as they are immutable.
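      A quick sketch of the point above: Python rejects a list key immediately (lists are unhashable), while a tuple holding the same items works fine.

```python
# Trying to use a mutable list as a dictionary key fails immediately:
try:
    lookup = {["a", "b"]: "value"}
except TypeError as err:
    print(err)  # unhashable type: 'list'

# An immutable tuple holding the same items works:
lookup = {("a", "b"): "value"}
print(lookup[("a", "b")])  # value
```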

    1. un rey viejo pero insensato

      A person with years of experience in positions of responsibility may believe they need no one's advice, and become foolish because they feel confident and secure. But they will quickly learn that power and fame, without humility or wisdom, are fleeting, and in this case even a young man, perhaps once imprisoned by him, can come to take his place.

  2. social-media-ethics-automation.github.io
    1. Zero-based numbering. September 2023. Page Version ID: 1176111995. URL:

      BCPL (a precursor of C) used zero-based arrays because it was memory efficient. An array element at index 0 could be accessed directly at the address of p + 0. Since then, a lot of languages copied this convention so computer scientists start everything from 0, rather than 1.

      As a math & cs double major, this is very interesting. In my cs classes, we start proofs with n = 0 while we start our natural number n = 1 in math classes. So, I need to mention what convention I use at the start of my proof.
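      A tiny sketch of the offset arithmetic behind zero-based indexing described above; the base address and element size here are made-up numbers for illustration:

```python
ITEM_SIZE = 4  # assumed bytes per array element (illustrative only)

def element_address(base, index, item_size=ITEM_SIZE):
    # With zero-based indexing, element i sits at base + i * size,
    # so index 0 needs no offset arithmetic at all.
    return base + index * item_size

print(element_address(1000, 0))  # 1000
print(element_address(1000, 3))  # 1012
```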

    2. Shannon Bond. Elon Musk wants out of the Twitter deal. It could end up costing at least $1 billion. NPR, July 2022. URL: https://www.npr.org/2022/07/08/1110539504/twitter-elon-musk-deal-jeopardy (visited on 2023-11-24).

      Elon Musk entered a merger agreement with Twitter's board that said that if he tried to back out of buying Twitter, he could be liable for up to a billion dollars. But despite this, he's potentially setting himself up to back out of the deal, citing Twitter's bot presence being too high (which Twitter denies).

    1. disillusionment with their ideas of war "i thought the british army could never retreat"

      killing, despite seeing the enemy as people

      racism within the army (british/indian)

    1. view of war as glorious, steeds and honourable deaths descriptions of the nationwide love for the war

      seeing war as a noble escape from the life they're living right now, giving them a chance to avoid life for a while; partially aware of propaganda

      seen as a thing to do with friends

  3. social-media-ethics-automation.github.io
    1. Brian Whitaker. Oman's Sultan Qaboos: a classy despot. The Guardian, March 2011. URL: https://www.theguardian.com/commentisfree/2011/mar/04/oman-sultan-qaboos-despot (visited on 2023-11-17).

      I found myself interested in the image of a ruler posing to be benevolent and cultured while really being ignorant and dismissive towards their people. Specifically the detail of how difficult it is for the people of Oman to assemble and to speak out makes me understand the connection between the Sultan and social media bots. If social media congregation is the only reasonable way for people to speak out against a neglectful government, it makes the ethical question of automated bots a bit more complicated.

    1. the ultimatum being completely unachievable; limits to Serbian sovereignty, with Austria-Hungary wanting to participate in their investigations and wanting censorship within Serbia

      Germany didn't want a super harsh ultimatum, because military action against Serbia means military action against Russia

    1. Wastewater monitoring was more effective than clinical testing in the early detection of specific variants, with notable delays observed in clinical surveillance.

      Wastewater gave faster signals than clinical sequencing

    2. Wastewater sampling from aircraft tanks originating from China started on January 13, 2023, at Stockholm Arlanda

      Aircraft wastewater sampling began immediately after EU recommendations.

    1. emphasis on Serbian nationalism and self-determination; the Supreme Central Directorate controls all info about the group, though there are also regional cells; harsh punishments and extreme secrecy; wants control in the "serbian regions" full of people who are not Serbian (Croatia, sea-coasts, and the like)

    1. # Go through the tweets to see which ones have curse words
       for mention in mentions.data:
           # check if the tweet has a curse word
           if predict(mention.text)[0] == 1:
               # if it did have a curse word, put it in the cursing mentions list
               cursing_mentions.append(mention)

      I remember learning about some of this stuff in AP Comp Sci Principles. When we were hearing about automated bots that go through social media and take specific actions, and then further provided the steps to run code to make that happen, I started trying to put the steps of the code together in my mind. I figure you need to iterate through a list to look for particular phrases, which you'd set within another list, along with a for loop to detect your desired word in social media. When I start to get lost is when I think about scaling that to be bigger.
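      The loop described in this comment can be sketched without any machine-learning model, using a plain watch list of phrases. Every name below (the posts, the watch list) is invented for illustration; a real bot would pull posts from a platform's API.

```python
# Scan a list of posts for any phrase from a watch list.
watch_list = ["badword1", "badword2"]
posts = ["nothing to see here", "wow badword2 really?"]

flagged = []
for post in posts:
    # Lowercase the post so matching is case-insensitive.
    if any(word in post.lower() for word in watch_list):
        flagged.append(post)

print(flagged)  # ['wow badword2 really?']
```

      Scaling this up mostly means swapping the `posts` list for a stream of real posts and batching the work, while the matching loop stays the same.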

    1. a ‘duty of care’ to act respon-sibly towards the sites we excavate, as well as to the public who relies on us toproduce data that is accessible, understandable, and thoughtful. The chief wayin which we consider whether we are meeting that responsibility is through ourpractice of ‘archaeological ethics’.

      I've always thought about archaeologists, especially when I envision the landscape and truth of what we call our history. How do we know what to trust? How can we preserve this trust for future generations? It takes a lot of care to make sure that nothing slips through the cracks, and that no flaws in the artefacts manage to slip past.

    1. l

      This is ironic because the time in which this was published, when you became a wife it was automatically assumed that you'd become a mother...whether that was what you desired or not

    2. I am A Wife

      The switch from a broad classification to the proper specific noun of "A Wife" reduces aspects of herself while highlighting this one in particular, as though "A Wife" is where her story begins and ends

    1. On June 14, 2025, the defendant disguised himself as a member of law enforcement and traveled to the homes of Democratic elected officials with the intent to intimidate and murder.

      Chilling, and telling of the ease with which false identity as law enforcement can be effective.

    1. But to go to nations with whom there is no war, who have no way provoked,without farther design of conquest, purely to catch inoffensive people like wildbeasts for slaves, is an height of outrage against Humanity and Justice, that seemsleft by Heathen nations to be practised by pretended Christians

      Paine was kind of a beast. I like how he writes with such passion.


    1. If you want something super cheap, a Clarke Sweetone.  If you want something that's more of an intermediate instrument, I'd get a O’Briain Improved, Freeman Tweaked, Dixon Trad or DX005, or Hoover PVC. Those are just recommendations from experience, there are a few threads on here that have lots of recommendations. https://www.reddit.com/r/tinwhistle/comments/1fq77yf/pinned_whistle_maker_list/ https://www.reddit.com/r/tinwhistle/comments/179avhc/a_request_for_a_pinned_thread_of_all_whistle/

      Several people here recommend the Dixon DX005 (plastic) as a good mid-price starter whistle in the $50 range.

      https://reddit.com/r/tinwhistle/comments/1nsape6/thin_whistle_for_learning/

    1. List of active MAKERS unless noted RETIRED
      Abell $$$ premium wooden whistles https://www.abellflute.com/whistles/
      Alba mezzo and low whistles https://www.albawhistles.com/
      Alexander Karavaev $$ Russian
      Barterloch $$-$$$ USA, handmade whistles in D https://www.barterloch.com
      Becker RETIRED http://www.beckerwhistles.com/
      Burke $$$ premium metal whistles https://www.burkewhistles.com/
      Busman $$$ RETIRED USA handcrafted wood and polymer https://www.busmanwhistles.com/
      Carbony $$$ USA, premium carbon fiber whistles with unique offerings https://carbony.com/product-category/whistles/
      Clare $ Ireland, Generation/Feadog style whistles https://www.tin-whistle.com/buy.html
      Clarke $ England, the original tin whistle, conical https://www.clarketinwhistle.com/
      Clover handmade https://www.facebook.com/CloverFlutes01
      Dannan $ mass produced
      DeQuelery $$ Netherlands, handmade https://dequelery.nl/en/whistles/
      Erik the Flutemaker $$—$$$ exotic wood and carbon-fiber whistles https://eriktheflutemaker.com
      Feadog $ mass produced metal whistles https://feadog.ie/
      Flo-Ryan $$$ Austria, carbon fiber D whistles https://www.flo-ryan.com/
      Fred Rose $$$ UK, premium wooden whistles https://www.fredrose.co.uk
      Galeón $$—$$$ aluminum and wood whistles https://www.galeonwhistles.com/
      Gary Humphrey $$—$$$ metal whistles made to order https://humphreywhistles.github.io/
      Generation $ England, the most common mass produced whistle https://generationmusic.co.uk/
      Glenluce $$ Pakistani made wood and Sindt-style metal whistles
      Goldfinch $$ cpvc whistles https://goldfinch.eu/
      Goldie $$$ https://www.colingoldie.de/
      Harmony Flute $$—$$$ Russian, exotic wood whistles https://harmonyflute.com/product-category/catalogy/whistle/
      Hermit Hill Folk Instruments $$ handmade metal or plastic whistles to order, offers engraving https://www.hhfi.biz
      Howard $$—$$$ low D and C whistles with interchangeable mouthpieces for different tone https://www.howardmusic.co.uk/
      iVolga $$ wooden whistles including more chromatic models
      James Dominic $-$$ PVC low whistles https://james-dominic-whistles.myshopify.com
      Jerry Freeman $$ tweaked whistles https://www.ebay.com/usr/freemanwhistles
      John Laurence $$ pvc whistles https://drjohnlaurence.com/takahe-flutes
      Kerry Whistles $$—$$$ metal whistles https://www.kerrywhistles.com
      Killarney $$ Sindt-style https://killarneywhistle.com
      Labu Flutes $ Bangladesh, bamboo whistles, keyed according to XXX–OOO https://www.labuflutes.com/
      Lark $ Susato-like whistles
      Lindstruments $$ Scotland, 3D-printed whistles https://lindstruments.com/
      Lir $$ silver plated, Sindt-style https://www.lirwhistle.com 10% off STEPHANIETINWHISTLE
      MackBeth (formerly Hoover) $$$ USA, handmade one at a time, small batch https://www.mackbethwhistles.com
      MASC $$—$$$ aluminum whistles https://mascwhistles.wordpress.com
      Mazur $$ Poland, handmade by Michał Mazur https://www.facebook.com/mazurwhistles
      Roy McManus / McMaghnuis $$$ Belfast, wooden whistles, instructions on website not found https://preview.redd.it/534nq22mtrrd1.png?width=911&format=png&auto=webp&s=2bb6bb725443051265c29a972709af6a0033bbf2 http://www.roymcmanus.co.uk https://www.facebook.com/mcmanuswhistles/
      McNeela $$ Sindt-style https://mcneelamusic.com/whistles.html
      Milligan $$$ USA, handmade exotic wood and delrin whistles https://milliganwhistles.com/whistles.html
      MK $$$ premium low whistles https://mkwhistles.com
      Musique Morneaux $$$ premium wood whistles https://musiquemorneaux.com/whistlesflageolets/
      Naomi $ Chinese metal and carbon fiber whistles
      Nick Metcalf $$$ USA handmade whistles https://www.irishwhistle.com/
      Oak $ mass produced metal whistles
      O'Briain Improved $$-$$$ modified whistles https://www.obriainimproved.com/
      Ormiston $$$ Scotland, blackwood/silver whistles http://www.ormistonflutes.co.uk/index.html
      PA Music $$$$ Austria, wooden/aluminum whistles http://www.pa-music.com/en/instrument-maker/instrument/irish-whistles/detail
      Pablo Asturias $ México, PVC, aluminum by request http://www.asturiaswhistles.com/store
      Peter Worrell $$$$ UK, whistles fitted with keys for one-handed playing http://www.peterworrell.co.uk/onehandedwhistles.htm
      Reyburn $$$ USA, offering offset hole patterns https://reyburnwhistles.com
      River Whistles $ USA, 3-D printed whistles https://www.riverwhistles.com/
      Rui Gomes $$—$$$ Portugal, handmade wood and metal whistles and flutes https://soprosrg.com/en-us https://www.etsy.com/shop/Sopro?ref=seller-platform-mcnav&section_id=39375902
      Setanta $$—$$$ premium metal whistles http://www.setanta-whistles.com/
      Shaw $$ traditional tin made, wood block, conical bore, non-tunable whistles https://www.daveshaw.co.uk/SHAW_Whistles/shaw_whistles.html
      Shearwater $—$$ https://www.shearwaterwhistles.com/
      Sindt $$$ hard to find and copied by many [sindtwhistle@aol.com](mailto:sindtwhistle@aol.com)
      Siog $$ Sindt-style whistles
      Susato $—$$ USA, plastic whistles, recorders, pentacorders, dulce-duos, and more https://www.susato.com/
      Syn Whistles and Oz whistles $$ RETIRED Australia https://www.ozwhistles.com/shop/synwhistles
      S.Z.B.E. $$ Japan https://www.szbe.net/index_e.htm (the Japanese page is better maintained than the English)
      Thomann $ https://www.thomannmusic.com
      Thornton $$$ Ireland, tapered wooden whistles https://tommmymartin.wixsite.com/thorntonwhistles
      Tilbury $$ USA, aluminum whistles http://www.sprucetreemusic.com/instruments/other-instruments/tilbury-whistles
      Tony Dixon $—$$ a wide range of whistles https://www.tonydixonmusic.co.uk/
      TWZ $-$$$ Germany https://www.tinwhistle.de/tin-whistles/twz-tin-whistles-aus-eigener-fertigung/index.php
      Waltons $ Ireland, books and mass produced metal whistles https://waltonsirishmusic.com/collections/tin-whistles
      West Coast Whistle $$-$$$ Canada, metal whistles with numerous color options https://www.angelfire.com/music2/WestCoastWhistleCo/OrderPage2.html
      Weston $$ handmade wooden whistles https://westonwhistles.co.uk/?page_id=12
      Whistlesmith $—$$ USA, flute-like plastic whistles https://whistlesmith.com
      Woodi $ Susato-like whistles

      List of retailers:
      https://bigwhistle.co.uk/
      https://mcneelamusic.com/whistles.html
      https://larkinthemorning.com/collections/pennywhistles
      https://www.hobgoblin-usa.com/
      https://hobgoblin.com/
      https://www.thomannmusic.com
      https://www.justflutes.com/shop/browse/traditional-flutes-whistles
      https://www.gandharvaloka.ie/product-category/irish/whistles/
      https://www.irishflutestore.com/
      https://earlymusicshop.com/collections/tabor-pipes
      https://www.jimlaabsmusicstore.com/store/tin-whistles/
      http://www.thewhistleshop.com
      https://www.scottshighland.com/product-category/bodhrans-whistles/
      https://www.buckscountyfolkmusic.com/collections/wind-flutes-fifes-whistles-harmonicas-etc
      https://www.grothmusic.com/c-652-tin-whistles.aspx
      https://www.1to1music.co.uk/pages/whistles-and-flutes

      Useful websites:
      Forum https://forums.chiffandfipple.com/viewforum.php?f=1
      All the whistle keys and other information https://learntinwhistle.com/resources/tin-whistle-fingering-charts/
      Sheet music and forum https://thesession.org/
      Sheet music https://pdfminstrel.wordpress.com/4-sopranodescant-recorder-pdfs/
      Transposing https://janmilosh.github.io/chord-transposer/#
      Find sheet music and books https://kupdf.net/
      Christian whistler's website and forum https://praisewhistlers.org/mackhooverwhistles/MackHooverWhistles.html
      Sheet music and transcription app https://flat.io
      My account with some songs transcribed https://flat.io/geoffrey_rox

      https://www.reddit.com/r/tinwhistle/comments/179avhc/a_request_for_a_pinned_thread_of_all_whistle/

    1. Relationship status

      Relationship status would be the easiest set of data to store out of the available options. The constraint could be that users are either in a relationship or not, so it could be stored as a Boolean, with true or false indicating the status of each user.
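      A minimal sketch of storing that field as a Boolean; the field name and user records below are invented for illustration:

```python
# Each user record carries one Boolean flag for relationship status.
users = [
    {"name": "alice", "in_relationship": True},
    {"name": "bob", "in_relationship": False},
]

# The constraint (in a relationship or not) makes filtering trivial:
single_users = [u["name"] for u in users if not u["in_relationship"]]
print(single_users)  # ['bob']
```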

    1. 2. For a minimalist wiki engine, isn't it redundant to have two scripting languages and two hypermedia frameworks/libraries? Short answer: for the scripting part no, as YueScript is intended for the end user while Lua is for the developer, and because the first transpiles to the second, this combination will eventually cover a learning path between users and developers. For the hypermedia part yes, but HTMX may be replaced in the future with Datastar, once more understanding and the limits of the first are reached, while we build the infrastructure for the second, particularly for real-time apps.

      This section discusses languages and hypermedia: YueScript is for the users of the wiki, Lua is for the programmers, and YueScript converts into Lua, so one can learn little by little. It also mentions hypermedia tools like HTMX, with Datastar arriving in the future to optimize things. Still, I keep thinking about the learning process: going step by step is good, with examples, keeping in mind both beginner and more advanced levels.

    2. Non FAQ Cardumem is immature/unknown enough not to have any Frequently Asked Questions (FAQ), but here are some imagined ones to start with: 1. Why another wiki engine? There are plenty of wiki engines with a variety of features, and we have had direct experience with several of them (MoinMoin, Dokuwiki, MediaWiki, Tiddlywiki) since the early 2000's until now. Because of that, we acknowledge the software craftsmanship and dedication behind such creations from their developers and the communities around them. So, what is Cardumem's differential offering that deserves creating even more software? Cardumem is proposed in the context of the Grafoscopio community, where we have experimented with digital metatools and the notion of interpersonal wikis as a way to collect and care for personal and community knowledge and memory. Because of the connections of members in the Grafoscopio community with places in other communities and academia, our practices and infrastructures have been tested in different contexts: linguistic revitalization for indigenous communities in the Colombian Amazonas, Role Playing games, diagnosis of community learning needs in information and communication technologies, and examples (1, 2) of personal blikis (blogs + wikis), among others. Our previous and current experience with wiki engines made me wonder about new possibilities, considering that none of the previous wiki engines were: designed before the current increasing rise and awareness of hypermedia systems; designed with our particular needs, practices and context in mind. Even TiddlyWiki, the one that is the simplest and most fluent to customize, was reaching a "stress point" regarding our documentation workflows and the extension capabilities and learnability that the (primary) author of Cardumem was intending for.
Given that our needs, practices and workflows in the Grafoscopio community were feeding other communities and individuals, I thought that a deeper exploration would be interesting in order to address the required tool evolution in the Grafoscopio community and other possible future beneficiaries, like the communities and individuals that are using our customizations in Amazonas, the coffee region, Bogotá and maybe some other places, especially in the Global South. Lua and YueScript would resonate with the simplicity that exists in TiddlyWiki, and while a web server is added, to create a Multi-Page Application (or MPA) instead of a Single Page Application (or SPA), it is expected to maintain much of the cross-platform portability, thanks to the high embeddability of Lua/YueScript and the simplicity of hypermedia systems, compared to their JavaScript counterparts. The table as a single data structure in Lua, and functions as first class citizens, would preserve the uniqueness of Tiddlers for storing wiki content, appearance and functionality, and would explore the Tiddler Philosophy of providing an "algebra of information", which allows remixing "minimal units of meaning with richly modelled relationships between them" in languages beyond JavaScript.

      I understand that Cardumem is new and under development. Although it is said to be easier to use than TiddlyWiki, I think microwikis still carry a background of programming languages and user manuals, so Cardumem aims for what is most adaptable, which is fine. But speaking in common terms about programming languages, systems, technology, etc., these languages different from Java, which is very cool, also create challenges in teaching and proper accompaniment so that the tool reaches the scope that is desired.

    3. New Syntax The relation between notation and thought has been expressed repeatedly: from the practical impossibility of multiplying in Roman numerals versus the ease of doing it with Arabic numerals, to the convenience of changing between Leibniz's and Newton's notation for derivatives and how that makes some manipulations easier in particular contexts, to Kenneth Iverson's "Notation as a Tool of Thought" and its impact on APL and its successors. So, how can a new syntax make more explicit the Hypertextual Algebra we talked about before and empower memory/knowledge hypertextual/hypermedia practices? Cardumem is an exploration of that inquiry. The proposed new syntax has the particular intention of dealing with the problems of TiddlyWiki's domain-specific languages (DSL) with its operators and filters that, due to TiddlyWiki's syntax and particularities, do not generate knowledge easily transportable to other contexts outside TiddlyWiki, and vice versa, the knowledge you have from other programming languages/environments clashes with TiddlyWiki's syntax and particularities. That means that the gentle curve that TiddlyWiki provides between being a content creator and a functionality creator within the wiki, with the smooth transitions between lightweight markup languages, macros, filters and operators, is limited in the future, if one wants to use those concepts in a more general way or mix them with knowledge coming from other programming environments/languages. With Cardumem, a DSL could be implemented, which would also be simple to learn, but which embodies more general concepts such as functional programming, pipelining, data injection and transformation, and which can be more easily reused and transported in other contexts. In this way, the gentle learning curve described above is preserved, while being generalized at the same time.
The new syntax is provided by YueScript and the Mustache logicless templating system, so expressions like this should be available to select all units of information (called Dumems in Cardumem) tagged as "member", randomize them and apply a particular template/style: tagged("member") |> randomize() |> stylize("MemberTemplate") As you can see, the readability should be greatly improved over other wiki engines (including TiddlyWiki) and composability should be made more explicit. Because of that and the hypermedia metatools approach, we hope that the syntax encourages the exploration of new pragmatics and semantics, so more people traverse the gentle curve between content and functionality creator, while exploring the "algebra of hypertext" in their particular projects, like the ones we had with other wiki engines: Personal Knowledge Management, interpersonal wikis (1, 2), web portfolios, linguistic revitalization in the Colombian Amazonas, communities' memory, role playing games, among others

      The topic of languages in itself is tense; learning one brings challenges like new ways of writing and getting familiar with the symbols and the grammar, whether to interpret correctly or to translate properly, and those elements are present in this process. I think it has to go little by little so that the process of saving, organizing and improving, evolving toward clearer and easier forms, is really effective. It takes time, because learning the technical side and the functional programming topics needs clear examples, including those elements of communities and games.
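      A rough Python analogue of the pipeline expression quoted above, tagged("member") |> randomize() |> stylize("MemberTemplate"). The data and helper names are invented for illustration; Cardumem's actual DSL is YueScript with Mustache templates, not Python.

```python
import random

# Invented sample "Dumems" (units of information) for illustration.
dumems = [
    {"title": "Ana", "tags": ["member"]},
    {"title": "Luis", "tags": ["member"]},
    {"title": "About", "tags": ["page"]},
]

def tagged(tag):
    # Select all units carrying the given tag.
    return [d for d in dumems if tag in d["tags"]]

def randomize(items):
    # Return a shuffled copy, leaving the input untouched.
    shuffled = list(items)
    random.shuffle(shuffled)
    return shuffled

def stylize(items, template):
    # Apply a template to each unit.
    return [template.format(**d) for d in items]

# Plain function application reads inside-out; the |> pipe operator
# just lets the same three steps be written left to right.
result = stylize(randomize(tagged("member")), "<li>{title}</li>")
print(sorted(result))  # ['<li>Ana</li>', '<li>Luis</li>']
```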

    4. Cardumem Wiki Cardumem is a wiki engine that continues TiddlyWiki's pioneering and long-lasting exploration of an "algebra of hypertext", but goes beyond the JavaScript ecosystem by reimagining such exploration from a Hypermedia Metatools approach. In a sense, Cardumem is a homage to TiddlyWiki, while (re)thinking/extending its deep ideas from another angle. Cardumem is a prototype of a minimalist extensible wiki engine, inspired by TiddlyWiki and backwards compatible with its data, made by combining Lua/YueScript + HTMX/Datastar, for server-side hypermedia programming, instead of client-side JavaScript. Cardumem tries to introduce new syntax for what Jeremy Ruston, author of TiddlyWiki, would call an "algebra of hypertext", while supporting various of the pragmatics (practices) we have in several communities and projects that already use TiddlyWiki, and hopefully empowering the practices we care more about and introducing new ones. With the new syntax we try to support a similar gentle curve between being a content creator and a functionality creator, like in TiddlyWiki, while implementing a more parsimonious design with syntax/concepts that can be applied more fluently outside our wiki engine (like data piping, templating and functional programming). A metatool is a tool that is used to describe, build and modify itself and eventually other tools. Because of that, metatools are particularly useful to build custom workflows. While Cardumem at the moment is not built in itself, one important objective is to use the lessons of the Grafoscopio community building metatools, to bootstrap first the usage of Cardumem in a particular community and then the meta properties of the tool, so it can be modified for such community.
In that sense, ours is also a practical and embodied inquiry and reflection in resonance with the Malleable Systems Collective (in fact, we found such collective after finishing the PhD research that led to the Grafoscopio metatool and the community around it)

      I find it interesting how, starting from one tool, an improved one can be created, and how the underlying goal is always to solve problems and make users' needs and ways of working easier and more flexible, since these elements are implicit in access to information.

    5. Cardumem is proposed in the context of the Grafoscopio community, where we have experimented with digital metatools and the notion of interpersonal wikis as a way to collect and care for personal and community knowledge and memory. Because of the connections of members of the Grafoscopio community with places in other communities and academia, our practices and infrastructures have been tested in different contexts: linguistic revitalization for Indigenous communities in the Colombian Amazonas, role-playing games, diagnosis of community learning needs in information and communication technologies, and examples (1, 2) of personal blikis (blogs + wikis), among others.

      You can see that Cardumem was not born out of technical curiosity alone, but responds to real community needs. I find it very valuable that they connect it with community memory and language revitalization; this means the program can also have a social and human impact.

    6. With Cardumem, a DSL could be implemented that would be similarly simple to learn, but which embodies more general concepts such as functional programming, pipelining, and data injection and transformation, and which can be more easily reused and carried over to other contexts. In this way, the gentle learning curve described above is preserved while being generalized at the same time.

      What I understand is that they want a simpler, more practical language, because that way more people can use it without getting tangled up. If the way we write information is clear, so will be the way we organize our ideas, and that seems very interesting to me.
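The "functional programming, pipelining, data injection and transformation" the quote describes can be sketched in miniature. The following Python sketch is illustrative only: the `pipe` helper, the entry shape, and the `[[...]]` link syntax are assumptions for the example, not Cardumem's actual API or DSL.

```python
from functools import reduce

def pipe(data, *stages):
    """Thread a collection through a sequence of transformation stages."""
    return reduce(lambda acc, stage: stage(acc), stages, data)

# Hypothetical wiki entries: a title plus tags, loosely TiddlyWiki-style.
entries = [
    {"title": "Home", "tags": ["nav"]},
    {"title": "Lua notes", "tags": ["dev", "lua"]},
    {"title": "HTMX recipes", "tags": ["dev"]},
]

# Each stage is a plain function: filtering, projecting, templating.
tagged_dev = lambda es: [e for e in es if "dev" in e["tags"]]
titles     = lambda es: [e["title"] for e in es]
as_links   = lambda ts: ["[[" + t + "]]" for t in ts]

links = pipe(entries, tagged_dev, titles, as_links)
# links == ["[[Lua notes]]", "[[HTMX recipes]]"]
```

Because every stage is just a function over plain data, the same composition idea carries over to a server-side Lua/YueScript DSL, which is the kind of gentle slope from content creator to functionality creator the passage argues for.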

    7. Cardumem is a prototype of a minimalist extensible wiki engine, inspired by TiddlyWiki and backwards compatible with its data, made by combining Lua/YueScript + HTMX/Datastar, for server-side hypermedia programming, instead of client-side JavaScript. Cardumem tries to introduce new syntax for what Jeremy Ruston, author of TiddlyWiki, would call an "algebra of hypertext", while supporting many of the pragmatics (practices) we have in several communities and projects that already use TiddlyWiki, hopefully empowering the practices we care about most and introducing new ones. With the new syntax we try to support a gentle curve between being a content creator and a functionality creator, like TiddlyWiki's, while implementing a more parsimonious design with syntax/concepts that can be applied more fluently outside our wiki engine (like data piping, templating and functional programming).

      What catches my attention is how Cardumem is presented: as a kind of continuation of TiddlyWiki, but with a broader focus. I find it interesting that they call it an "algebra of hypertext", because it is not just about linking information as in any wiki. It gives the sense that it is not only a program but also a way of reflecting on how we organize knowledge. This invites us to consider that it is not merely a technical tool, but a space for experimenting with new ways of learning and sharing.

    1. and commonly some of us used to get up a tree to look out for any assailant, or kidnapper, that might come upon us; for they sometimes took those opportunities of our parents’ absence, to attack and carry off as many as they could seize

      Growing up having to expect kidnappers constantly is crazy

    2. having slightly reprimanded me, ordered me to be taken care of, and not to be ill-treated

      This punishment is so slight in comparison to those that come later.

    3. even to the wretchedness of slavery.

      This was a very powerful and engaging narrative. There isn't really much to say about it - it is simply a recounting of the horrors of the transatlantic slave trade. I think we all know that this stuff was horrible, but reading a firsthand account makes it very real.

    4. O, ye nominal Christians! Might not an African ask you: learned you this from your God?

      For real. Nominal Christians is such a good way to describe it. There is no genuine belief amongst these people.

    5. This produced copious perspirations, so that the air soon became unfit for respiration, from a variety of loathsome smells, and brought on a sickness amongst the slaves, of which many died, thus falling victims to the improvident avarice [extravagant greed], as I may call it, of their purchasers

      Isn't this so common from the greedy and wicked? Causing extensive harm to others but also to their own enterprise out of sheer greed and ignorance.

    6. the white people looked and acted, as I thought, in so savage a manner, for I had never seen among any people such instances of brutal cruelty; and this not only shown towards us blacks, but also to some of the whites themselves. One white man in particular I saw, when we were permitted to be on deck, [was whipped] so unmercifully with a large rope near the foremast, that he died in consequence of it; and they tossed him over the side as they would have done a brute.

      Isn't it ironic? But seriously imagine the absolute cruelty and barbarity of British sailors. The narrative of savagery or even demonic nature could much more easily be applied to white people. What made them not savages was their supposedly superior society and technology, not their behavior.

    7. lest we should leap into the water: and I have seen some of these poor African prisoners most severely cut for attempting to do so, and hourly whipped for not eating.

      The conditions of the slave ships were so unbelievable. I think that this is likely why banning the trade was often the first step taken in abolishing slavery.


    1. The gravestones crowding the island, demarcating cemeteries, built into the walls of mills and pathways, or emerging in the middle of a field of sugarcane, do much more than record the names of wealthy British families who made Barbados ‘home’

      this certainly paints a visual picture and helps readers see the perspective

    1. This longstanding question is known in psychology as the nature versus nurture debate. It seeks to understand how our personalities and traits are the product of our genetic makeup and biological factors, and how they are shaped by our environment, including our guardians, peers, and culture.

      This explains the nature versus nurture debate, which looks at how much of who we are comes from genetics and biology (nature) and how much comes from our environment and experiences (nurture). It helps us understand why people develop different personalities and traits.

    1. Medication management is another area where AI can play an important role in empowering patients. By analysing patient data, such as prescription histories and vital signs, AI algorithms can help healthcare providers improve medication management and reduce the risk of adverse drug events. This can improve patient safety and lead to better health outcomes.

      I agree, yes and no. Medication is one area where AI could help empower patients with their health, but the fact is that AI does not know how to track your progress, or the way you react, or how you might change with a medication. AI can only help so much when it comes to human stats and results.

    2. AI has the potential to bring about positive changes in healthcare and to empower patients by providing them with more control over their health.

      AI does have the potential to bring positive changes and to empower patients with more control over their health, but humans are more likely to keep you on the right path for your health. AI can only give you so much, whereas humans can give you more, run more tests, and find more answers for you.

    3. AI algorithms may also generate questions that are too easy, too difficult, or not relevant to the course material.

      This is something educators should keep in mind when using AI, and it also shows that they shouldn't rely completely on AI to generate the entirety of their coursework or exams.

    4. AI algorithms can also personalize exams by analysing student performance data and generating questions that focus on areas of weakness, thereby improving student learning

      AI can really be helpful for students' education and learning; AI analyzes a student's performance data in order to focus on the topics/areas that need more practice.

    5. This information can help patients better understand their health and make informed decisions about their care. Another important application of AI in healthcare is remote monitoring. With AI-powered remote monitoring systems, patients can have their vital signs tracked and monitored, alerting healthcare providers to any potential issues. This can lead to earlier intervention and improved patient outcomes, as well as reducing the need for in-person visits to healthcare facilities.

      This paragraph talks about AI and how it can help patients better understand their own health and be involved in their own care. With the help of AI, patients can make more educated decisions about their care and understand their current state of health. It also mentions how AI-powered remote monitoring systems can detect and track a patient's vital signs to alert a healthcare provider when there is an issue. This can result in better outcomes and earlier intervention, and also less need for in-person patient visits, which helps increase access to care. This is important because it shows how AI helps patients take a more active role in their own health management while also helping healthcare providers. It goes to show how technology can improve patient-provider communication while also enabling fast responses to medical emergencies and preventative treatment.

      Dave, M., & Patel, N. (2023). Artificial intelligence in healthcare and education. British Dental Journal, 234(10), 761–764. https://doi.org/10.1038/s41415-023-5845-2

    6. One of the key ways that AI can help is by detecting and preventing errors in medical care. AI algorithms can be trained to analyse medical records, identifying errors or potential risks such as misdiagnoses, incorrect treatments, or adverse events. This information can be used to help doctors prevent similar errors from happening in the future. Another way AI can be used is through clinical decision support. AI algorithms can be designed to provide doctors with real-time guidance and recommendations based on patient data, helping them to make informed decisions and reducing the risk of errors. This kind of technology can greatly benefit doctors who are facing complex cases and require quick access to relevant information.

      This paragraph talks about how AI can help increase patient safety in relation to medical errors. It explains how AI algorithms can analyze medical records and identify errors or potential risks, including misdiagnoses, incorrect treatments, or other threats, so that healthcare providers do not repeat errors and the quality of care improves. It also discusses AI as a clinical decision support tool: in this role it can provide real-time guidance and recommendations specific to the patient's data. This can be really helpful for doctors handling difficult cases because they can be guided toward better decisions using data more effectively and efficiently. This is important because it shows how AI can be used to support patient safety in healthcare while demonstrating the effectiveness of technology in aiding human judgment to improve medical decision making and reduce medical errors.

      Dave, M., & Patel, N. (2023). Artificial intelligence in healthcare and education. British Dental Journal, 234(10), 761–764. https://doi.org/10.1038/s41415-023-5845-2

    1. Learning English through immersion has always been my primary source of motivation

      He is in Canada because he is motivated to learn English.

    1. Trump's action is nevertheless clearly oriented toward the opposite of warlike chaos, including in Ukraine.

      His fight against woke nihilism is under way and covers the full extent of the disaster that was put in place, from transgender people in the military to massive illegal immigration, by way of the great vaccine corruption.

      Todd's central thesis is the quasi-annexation by Israel, which presupposes that the attacks on Hezbollah, Iran, and Qatar were carried out with his prior agreement, as was the relentlessness in Gaza. That seems highly implausible, and things are far more complicated than that, as they say.

      And then, are Trump's humiliations and outright attacks on the Europeans not rather reasonable retaliation against European folly, and also justified revenge on the European leaders who all leagued together against his election?

    1. As Peter L. Berger (1963, pp. 23–24) notes in his classic book Invitation to Sociology, “The first wisdom of sociology is this—things are not what they seem.” Social reality, he says, has “many layers of meaning,” and a goal of sociology is to help us discover these multiple meanings. He continues, “People who like to avoid shocking discoveries, who prefer to believe that society is just what they were taught in Sunday School…should stay away from sociology.”

      Sociology defies what we already know, but more so it deepens what we can know.

    1. exists physically in digital technology as a string of bits,

      Yes, an excellent point. In a way there are multiple realities going on for digital documents. They exist as code (and many different layers of code if I understand the OSI model correctly) which many people would fail to interpret in a meaningful way when presented with it directly (I certainly would). Though I suppose the code itself expressed visually or in some other way could have some inherent interest. They also exist as they "are intended" by whomever created them opened by the software they were created for and, in addition, they might possibly exist opened through alternative software which could produce different (if perhaps, indecipherable) results.

    2. Suzanne Briet: Physical evidence as document

      In part, I appreciate the pragmatism of Briet's approach. It would certainly make a cataloger's life easier to view documents in this way and, on its surface, it makes a tremendous amount of "sense".

      However, I can't help but feel this view is a little too limited. Certainly, it seems to me, the antelope itself would be a source of information. In one way it is an example of what an "antelope" is, but it is also an individual and, beyond that, an individual at a certain snapshot in time.

      In a very broad view, we can think that nothing is truly permanent as all things are constantly changing. I think it depends so much on how we observe and questions of time scale.

      Human beings are not even exactly what we were in the past. We grow (both physically and in other ways), we change (we age, we change our minds, we change our clothes, we get tattoos, we erase tattoos) and eventually we, as an individual, will cease to exist by any observable means (depending on your belief system) other than by the "things" we leave behind.

      We also continue to exist, in a sense, in the minds of those who knew us, but their memories cannot be a whole picture of who we were and certainly no one may know truly how we are inside our own heads. Others will certainly bring their own biases or preferences to their memories of us which may or may not be a complete picture of who we were.

    3. This was convenient for extending the scope of the field to include pictures and other graphic and audio-visual materials. Paul Otlet (1868-1944), is known for his observation that documents could be three dimensional, which enabled the inclusion of sculpture.

      Yes! I have to say I love this characterization. I feel Art has so much to offer here in terms of expanding our thinking about what can be documents, objects, things. I have a particular interest in modern art up to our current era (however it will be named when looked at from the rear view) where anything from a found object to a blank canvas becomes a piece of art once it is placed in a gallery or museum.

      The act of placing a thing within the space gives it added significance. Thereby engaging the curators themselves into the process of defining an object. Not to mention that the artist, for whatever reason, selected this particular thing as opposed to all of the other possible things they could have selected. That is a kind of curation, we might say, the artist has engaged in.

      Despite the fact that the artist didn't necessarily create the object themselves, they may have simply picked it up out of the forest or on the street, they have made a statement that this thing means something (or some thing). It seems to me taking this expansive view is powerful and may only be tempered by the very practical concerns of how to preserve, display or catalog a thing.

    4. if the term "document" were used in a specialized meaning as the technical term to denote the objects to which the techniques of documentation could be applied,

      It seems natural to me that the definition of a document would evolve along with the technological advancements that allowed people to produce new types of documents or, at least, new types of "things". This could be a radio program at the dawn of that era or a television show at its outset and on and on to our current era.

      It strikes me that preservation (of things we would consider "documents") has typically been a reactive rather than a proactive process. Maybe there is no way around this -- perhaps since we can't predict what the next popular format (I'm using that term "format" very loosely to include everything from a social media post to a video to something that exists at least in part on a physical storage medium) we can't prepare ways to preserve new documents. However, there could be some collaboration in theory between the industries creating these new vessels for information and those that would engage in preserving them. Just a thought.

    1. eLife Assessment

      In this manuscript, the authors report the fundamental finding that a secreted ubiquitin ligase of Shigella, called IpaH1.4, mediates the degradation of a host defense factor, RNF213. The data are convincing and represent a major contribution to our understanding of cell-autonomous immunity and bacterial pathogenesis as they provide new mechanistic insight into how the cytosolic bacterial pathogen Shigella flexneri evades IFN-induced host immunity.

    2. Reviewer #1 (Public review):

      Shigella flexneri is a bacterial pathogen that is an important globally significant cause of diarrhea. Shigella pathogenesis remains poorly understood. In their manuscript, Saavedra-Sanchez et al report their discovery that a secreted E3 ligase effector of Shigella, called IpaH1.4, mediates the degradation of a host E3 ligase called RNF213. RNF213 was previously described to mediate ubiquitylation of intracellular bacteria, an initial step in their targeting to xenophagosomes. Thus, Shigella IpaH1.4 appears to be an important factor to permit evasion of RNF213-mediated host defense. Strengths: The work is focused, convincing, well-performed and important, and the manuscript is well-written. The revised version addressed all the concerns raised during the initial review.

    3. Reviewer #2 (Public review):

      Summary:

      The authors find that the bacterial pathogen Shigella flexneri uses the T3SS effector IpaH1.4 to induce degradation of the IFNg-induced protein RNF213. They show that in the absence of IpaH1.4, cytosolic Shigella is bound by RNF213. Furthermore, RNF213 conjugates linear and lysine-linked ubiquitin to Shigella independently of LUBAC. Intriguingly, they find that Shigella lacking ipaH1.4 or mxiE, which regulates the expression of some T3SS effectors, are not killed even when ubiquitylated by RNF213 and that these mutants are still able to replicate within the cytosol, suggesting that Shigella encodes additional effectors to escape from host defenses mediated by RNF213-driven ubiquitylation.

      Strengths:

      The authors take a variety of approaches, including host and bacterial genetics, gain-of-function and loss-of-function assays, cell biology, and biochemistry. Overall, the experiments are elegantly designed, rigorous, and convincing.

    4. Reviewer #3 (Public review):

      Summary:

      In this study the authors set out to investigate whether and how Shigella avoids cell-autonomous immunity initiated through M1-linked ubiquitin and the immune sensor and E3 ligase RNF213. The key findings are that the Shigella flexneri T3SS effector IpaH1.4 induces degradation of RNF213. Without IpaH1.4, the bacteria are marked with RNF213 and ubiquitin following stimulation with IFNg. Interestingly, this is not sufficient to initiate the destruction of the bacteria, leading the authors to conclude that Shigella deploys additional virulence factors to avoid this host immune response. The second key finding of this study is that M1 chains decorate the mxiE/ipaH Shigella mutant independent of LUBAC, which is, by and large, considered the only enzyme capable of generating M1-linked ubiquitin chains. These findings are fundamental in nature and of general interest.

      Strengths and weaknesses:

      The data is well-controlled and clearly presented with appropriate methodology. The authors provide compelling evidence that demonstrates that IpaH1.4 is the effector responsible for the degradation of RNF213 via the proteasome and their conclusions are well supported. They have clearly demonstrated how Shigella disarms RNF213-mediated immunity.

      This work builds on prior work from the same laboratory that suggests that M1 ubiquitin chains can be formed independently of LUBAC (in the prior publication this related to Chlamydia inclusions). Two key pieces of evidence support this statement - fluorescence microscopy-based images and accompanying quantification in Hoip and Hoil knockout cells for association of M1-ub, using an M1 specific antibody, and the use of an internally tagged Ub-K7R mutant. Whilst it remains possible that the M1 antibody is non-specific, as acknowledged by the authors, the data in supplementary figure 1, comparing K7R-ub and the N-terminally tagged K7R ub variant, provides evidence that during Shigella infection, LUBAC independent M1-ubiquitin chains are indeed formed. This represents an important new angle in ubiquitin biology.

      The importance of IFNgamma priming for RNF213 association to the mxiE or ipaH1.4 remains an interesting question that awaits future studies that compare different intracellular bacteria and the role of RNF213.

      Overall, the findings are important for the host-pathogen field, cell autonomous/innate immune signaling fields and microbial pathogenesis fields and the work is a very valuable addition to the recent advances in understanding the role of RNF213 in host immune responses to bacteria.

    5. Author response:

      The following is the authors’ response to the original reviews

      Reviewer #1 (Public review):

      Shigella flexneri is a bacterial pathogen that is an important globally significant cause of diarrhea. Shigella pathogenesis remains poorly understood. In their manuscript, Saavedra-Sanchez et al report their discovery that a secreted E3 ligase effector of Shigella, called IpaH1.4, mediates the degradation of a host E3 ligase called RNF213. RNF213 was previously described to mediate ubiquitylation of intracellular bacteria, an initial step in their targeting of xenophagosomes. Thus, Shigella IpaH1.4 appears to be an important factor in permitting evasion of RNF213-mediated host defense.

      Strengths:

      The work is focused, convincing, well-performed, and important. The manuscript is well-written.

      We would like to thank the reviewer for their time evaluating our manuscript and the positive assessment of the novelty and importance of our study. We provide a comprehensive response to each of the reviewer’s specific recommendations below and highlight any changes made to the manuscript in response to those recommendations.

      Reviewer #1 (Recommendations for the authors):

      (1) In the abstract (and similarly on p.10), the authors claim to have shown "IpaH1.4 protein as a direct inhibitor of mammalian RNF213". However, they do not show the interaction is direct. This, in my opinion, would require demonstrating an interaction between purified recombinant proteins. I presume that the authors are relying on their UBAIT data to support the direct interaction, but this is a fairly artificial scenario that might be prone to indirect substrates. I would therefore prefer that the 'direct' statement be modified (or better supported with additional data). Similarly, on p.7, the section heading states "S. flexneri virulence factors IpaH1.4 and IpaH2.5 are sufficient to induce RNF213 degradation". The corresponding experiment is to show sufficiency in a 293T cell, but this leaves open the participation of additional 293T-expressed factors. So I would remove "are sufficient to", or alternatively add "...in 293T cells".

      We agree with the reviewer and made the recommended changes to the text in the abstract, in the results section on page 7, and in the Discussion on page 11. During the revision of our manuscript two additional studies were published that provide convincing biochemical evidence for the direct interaction between IpaH1.4 and RNF213 (PMID: 40205224; PMID: 40164614). These studies address the reviewer’s concern extensively and are now briefly discussed and cited in our revised MS.

      (2) In the abstract the authors state "Linear (M1-) and lysine-linked ubiquitin is conjugated to bacteria by RNF213 independent of the linear ubiquitin chain assembly complex (LUBAC)." However, it is not shown that RNF213 is able to directly perform M1-ubiquitylation. It is shown that RNF213 is required for M1-linked ubiquitylation in IpaH1.4 or MxiE mutants, this is different than showing conjugation is done by RNF213 itself. This should be reworded.

      We agree and edited the text accordingly.

      (3) Introduction: one of the main points of the paper is that RNF213 conjugates linear ubiquitin to the surface of bacteria in a manner independent of the previously characterized linear ubiquitin conjugation (LUBAC) complex. This is indeed an interesting result, but the introduction does not put this discovery in much context. I would suggest adding some discussion of what was known, if anything, about the type of Ub chain formed by RNF213, and specifically whether linear Ub had previously been observed or not.

      We now provide context in the Introduction on page 3 and briefly discuss previous work that had implicated LUBAC in the ubiquitylation of cytosolic bacteria. We emphasize that LUBAC specifically generates linear (M1-linked) ubiquitin chains, while the types of ubiquitin linkages deposited on bacteria through RNF213-dependent pathways had remained unidentified.

      (4) Figure 3C: is the difference in 7KR-Ub between WT and HOIP KO cells significant? If so, the authors may wish to acknowledge the possibility that HOIP partially contributes to M1-Ub of MxiE mutant Shigella

      The frequencies at which bacteria are decorated with 7KR-Ub are not statistically different between WT and HOIP KO cells. We have included this information in the panel description of Figure 3.

      (5) On page 11, the authors state that "...we observed that LUBAC is dispensable for M1-linked ubiquitylation of cytosolic S. flexneri ∆ipaH1.4. We found that lysine-less internally tagged ubiquitin or an M1-specific antibody bound to S. flexneri ∆ipaH1.4 in cells lacking LUBAC (HOIL-1KO or HOIPKO) but failed to bind bacteria in RNF213-deficient cells". In fact, what is shown is that M1-ubiquitylation in ∆ipaH1.4 infection is RNF213-dependent (5E), but the work with lysine mutants, HOIP or HOIL-1 KOs are all with ∆mxiE, not ∆ipaH1.4 (3B) in this version of the manuscript. Ideally, the data with ∆ipaH1.4 could be added, but alternatively, the conclusion could be re-worded.

      We now include the data demonstrating that staining of ∆ipaH1.4 with an M1-specific antibody is unchanged from WT cells in HOIL-1 KO and HOIP KO cells. These data are shown in supplementary data (Fig. S3E) and referred to on page 9 of the revised manuscript.

      (6) The UBAIT experiment should be explained in a bit more detail in the text. The approach is not necessarily familiar to all readers, and the rationale for using Salmonella-infected ceca/colons is not well explained (and seems odd). Some appropriate caution about interpreting these data might also be welcome. Did HOIP or HOIL show up in the UBAIT? This perhaps also deserves some discussion.

      As expected, HOIP (listed under its official gene name Rnf31 in the table of Fig. S2B) was identified as a candidate IpaH1.4 interaction partner, as the third most abundant hit from the UBAIT screen. Remarkably, Rnf213 was the hit with the highest abundance in the IpaH1.4 UBAIT screen. To address the reviewer’s comments, we now explain the UBAIT approach in more detail and provide the rationale for using intestinal protein lysates from Salmonella-infected mice. The text on page 8 reads as follows: “To investigate potential physical interactions between IpaH1.4 and IpaH2.5, we reanalyzed a previously generated dataset that employed a method known as ubiquitin-activated interaction traps (UBAITs) (32). As shown in Fig. S2A, the human ubiquitin gene was fused to the 3′ end of IpaH2.5, producing a C-terminal IpaH2.5-ubiquitin fusion protein. When incubated with ATP, ubiquitin-activating enzyme E1, and ubiquitin-conjugating enzyme E2, the IpaH2.5-ubiquitin "bait" protein is capable of binding to and ubiquitylating target substrates. This ubiquitylation creates an iso-peptide bond between the IpaH2.5 bait and its substrate, thereby enabling purification via a Strep affinity tag incorporated into the fusion construct (32). IpaH2.5-ubiquitin bait and IpaH3-ubiquitin control proteins were incubated with lysates from murine intestinal tissue. To detect interaction partners in a physiologically relevant setting, we used intestinal lysates derived from mice infected with Salmonella, which in contrast to Shigella causes pronounced inflammation in WT mice and therefore better simulates human Shigellosis in an animal model. Using UBAIT we identified HOIP (Rnf31) as a likely IpaH2.5 binding partner (Fig. S2B), thus confirming previous observations (28) and validating the effectiveness of our approach. Strikingly, we identified mouse Rnf213 as the most abundant interaction partner of the IpaH2.5-ubiquitin bait protein (Fig. S2B).
Collectively, our data and concurrent reports showing direct interactions between IpaH1.4 and human RNF213 (36, 37) indicate that the virulence factors IpaH1.4 and IpaH2.5 directly bind and degrade mouse as well as human RNF213.”

      (7) It would be helpful if the authors discussed their results in the context of the prior work showing IpaH1.4/2.5 mediate the degradation of HOIP. Do the authors see HOIP degradation? If indeed HOIP and RNF213 are both degraded by IpaH1.4 and IpaH2.5, are there conserved domains between RNF213 and HOIP being targeted? Or is only one the direct target? A HOIP-RNF213 interaction has previously been shown (https://doi.org/10.1038/s41467-024-47289-2). Since they interact, is it possible one is degraded indirectly? To help clarify this, a simple experiment would be to test if RNF213 degraded in HOIP KO cells (or vice-versa)?

      We appreciate the reviewer’s suggestions. We conducted the proposed experiments and found that WT S. flexneri infections result in RNF213 degradation in both WT and HOIP KO cells. Similarly, we found that HOIP degradation was independent of RNF213. We have included these data in Figs. 5A and S3B of our revised submission. A study published during revisions of our paper demonstrates that the LRR of IpaH1.4 binds to the RING domains of both RNF213 and LUBAC (PMID: 40205224). We refer to this work in our revised manuscript.

      Reviewer #2 (Public review):

      Summary:

      The authors find that the bacterial pathogen Shigella flexneri uses the T3SS effector IpaH1.4 to induce degradation of the IFNg-induced protein RNF213. They show that in the absence of IpaH1.4, cytosolic Shigella is bound by RNF213. Furthermore, RNF213 conjugates linear and lysine-linked ubiquitin to Shigella independently of LUBAC. Intriguingly, they find that Shigella lacking ipaH1.4 or mxiE, which regulates the expression of some T3SS effectors, are not killed even when ubiquitylated by RNF213 and that these mutants are still able to replicate within the cytosol, suggesting that Shigella encodes additional effectors to escape from host defenses mediated by RNF213-driven ubiquitylation.

      Strengths:

      The authors take a variety of approaches, including host and bacterial genetics, gain-of-function and loss-of-function assays, cell biology, and biochemistry. Overall, the experiments are elegantly designed, rigorous, and convincing.

      Weaknesses:

      The authors find that ipaH1.4 mutant S. flexneri no longer degrades RNF213 and recruits RNF213 to the bacterial surface. The authors should perform genetic complementation of this mutant with WT ipaH1.4 and the catalytically inactive ipaH1.4 to confirm that ipaH1.4 catalytic activity is indeed responsible for the observed phenotype.

      We would like to thank the reviewer for their time evaluating our manuscript and the positive assessment of our work, especially its scientific rigor. We conducted the experiment suggested by the reviewer and included the new data in the revised manuscript. As expected, complementation of the ∆ipaH1.4 mutant with WT IpaH1.4, but not with the catalytically dead C338S mutant, restored the ability of Shigella to efficiently escape from recognition by RNF213 (Figs. 5C-D).

      Reviewer #2 (Recommendations for the authors):

      The authors should perform genetic complementation of the ipaH1.4 mutant with WT ipaH1.4 and the catalytically inactive ipaH1.4 to confirm that ipaH1.4 catalytic activity is indeed responsible for the observed phenotype.

      We performed the suggested experiment and show in Figs. 5C-D that complementation of the ∆ipaH1.4 mutant with WT IpaH1.4 but not with the catalytically dead C338S mutant restored the ability of Shigella to efficiently escape from recognition by RNF213. These data demonstrate that the catalytic activity of IpaH1.4 is required for evasion of RNF213 binding to the bacteria.

      Reviewer #3 (Public review):

      Summary:

      In this study, the authors set out to investigate whether and how Shigella avoids cell-autonomous immunity initiated through M1-linked ubiquitin and the immune sensor and E3 ligase RNF213. The key findings are that the Shigella flexneri T3SS effector IpaH1.4 induces degradation of RNF213. Without IpaH1.4, the bacteria are marked with RNF213 and ubiquitin following stimulation with IFNg. Interestingly, this is not sufficient to initiate the destruction of the bacteria, leading the authors to conclude that Shigella deploys additional virulence factors to avoid this host immune response. The second key finding of this paper is the suggestion that M1 chains decorate the mxiE/ipaH Shigella mutant independent of LUBAC, which is, by and large, considered the only enzyme capable of generating M1-linked ubiquitin chains.

      Strengths:

      The data is for the most part well controlled and clearly presented with appropriate methodology. The authors convincingly demonstrate that IpaH1.4 is the effector responsible for the degradation of RNF213 via the proteasome, although the site of modification is not identified.

      Weaknesses:

      (1) The work builds on prior work from the same laboratory that suggests that M1 ubiquitin chains can be formed independently of LUBAC (in the prior publication this related to Chlamydia inclusions). In this study, two pieces of evidence support this statement: fluorescence microscopy-based images and accompanying quantification in Hoip and Hoil knockout cells for association of M1-Ub, using an antibody, to Shigella mutants, and the use of an internally tagged Ub-K7R mutant, which is unable to be incorporated into ubiquitin chains via its lysine residues. Given that clones of the M1-specific antibody are not always specific for M1 chains, and because it remains formally possible that the Int-K7R Ub can be added to the end of a chain as a chain terminator or as mono-Ub, the authors should strengthen these findings relating to the claim that another E3 ligase can generate M1 chains de novo.

      (2) The main weakness relating to the infection work is that no bacterial protein loading control is assayed in the western blots of infected cells, leaving the reader unable to determine if changes in RNF213 protein levels are the result of the absent bacterial protein (e.g. IpaH1.4) or altered infection levels.

      (3) The importance of IFNgamma priming for RNF213 association with the mxiE or ipaH1.4 strain could have been investigated further, as it is unclear whether RNF213 coating is enhanced due to increased protein expression of RNF213 or another factor. This is of interest because IFNgamma priming does not seem to be needed for RNF213 to detect and coat cytosolic Salmonella.

      Overall, the findings are important for the host-pathogen, cell-autonomous/innate immune signaling, and microbial pathogenesis fields. If further evidence for LUBAC-independent M1 ubiquitylation is achieved, this would represent a significant finding.

      We would like to thank the reviewer for their time evaluating our manuscript and the positive assessment of our work and its significance. We provide a comprehensive response to the three main critiques listed under ‘weaknesses’ and have also responded to each of the reviewer’s specific recommendations below. We highlight any changes made to the manuscript in response to those recommendations.

      (1) As the reviewer correctly pointed out, 7KR ubiquitin can not only be used for linear ubiquitylation but can also function as a donor ubiquitin, attached as mono-ubiquitin to a substrate or to an existing ubiquitin chain as a chain terminator. To distinguish between 7KR INT-Ub signals originating from linear versus mono-ubiquitylation, we followed the reviewer’s advice and generated an N-terminally tagged 7KR INT-Ub variant. The N-terminal tag prevents linear ubiquitylation but still allows 7KR INT-Ub to be attached as a mono-ubiquitin. We found that the addition of this N-terminal tag significantly reduced but did not completely abolish the number of ΔmxiE bacteria decorated with 7KR INT-Ub. These data are shown in a new Fig. S1 and indicate that 7KR INT-Ub lacking the N-terminal tag is attached to bacteria both in the form of linear (M1-linked) ubiquitin and as donor ubiquitin, possibly as a chain terminator. While we cannot rule out that the anti-M1 antibodies used here cross-react with other ubiquitin linkages, we reason that the 7KR data strongly argue that linear ubiquitin is part of the ubiquitin coat encasing IpaH1.4-deficient cytosolic Shigella. Collectively, our data show that both linear and lysine-linked (especially K27 and K63) ubiquitin chains are part of the RNF213-dependent ubiquitin coat on the surface of IpaH1.4 mutants. Furthermore, our data strongly indicate that this ubiquitylation of IpaH1.4 mutants is independent of LUBAC.

      (2) We used GFP-expressing strains of S. flexneri for our infection studies and were therefore able to use GFP expression as a loading control. We have incorporated these data into our revised figures. These new data (Figs. 4A, 5A, and S3B) show that bacterial infection levels were comparable between WT and mutant infections and that therefore the degradation of RNF213 (or HOIP – see new data in Fig. S3B) is not due to differences in infection efficiency.

      (3) We agree with the reviewer that the mechanism by which RNF213 binds to bacteria is an important unanswered question. Similarly, whether other ISGs have auxiliary functions in this process or whether binding efficiencies vary between different bacterial species are important questions in the field. However, these questions go far beyond the scope of this study and were therefore not addressed in our revisions.

      Reviewer #3 (Recommendations for the authors):

      (1) An N-terminally tagged K7R-Ub should be used as a control to test whether the signal found around the mutant Shigella is being added via the N-terminal Met into chains. As it is known that certain batches of the M1-specific antibodies are in fact not specific and are able to detect other chain types, the authors should test the specificity of the antibody used in this study (e.g. against different di-Ub linkage types) and include this data in the manuscript.

      We agree with the reviewer in principle. The anti-linear ubiquitin (anti-M1) monoclonal antibody, clone 1E3, used prominently in this study, was tested by the manufacturer (Sigma) by Western blotting analysis, and according to the manufacturer “this antibody detected ubiquitin in linear Ub, but not Ub K11, Ub K48, Ub K63.” However, this analysis did not include all possible Ub linkage types, and thus the reviewer is correct that the anti-M1 antibody could theoretically also detect some other linkage types. To address this concern, we added new data during revisions demonstrating that 7KR INT-Ub targeting to S. flexneri is largely dependent on the N-terminus (M1) of ubiquitin. Our combined observations therefore overwhelmingly support the conclusion that linear (M1-linked) as well as K-linked ubiquitin is attached to the surface of IpaH1.4-deficient S. flexneri bacteria in an RNF213-dependent and LUBAC-independent manner.

      (2) The M1 signal detected on bacteria with the antibody is still present in either Hoip or Hoil KOs, but due to the potential non-specificity of the antibody, the authors should test whether K7R Ub is detected on bacteria in the Hoil KO (in addition to the Hoip KO). This would strengthen the authors’ data on LUBAC-independent M1 and is important because Hoil can catalyse non-canonical ubiquitylation.

      The specific linear ubiquitin-ligating activity of LUBAC is enacted by HOIP. We show that linear ubiquitylation of susceptible S. flexneri mutants, as assessed by anti-M1 ubiquitin staining or 7KR INT-Ub recruitment, occurs in HOIP KO cells at WT levels (Figs. 3B, 3C, S3E [new data]). In our view, these data unequivocally show that the observed linear ubiquitylation of cytosolic S. flexneri ipaH1.4 and mxiE mutants is independent of LUBAC.

      (3) For Figure 4A, do mxiE bacteria show similar invasion? The authors should include a bacterial protein control to show levels of bacteria in WT and mxiE infected conditions. A similar control should be included in Figure 5A.

      We used GFP-expressing strains of S. flexneri for our infection studies and were therefore able to use GFP expression as a loading control. We have incorporated these data into our revised figures. These new data (Figs. 4A, 5A, and S3B) show that bacterial infection levels were comparable between WT and mutant infections and that therefore the degradation of RNF213 (or HOIP – see new data in Fig. S3B) is not due to differences in infection efficiency.

      (4) Can the authors speculate why IFNg priming is needed for the coating of Shigella mxiE mutant but not in the case of Salmonella or Burkholderia? Is this just amounts of RNF213 or something else?

      In our studies we did not directly compare ubiquitylation rates of cytosolic Shigella, Burkholderia, and Salmonella bacteria with each other under the same experimental conditions. However, such a direct comparison would be needed to determine whether IFNgamma priming is required for RNF213-dependent bacterial ubiquitylation of some but not other pathogens. Two papers published during the revisions of our manuscript (PMID: 40164614, PMID: 40205224) report robust RNF213 targeting to IpaH1.4 Shigella mutants in unprimed HeLa cells (whereas we used A549 and HT29 cells). Therefore, differences in reagents, cell lines, and/or other experimental conditions may determine whether IFNgamma priming is necessary to observe substantial RNF213 translocation to cytosolic bacteria.

      (5) Typos - there are several, but this is hard to annotate with line numbers so the authors should proofread again carefully.

      We proofread the manuscript and corrected the small number of typos we identified.

  4. docdrop.org
    1. fusion, and irrationality in public education. Public schools are essential to make the American dream work, but schools are also the arena in which many Americans first fail. Failure there almost certainly guarantees failure from then on. In the dream, failure results from lack of individual merit and effort;

      This line captures the deep contradiction at the heart of U.S. education. Schools are supposed to embody equality of opportunity, the pathway for any child to succeed regardless of background. Yet in reality, they often serve as the first site where systemic inequities based on class, race, and neighborhood become visible and consequential. The irony is striking: the very institution designed to level the playing field is where many children first experience structural disadvantage. This shows how schools both symbolize and betray the American Dream, reinforcing the gap between its promise and its practice.

    2. ss. Quality preschool, individual reading instruction, small classes in the early grades, and consistently challenging academic courses have been demonstrated to help disadvantaged children achieve, just as they enable middle-class children to achieve.

      This line points to a key paradox: we know what works to reduce inequality in education, yet these resources are disproportionately concentrated in schools serving wealthier families. The issue is not a lack of evidence but a lack of political will to distribute proven interventions equitably. This highlights how educational inequality is not accidental but sustained by choices to prioritize the comfort and success of privileged students while leaving poor children with fewer opportunities. The annotation emphasizes that inequality persists not because we lack solutions, but because those in power resist restructuring access to them.

    3. uired by law to attend separate and patently inferior schools. Yet this progress has met limits. Hispanics and inner city residents still drop out much more frequently than others, the gap between black and white achievement rose during the 1990s after declining in the previous decade, the achievement gap between students from lower-and higher-class families has barely budged, and poor students in poor urban schools have dramatically lower rates of literacy and arithmetic or scientific competence.

      This line shows how surface-level progress can mask persistent structural inequalities. While statistics like lower dropout rates suggest improvement, the deeper reality is that race and class still strongly determine educational outcomes. The endurance of achievement gaps reveals that reforms often address symptoms without dismantling systemic barriers—such as unequal funding, segregation, and generational poverty. It underscores a central contradiction: the promise of equal education is celebrated rhetorically, but the lived experiences of marginalized groups show how far the system is from delivering it.

    4. on. The American dream is egalitarian at the starting point in the "race of life," but not at the end. That is not the paradox; it is simply an ideological choice.

      This statement reveals the central paradox of American education: it promises equality of opportunity but not equality of outcomes. By framing success as a race, the metaphor implies that everyone begins at the same starting line, yet in reality, some children are advantaged by their parents’ resources, neighborhoods, and social capital. The finish line is therefore tilted from the outset. This exposes how the ideology of the American Dream obscures systemic inequalities by focusing on effort and talent, while ignoring inherited privilege and structural barriers that shape who “wins.”

    5. EPT. It encourages each person who lives in the United States to pursue success, and it creates the framework within which everyone can do it

      This sentence reflects both the power and the illusion of the American Dream. On the surface, it offers a unifying promise: that success is available to all who work hard. Yet the “framework” it describes assumes equal access to opportunity, ignoring the structural inequalities, like poverty, racism, systemic bias, that limit mobility for many. The language of universality masks exclusion, shifting responsibility for failure onto individuals rather than acknowledging broader barriers. By presenting the Dream as an open path, this narrative legitimizes inequality while sustaining faith in the system.

    1. But open-source alternatives require advanced programming knowledge

      Is QGIS WebClient as complicated as (or even more complicated than) Leaflet or Open Layers, or is it something else?

    2. the alignment of coordinate systems

      For a few months now it has been possible to connect the Claude LLM to QGIS (I don't have the exact method, but it can be found on YouTube and I saw it on LinkedIn). Couldn't we use AI to test a realignment of the coordinate systems? Beyond that, a talk at DistamLab by someone who uses AI in GIS could be interesting.

    3. a server on which the deposited data will be disseminated, and a Web-accessible interface, a WebGIS

      To redistribute them as WMS and WFS feeds? QGIS Server could be a solution, for example. (Estelle or Cécile, do you have any feedback from experience?) Regarding the storage of vectorized data, shouldn't we favor and promote the open "gpkg" (GeoPackage) format, which is more practical than SHP?

    4. Domain: Spatiality

      I'm coming in after Cécile, Estelle, and Vincent have written everything (thank you, by the way). My comments are more like reflections in progress.

    5. A discussion must be initiated to ensure reasoned and appropriate management of data volume (→ engage the 3D consortium in this discussion? study the experience of astronomers and of medical imaging?)

      From a reasoned-management perspective (one that takes ecological concerns into account), the discussion could address the selection of which rasters to put online. Are there formats other than TIFF that are just as performant but less bulky?

  5. docdrop.org
    1. ey. Investments in quality early childhood education not only has one of the highest yields (for every $1 spent on early education and care, $8 is saved on crime, public assistance, supplemental schooling, and so on) but is also one of the most important stages at which a child's educational trajectory is shaped (Nisbett, 2009).

      This line underscores the profound return of investing in early education, not just economically but socially. It highlights how early interventions ripple outward, improving literacy, reducing inequality, and even cutting long-term social costs. The insight here is that inequity often begins before formal schooling, making early childhood programs pivotal in either reinforcing or disrupting cycles of disadvantage. Despite the data, access remains unequal, illustrating the contradiction between what research proves to be effective and what society chooses to fund. By failing to prioritize universal early education, we undermine both children’s futures and the collective good.

    2. Historically and contemporarily, U.S. public schools illustrate the simplicity of reproduction, that is, the indelible relationship between current and eventual class membership, by way of replicating class status in the superior educational opportunities of those with more money.

      This line captures how schools function less as engines of mobility than as mechanisms of social reproduction. The phrase “simplicity of reproduction” suggests how effortless and predictable it is for wealth and privilege to perpetuate themselves through the education system. Access to better-funded schools, enriched curricula, and social networks ensures that class boundaries are rarely disrupted. Rather than being a “great equalizer,” public education often reinforces inherited inequalities by distributing opportunities in proportion to existing wealth. This highlights the irony that an institution designed to democratize opportunity is one of the clearest mirrors of social stratification.

    3. assets. The surest way to build wealth (as indicated by the real in real estate) is to own a home

      This line highlights how systemic inequality is embedded in the most ordinary path to economic stability: homeownership. Because housing policies and lending practices historically favored white families while excluding families of color, the intergenerational transfer of wealth has been profoundly unequal. The phrase underscores that wealth inequality is not accidental but structurally produced through discriminatory policies like redlining, restrictive covenants, and unequal access to mortgages. What looks like a neutral economic fact, that homeownership builds wealth, is actually a reflection of racialized access and denial, revealing how economic inequality is deeply tied to racial injustice.

    4. What scores of students (well-meaning educators, all) fail to realize is that public education does not serve its intended function as the great equalizer

      This line disrupts the deeply held myth that schooling is inherently liberatory. Instead of closing gaps, the very structure of public education often reproduces social hierarchies by tracking, unequal resource distribution, and cultural biases embedded in curricula. The phrasing “structure inequality” is powerful because it shifts the blame away from individual “failures” of poor students and reframes it as a systemic issue. This invites a critical shift in perspective: poverty and underperformance in schools are not evidence of personal shortcomings but symptoms of institutions designed to maintain stratification.

    5. He conceptualized public education as "the great equalizer," or the most powerful mechanism for abating class-based "prejudice and hatred," and, most important, the only means by which those without economic privilege or generational wealth could experience any hope of equal footing.

      This line underscores the radical promise embedded in the idea of public education: that schooling could serve as a pathway to dismantle entrenched social hierarchies. Yet the fact that this promise remains unfulfilled two centuries later exposes a painful paradox, that schools often reproduce inequality rather than resolve it. The phrase “great equalizer” is aspirational, but its persistent failure points to the structural barriers like poverty, racism, privatization that education alone cannot overcome. The insight here is that while education has transformative potential, it cannot function as a true equalizer without systemic change beyond the classroom.

    6. Poor people exist because they wasted a good, free education. The poor themselves are the problem

      Looking at this text, as someone who comes from a low-income community, I can say that the "advantages" the author talks about are not nearly as attainable for us as they are for students at a well-funded school.

    1. Multiculturalism compels educators to recognize the narrow boundaries that have shaped the way knowledge is shared in the classroom. It forces us all to recognize our complicity in accepting and perpetuating biases of any kind.

      Multiculturalism forces educators to recognize that the way knowledge is imparted in the classroom has actually been very narrowly confined by certain boundaries.

    2. Students taught me, too, that it is necessary to practice compassion in these new learning settings. I have not forgotten the day a student came to class and told me: "We take your class. We learn to look at the world from a critical standpoint, one that considers race, sex, and class. And we can't enjoy life anymore."

      The author remembers that one day, a student said to her, "We took your class and learned to view the world with a critical perspective, considering issues related to race, gender, and class. But as a result, we were no longer able to simply enjoy life."

    3. Making the classroom a democratic setting where everyone feels a responsibility to contribute is a central goal of transformative pedagogy.

      Transforming the classroom into a democratic environment where everyone feels responsible for participating is the core goal of "transformative teaching".

    4. To share in our efforts at intervention we invited professors from universities around the country to come and talk, both formally and informally, about the kind of work they were doing aimed at transforming teaching and learning so that a multicultural education would be possible

      In order to promote educational intervention and improvement, we invited professors from universities across the United States to engage in both formal and informal exchanges, sharing their ongoing work. The goal of this work is to transform teaching and learning and make multicultural education possible.

    5. Despite the contemporary focus on multiculturalism in our society, particularly in education, there is not nearly enough practical discussion of ways classroom settings can be transformed so that the learning experience is inclusive. If the effort to respect and honor the social reality and experiences of groups in this society who are nonwhite is to be reflected in a pedagogical process, then as teachers, on all levels, from elementary to university settings, we must acknowledge that our styles of teaching may need to change. Let's face it: most of us were taught in classrooms where styles of teaching reflected the notion of a single norm of thought and experience, which we were encouraged to believe was universal. This has been just as true for nonwhite teachers as for white teachers. Most of us learned to teach emulating this model.

      When most of us were being educated, we grew up in a classroom environment that only recognized a single way of thinking and experience as the "universal standard". Although in today's society, especially in the field of education, there is a strong emphasis on multiculturalism, there are very few actual discussions on how to truly make the classroom more inclusive.

    6. s process we build community. Despite the focus on diversity, our desires for inclusion, many professors still teach in classrooms that are predominantly white. Often a spirit of tokenism prevails in those settings.

      This line exposes the gap between rhetoric and practice in higher education. While institutions may claim to value diversity, the reality is that classrooms often remain centered on whiteness, with inclusion reduced to symbolic gestures. Tokenism not only fails to address systemic inequities but also places an unfair burden on the few students of color to “represent” entire communities. True inclusion requires more than the presence of diverse bodies; it demands structural change in curriculum, pedagogy, and the distribution of power within the classroom.

    7. Most of us learned to teach emulating this model. As a consequence, many teachers are disturbed by the political implications of a multicultural education because they fear losing control in a

      This line exposes how education often disguises cultural particularity as universality, erasing difference while privileging one dominant perspective. By presenting a single worldview as neutral or “normal,” traditional teaching reproduces inequality and leaves little room for alternative voices or ways of knowing. Recognizing this false universality is the first step toward creating classrooms that value multiple perspectives and challenge the myth of neutrality. True multicultural education requires dismantling this illusion so that learning reflects the diverse realities of students’ lives.

    8. nbiased liberal arts education. Multiculturalism compels educators to recognize the narrow boundaries that have shaped the way knowledge is shared in the classroom. It forces us all to recognize our complicity in accepting and perpetuating biases of any kind

      This statement reveals how education is never neutral as it is shaped by long histories of exclusion and bias that often go unexamined. Recognizing complicity is uncomfortable, but it is also necessary if educators are to move beyond reproducing dominant perspectives. The line underscores that true multiculturalism is not just about adding diverse content to a syllabus, but about rethinking the very structures through which knowledge is validated and shared. By confronting these boundaries, educators can begin to transform classrooms into spaces of liberation rather than reproduction of inequality.

    9. shifting paradigms and talk about the discomfort it can cause. White students learning to think more critically about questions of race and racism may go home for the holidays and suddenly see their parents in a different light

      Education is not just about acquiring knowledge; it can alter the lens through which students interpret their closest relationships and everyday environments. That discomfort is a sign of growth, revealing how deeply ingrained social norms are challenged in the process of learning. The shift in perspective demonstrates that classrooms are not isolated spaces of theory but catalysts for real-world reexamination, where the personal and political collide.


    11. ven if we cannot read the signs) they make their presence felt. When I first entered the multicultural, multiethnic classroom setting I was unprepared. I did not know how to cope effectively with so much "difference."

      This line reveals how diversity in the classroom cannot be met with good intentions alone. Even educators who support progressive politics often lack the practical tools and experience to engage with real cultural difference. Acknowledging unpreparedness is powerful because it highlights that genuine multicultural teaching requires self-reflection, humility, and new strategies. Rather than assuming inclusivity comes naturally, this moment illustrates that teachers must be willing to relearn and adapt, modeling the same openness to growth they ask of their students.

    12. Emphasizing that a white male professor in an English department who teaches only work by "great white men" is making a political decision,

      This line highlights the illusion of neutrality in education. By framing the act of teaching a narrow, Eurocentric canon as "just the tradition," educators conceal the power structures embedded in those choices. Curriculum is never apolitical as omissions and inclusions both communicate values. Choosing not to expand beyond "great white men" reproduces systemic exclusion while presenting itself as objective. The insight is that resisting change is not simply inertia, but an active reinforcement of dominant ideologies. This makes clear why critical pedagogy insists on questioning what knowledge is legitimized and whose voices are heard in the classroom.

    13. They have told me that many professors never showed any interest in hearing their voices. Accepting the decentering of the West globally, embracing multiculturalism, compels educators to focus attention on the issue of voice. Who speaks? Who listens? And why? Caring about whether all students fulfill their responsibility to contribute to learning in the classroom is not a common approach in what Freire has called the "banking system of education" where students are regarded merely as passive consumers

      I agree with the idea of fostering an inclusive and engaging environment. This text acknowledges that some students don't feel valued and heard. It is essential that educators actively listen and encourage participation to improve students' experiences.

    14. This reminded us that it is difficult for individuals to shift paradigms and that there must be a setting for folks to voice fears, to talk about what they are doing, how they are doing it, and why. One of our most useful meetings was one in which we asked professors from different disciplines (including math and science) to talk informally about how their teaching had been changed by a desire to be more inclusive. Hearing individuals describe concrete strategies was an approach that helped dispel fears.

      Creating an inclusive space is challenging when educators are afraid to voice their concerns, but it is better to voice them than to go without knowing. Unspoken fears can lead to ineffective teaching; when concerns are expressed, however, they create an opportunity for growth. Collaborative sharing across disciplines can strengthen inclusivity and make it more effective for teachers and students.

    15. Emphasizing that a white male professor in an English department who teaches only work by "great white men" is making a political decision, we had to work consistently against and through the overwhelming will on the part of folks to deny the politics of racism, sexism, heterosexism, and so forth that inform how and what we teach

      The resistance of professors is telling, as the ongoing struggle to recognize and challenge what is taught to children is concerning. The author makes a pivotal point that this is a larger issue within the curriculum: specific teaching choices carry political stances, even when those doing the teaching are unaware of them.

    16. There must be training sites where teachers have the opportunity to express those concerns while also learning to create ways to approach the multicultural classroom and curriculum.

      I believe that it is important for teachers to express any concerns or questions they have about teaching a multicultural classroom and curriculum. Doing so can help teachers develop approaches that avoid confusion and the spread of misconceptions across subjects.

    17. If the effort to respect and honor the social reality and experiences of groups in this society who are nonwhite is to be reflected in a pedagogical process, then as teachers-on all levels, from ele-mentary to university settings-we must acknowledge that our styles of teaching may need to change.

      I believe that it is crucial to establish a curriculum that aligns with factual evidence, ensuring that every student can grasp it without being subjected to a one-sided ideological perspective. This can be achieved through diverse learning styles, incorporating social aspects into the subject, and fostering an inclusive environment.

  6. notebooksharing.space notebooksharing.space
    1. | Huai Kha Khaeng | Oct 2025? | ForestGEO plot | 50 | ? | 1 | Seasonal evergreen and deciduous forest |
       | Kapook Kapieng | 16 | ? | 1 | Seasonal deciduous forest |
       | Huai Krading | 16 | ? | 1 cm | Seasonal evergreen and deciduous forest |

      Not rendering well. Not important to fix at this point.

    2. This analysis aims to inform GEO-TREES to evaluate if proposed sites have sufficient plot data and to guide the placement of new plots by investigating the following research questions:

      This is the immediate need, but will need to be framed more generally in the eventual paper.

    3. Forest Ecology and Management? Biogeosciences? Methods in Ecology and Evolution?

      I think in this order of preference:

      1. Forest Ecology and Management
      2. Methods in Ecology and Evolution
      3. Biogeosciences
    1. eLife Assessment

      This study presents important methodologies for repeated brain ultrasound localization microscopy (ULM) in awake mice and a set of results indicating that wakefulness reduces vascularity and blood flow velocity. The data supporting these findings are solid. This study is relevant for scientists investigating vascular physiology in the brain.

    2. Reviewer #1 (Public review):

      Summary:

      Wang and Colleagues present a study aimed at demonstrating the feasibility of repeated ultrasound localization microscopy (ULM) recording sessions on mice chronically implanted with a cranial window transparent to US. They provided quantitative information on their protocol, such as the required number of Contrast enhancing microbubbles (MBs) to get a clear image of the vasculature of a brain coronal section. Also, they quantified the co-registration quality over time-distant sessions and the vasodilator effect of isoflurane.

      Strengths:

      The study showed a remarkable performance in recording precisely the same brain coronal section over repeated imaging sessions. In addition, it sheds light on the vasodilator effect of isoflurane (an anesthetic whose effects are not fully understood) on the different brain vasculature compartments, although, as the Authors stated, some insights in this aspect have already been published with other imaging techniques. The experimental setting and protocol are very well described.

      In this newly revised version, the Authors made evident efforts to strengthen the messages of their study. All the limitations of their research have been clearly acknowledged.

      A central issue remains. To answer my concerns about the need for multivariate analyses, the Author stated that: "Due to the limited number of animals used, the analyses presented in this work should be interpreted as example case studies." Although this sentence does not convince me, if the purpose of this study was to showcase the potentialities of ULM for future longitudinal awake studies, why don't they avoid any statistics? The trend for decreased vein size and increased arterial blood flow during wakefulness is evident from the plot and physiologically plausible. Why impose wrong statistics instead of dropping them altogether? I do not see the lack of statistics as detrimental to this study, based on the feedback received from the Authors.

    3. Reviewer #2 (Public review):

      Summary:

      The authors present a very interesting collection of methods and results using brain ultrasound localization microscopy (ULM) in awake mice. They emphasize the effect of the level of anesthesia on the quantifiable elements assessable with this technique (i.e. vessel diameter, flow speed, in veins and arteries, area perfused, in capillaries) and demonstrate the possibility of achieving longitudinal cerebrovascular assessment in one animal during several weeks with their protocol.

      The authors did a good job rewriting the article based on the reviewers' comments. One of the messages of the first version of the manuscript was that variability in measurements (vessel diameter, flow velocity, vascularity) was much more pronounced under changes of anesthesia than when considering longitudinal imaging across several weeks. This message is now somewhat mitigated, as longitudinal imaging seems to show a variability close to the order of magnitude observed under anesthesia. In that sense, the review process was useful in avoiding hasty conclusions and calls for further caution in ULM awake longitudinal imaging, in particular regarding precision of positioning and cancellation of tissue motion.

      Strengths:

      Even if the methods elements considered separately are not new (brain ULM in rodents, setup for longitudinal awake imaging similar to those used in fUS imaging, quantification of vessel diameters/bubble flow/vessel area), when masterfully combined as it is done in this paper, they answer two questions that have been long-running in the community: what is the impact of anesthesia on the parameters measured by ULM (and indirectly in fUS and other techniques)? Is it possible to achieve ULM in awake rodents for longitudinal imaging? The manuscript is well constructed, well written, and graphics are appealing.

      The manuscript has been much strengthened by the round of review, with more animals for the longitudinal imaging study.

      Weaknesses:

      The manuscript has been only marginally modified since our last round of review, so there is probably not much we reviewers can additionally elaborate to improve it. Therefore my last concerns about the reliability of longitudinal quantifications and about certain discrepancies remain for this paper. As a general piece of advice, I would just say that every claim ('is higher', 'is lower', 'is stable') should be supported by evidence and statistical testing if that is not already the case.

      Response 06: the authors' response is not satisfactory. Even if the difference in terms of ROI boundaries between fig 4e and fig 4j has been underlined by the authors, they only provide a wordy comment and no additional quantitative analysis that could explain the discrepancy I pointed out. By doing so they take the risk of making misinterpretations. The reader is left with a discrepancy that could be explained by two mechanisms: either pial vessel populations behave differently from penetrating arterioles and venules, or the imaging of pial vessels with ULM is not good enough to enable proper quantification because the vessels are not clearly visible (out-of-plane extent). In any case, Figure 4j does not "provide a more comprehensive representation of cortical vasculature" as stated. If the changes in pial vessels cannot be reliably measured, they should be excluded from the ROI.

      Line 161: be careful with the use of vessel density, as pointed out by reviewer 1.

      Line 196: "the decrease in venous vessel area (averaging 55% across mice) was greater than that of arterial (averaging 35%)" no stat test has been performed.

    4. Author response:

      The following is the authors’ response to the previous reviews

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      Wang and Colleagues present a study aimed at demonstrating the feasibility of repeated ultrasound localization microscopy (ULM) recording sessions on mice chronically implanted with a cranial window transparent to US. They provided quantitative information on their protocol, such as the required number of Contrast enhancing microbubbles (MBs) to get a clear image of the vasculature of a brain coronal section. Also, they quantified the co-registration quality over time-distant sessions and the vasodilator effect of isoflurane.

      Strengths:

      The study showed a remarkable performance in recording precisely the same brain coronal section over repeated imaging sessions. In addition, it sheds light on the vasodilator effect of isoflurane (an anesthetic whose effects are not fully understood) on the different brain vasculature compartments, although, as the Authors stated, some insights in this aspect have already been published with other imaging techniques. The experimental setting and protocol are very well described.

      Wang and co-authors submitted a revised version of their study, which shows improvements in the clarity of the data description.

      However, the flaws and limitations of this study are substantially unchanged.

      The main issues are:

      Statistics are still inadequate. The TOST test proposed in this revised version is not equivalent to an ANOVA. Indeed, multivariate analyses should be the most appropriate, given that some quantifications were probably made on multiple vessels from different mice. The 3 reviewers mentioned the flaws in statistics as the primary concern.

      Response 01: We thank the reviewer for raising this important point. We fully acknowledge the limitations of our current statistical analysis. We would like to clarify that the TOST procedure was applied exclusively to the measurements taken from the same vessel segment in the same animal across different time points, with the purpose of evaluating the consistency of vessel diameter measurements. We recognize that the statistical analysis in this study remains limited, which we have acknowledged as a key limitation in the manuscript. This constraint arises primarily from the limited number of animals, and our analysis should be interpreted as a representative case study rather than a generalized statistical conclusion. We have revised the manuscript to clarify these points and to more explicitly acknowledge the statistical limitations.

      (Line 329) “Our current study primarily focused on demonstrating the feasibility of longitudinal ULM imaging in awake animals, instead of conducting a systematic investigation of how isoflurane anesthesia alters cerebral blood flow. Due to the limited number of animals used, the analyses presented in this work should be interpreted as example case studies. While the trends observed across animals were consistent, the small sample size restricts the scope of statistical inference. For future work, it would be valuable to design more rigorous control experiments with larger sample sizes to systematically compare the effects of isoflurane anesthesia, awake states, and other anesthetics that do not induce vasodilation on cerebral blood flow.”
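      As an aside for readers, the TOST (two one-sided tests) equivalence procedure that Response 01 refers to can be sketched in a few lines. This is a generic illustration of a paired-sample TOST, not the authors' actual analysis code; the function name and equivalence bounds are ours:

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, low, high):
    """Two one-sided tests (TOST) for equivalence of paired measurements.

    Declares equivalence when the mean difference x - y lies within the
    pre-specified bounds [low, high]; returns the larger of the two
    one-sided p-values (reject non-equivalence when it is below alpha).
    """
    d = np.asarray(x, float) - np.asarray(y, float)
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t_low = (d.mean() - low) / se    # H0: mean difference <= low
    t_high = (d.mean() - high) / se  # H0: mean difference >= high
    p_low = 1.0 - stats.t.cdf(t_low, df=n - 1)
    p_high = stats.t.cdf(t_high, df=n - 1)
    return max(p_low, p_high)
```

      In the setting described in Response 01, `x` and `y` would be diameter measurements of the same vessel segments at two time points, with bounds set to the largest difference considered physiologically negligible.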

      No new data has been added, such as testing other anesthetics.

      Response 02: We acknowledge that the current study does not include data involving other anesthetics, and we have also discussed this point in our initial response. In fact, we did attempt to use other anesthetics such as ketamine. However, we found it difficult to draw reliable conclusions due to experimental limitations such as variable anesthesia recovery profiles and injection timing, as elaborated in the following paragraphs. Therefore, we decided not to include these data in the current study to avoid potential misinterpretation.

      One major limitation of our experimental setup is that imaging in the awake state is necessarily conducted after a brief period of isoflurane-anesthesia. This brief anesthesia allows for the intravenous injection of microbubbles via the tail vein. Isoflurane is particularly suited for this purpose due to its rapid onset and offset. Mice can recover quickly once the gas is withdrawn, which enables relatively consistent post-anesthesia imaging in the awake state.

      In contrast, other anesthetic agents present challenges. Their recovery profiles are slower, more variable, and less controllable. Reversal drugs can be administered to awaken the animals, but they add another source of variability. These factors may lead to greater fluctuations in cerebral hemodynamics and introduce uncertainty in the timing of the bolus microbubble injection. As such, our current setup is not ideal for systematically comparing different anesthetics and could yield misleading results.

      A more appropriate strategy for comparing awake ULM imaging with different anesthetics would be performing awake imaging first, followed by imaging under anesthesia. This would ensure that the awake condition is free from residual anesthetic effects. However, this method places higher demands on bubble delivery, as no anesthesia can be used for the intravenous injection.

      To address this, we are actively exploring another solution using indwelling jugular vein catheterization. By surgically implanting a catheter into the jugular vein prior to imaging, we can establish a stable and reproducible route for microbubble delivery in fully awake animals without any anesthesia induction. This method has the potential to enable direct and reliable comparisons across different physiological states. However, the implementation of this technique and the associated experimental findings go beyond the scope of the current study and will be presented in a future manuscript.

      In the present work, we have emphasized the methodological limitations of our approach and clarified that our primary goal is to highlight the necessity and feasibility of awake-state ULM imaging. The focus is not to comprehensively characterize the effects of different anesthetic agents on microvascular brain flow. We appreciate your understanding and interest in this important future direction. 

      Based on the responses and previous revision, we have further refined the discussion of the relevant limitations:

      (Line 324) “Although isoflurane is widely used in ultrasound imaging because it provides long-lasting and stable anesthetic effects, it is important to note that the vasodilation observed with isoflurane is not representative of all anesthetics. Some anesthesia protocols, such as ketamine combined with medetomidine, do not produce significant vasodilation and are therefore preferred in experiments where vascular stability is essential, such as functional ultrasound imaging. Our current study primarily focused on demonstrating the feasibility of longitudinal ULM imaging in awake animals, instead of conducting a systematic investigation of how isoflurane anesthesia alters cerebral blood flow. Due to the limited number of animals used, the analyses presented in this work should be interpreted as example case studies. While the trends observed across animals were consistent, the small sample size restricts the scope of statistical inference. For future work, it would be valuable to design more rigorous control experiments with larger sample sizes to systematically compare the effects of isoflurane anesthesia, awake states, and other anesthetics that do not induce vasodilation on cerebral blood flow.”

      (Line 347) “Another limitation of this study is the potential residual vasodilatory effect of isoflurane anesthesia on awake imaging sessions and the short imaging window available after bolus injection. The awake imaging sessions were conducted shortly after the mice had emerged from isoflurane anesthesia, required for the MB bolus injections. The lasting vasodilatory effects of isoflurane may have influenced vascular responses, potentially contributing to an underestimation of differences in vascular dynamics between anesthetized and awake state. In addition, since microbubbles are rapidly cleared from circulation, the duration of effective imaging is limited to only a few minutes, which also overlaps with the anesthesia recovery period, constraining the usable awake-state imaging window. Future improvement on microbubble infusion using an indwelling jugular vein catheter presents a promising alternative to address these limitations. This method allows for stable microbubble infusion without the need for anesthesia induction, ensuring that the awake imaging condition is free from residual anesthetic effects. Moreover, it has the potential to extend the duration of imaging sessions, offering a longer and more stable time window for data acquisition. Furthermore, by performing ULM imaging in the awake state first, instead of starting with anesthetized imaging, researchers can achieve a more rigorous comparison of how various anesthetics influence cerebral microvascular dynamics relative to the awake baseline.”

      The Authors still insist on using the term Vascularity which they define as: 'proportion of the pixel count occupied by blood vessels within each ROI, obtained by binarizing the ULM vessel density maps and calculating the percentage of the pixels with MB signal.'. Why not use apparent cerebral blood volume or just CBV? Introducing an unnecessary and redundant term is not scientifically acceptable. In this revised version, vascularity is also used to indicate a higher vascular density (Line 275), which does not make sense: blood vessels do not generate from the isoflurane to the awake condition in a few minutes. Rev2 also raised this point.

      Response 03: Thank you for revisiting this important point. We acknowledge that the term vascularity is difficult to interpret for readers, and we also recognize that we did not sufficiently justify its use in the earlier version.

      Based on your suggestion, we have now replaced all instances of “vascularity” with “fractional vessel area”. While the underlying definition remains the same, fractional vessel area offers a more intuitive description. The term “fractional” denotes that the vessel area is normalized to the total area of the selected ROI. This normalization is essential for fair comparisons across ROIs of different sizes, such as Figures 4i–k to evaluate various brain regions. We would also like to clarify that this was not introduced as an unnecessary or redundant term, but rather as a more suitable metric for longitudinal ULM analysis. We did consider using apparent cerebral blood volume (CBV), estimated from microbubble counts. However, we found that it was less robust and meaningful in the context of longitudinal ULM comparisons. Below we provide further justification for using the vessel area instead:

      (1) Using the vessel area is more robust:

      In longitudinal ULM comparisons, normalization across time points is essential to enable fair and meaningful comparisons. In our study, we normalized the data based on a cumulative 5 million microbubbles (e.g., Fig. 2). Other normalization strategies could also be adopted, as long as the resulting vascular maps reach a sufficiently saturated state. However, even with normalization, it remains important to use a quantitative metric that is minimally biased and invariant to experimental fluctuations across time points. Vessel area, derived from binarized vessel maps, is less sensitive to variations in acquisition time and microbubble concentration. This is because repeated microbubble trajectories through the same location are not counted multiple times. In contrast, apparent CBV, calculated from the microbubble counts, is more susceptible to different concentration conditions. Since repeated detections in the same location accumulate, the metric can be dependent on injection efficiency and imaging duration. While CBV may still be valid under well-controlled, steady-state conditions, we found the vessel area to be a more robust and reliable metric for longitudinal analysis under our current bolus-injection protocol.

      (2) Using the vessel area is more meaningful:

      Compared to CBV, the vessel area provides a more direct representation of structural characteristics such as vessel diameter. Anesthesia-induced vasodilation leads to an increase in vessel diameter. Although local diameter changes can be assessed by manually selecting vessel segments, this approach is labor-intensive and prone to selection bias. To enable a more comprehensive and objective assessment of such morphological changes, fractional vessel area provides a more informative alternative to CBV, as it captures diameter-related variations at a global or regional scale, and avoids potential biases associated with manually selecting specific vessels or regions.

      In response to: vascularity is also used to indicate a higher vascular density (Line 275), which does not make sense: blood vessels do not generate from the isoflurane to the awake condition in a few minutes.

      We agree that blood vessels cannot be generated in a few minutes. Vascularity (now fractional vessel area) should be interpreted as apparent vessel density, which reflects a probabilistic estimate of vessel density based on the detectable microbubbles.

      Both apparent vessel density and apparent CBV are indirect, sampling-based approximations of vascular features, and both are fundamentally limited by microbubble detection sensitivity. Low microbubble concentrations lead to underestimation of both CBV and vessel area. A change from zero to non-zero in these metrics does not imply the physical appearance or disappearance of vessels, but rather reflects a change in the likelihood of detecting flow in each region.

      In summary, while neither fractional vessel area (vascularity in previous versions) nor apparent CBV is a perfect metric due to the inherent limitations of ULM, we believe the vessel area provides a more robust and meaningful parameter for our longitudinal comparisons. We have revised the main text to include this explanation and acknowledge the limitations and interpretation of fractional vessel area more explicitly.

      Revision in Results:

      (Line 181) “To validate the broader applicability of our findings, we conducted ROI-based analyses using fractional vessel area and mean velocity as primary metrics. These metrics extended the analysis of vessel diameter and flow velocity to entire brain regions or selected ROIs, which provides a more objective assessment of cerebral blood flow changes at a global scale and reduces the bias associated with manually selecting vessel segments. For vessel area measurements, the term fractional denotes that the vessel area is normalized to the total area of the selected ROI. This normalization is essential for fair comparisons across ROIs of different sizes.”

      Revision in Methods: definition of vascularity

      (Line 571) “In ROI-based analysis, we focused on two primary parameters: fractional vessel area and mean velocity. Fractional vessel area was defined as the proportion of the pixel count occupied by blood vessels within each ROI, obtained by binarizing the ULM vessel density maps and calculating the percentage of the pixels with MB signal. Mean velocity was calculated by averaging all non-zero velocity estimates within the ROI. The velocity distribution within each ROI was also visualized using violin plots, as shown in Fig. 2, 4 and 6, to illustrate the range and density of flow velocity estimates across different acquisitions. In this study, we focused on these two metrics because they represent the most straightforward extension of single-vessel analysis to brain-wide vascular changes.”
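      The two ROI metrics defined in the quoted Methods text reduce to simple array operations. As a hedged sketch (NumPy, with function names of our own choosing; the authors' released tool is in MATLAB):

```python
import numpy as np

def fractional_vessel_area(density_map, roi_mask, threshold=0.0):
    """Fraction of ROI pixels containing microbubble signal.

    density_map: 2-D ULM vessel density map (MB counts per pixel).
    roi_mask: boolean mask of the same shape selecting the ROI.
    """
    vessel = density_map > threshold  # binarize the density map
    return vessel[roi_mask].mean()    # fraction of ROI pixels that are vessel

def mean_velocity(velocity_map, roi_mask):
    """Average of the non-zero velocity estimates inside the ROI."""
    v = velocity_map[roi_mask]
    v = v[v != 0]
    return v.mean() if v.size else 0.0
```

      Binarizing before counting means a vessel pixel contributes once no matter how many microbubbles traverse it, which is the robustness argument made in Response 03.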

      We put our ROI analysis code on GitHub and added a “Code availability” section. We hope it can serve as a foundation for users to explore different quantitative metrics in their own longitudinal ULM studies and inspire further exploration.

      (Line 578) “Code availability

      To support quantitative longitudinal analysis of ULM data, we developed an open-source MATLAB application (https://github.com/ekerwang/ULMQuantitativeAnalysis). This tool is designed to facilitate ROI-based analysis of ULM images for longitudinal comparisons. It supports multiple quantification metrics, including but not limited to vessel area and mean velocity used in this study. Users can select and adapt different metrics based on their specific applications, as a wide range of ULM-based quantification metrics have been developed for different pathological and pharmacological studies.”

      The long-term recordings mentioned by the Authors refer to the 3-week time frame analyzed in this study. However, within each acquisition, the time available from imaging is only a few minutes (< 10', referring to most of the plots showing time courses) after the animals' arousal from isoflurane and before bubbles disappear. This limitation should be acknowledged.

      Response 04: Thank you for this comment. We agree that the current imaging sessions are constrained by the short time window available after the animal’s arousal from isoflurane and before bubbles disappear. This limitation indeed restricts the duration of usable awake-state imaging in our current bolus injection protocol. As discussed earlier, we are actively exploring the use of a jugular vein catheterization approach to address this limitation. This approach has the potential to extend the imaging session duration and provide a longer, more stable time window. We have now acknowledged this limitation more explicitly in the revised Discussion section.

      (Line 347) “Another limitation of this study is the potential residual vasodilatory effect of isoflurane anesthesia on awake imaging sessions and the short imaging window available after bolus injection. The awake imaging sessions were conducted shortly after the mice had emerged from isoflurane anesthesia, required for the MB bolus injections. The lasting vasodilatory effects of isoflurane may have influenced vascular responses, potentially contributing to an underestimation of differences in vascular dynamics between anesthetized and awake state. In addition, since microbubbles are rapidly cleared from circulation, the duration of effective imaging is limited to only a few minutes, which also overlaps with the anesthesia recovery period, constraining the usable awake-state imaging window. Future improvement on microbubble infusion using an indwelling jugular vein catheter presents a promising alternative to address these limitations. This method allows for stable microbubble infusion without the need for anesthesia induction, ensuring that the awake imaging condition is free from residual anesthetic effects. Moreover, it has the potential to extend the duration of imaging sessions, offering a longer and more stable time window for data acquisition. Furthermore, by performing ULM imaging in the awake state first, instead of starting with anesthetized imaging, researchers can achieve a more rigorous comparison of how various anesthetics influence cerebral microvascular dynamics relative to the awake baseline.”

      The more precise description of the number of mice and blood vessels analyzed in Figure 6 makes it apparent the limited number of independent samples used to support the findings of this work. A limitation that should be acknowledged. The newly provided information added as Supplementary Figure 1 should be moved to the main text, eventually in the figure legends. The limited data in support of the findings was also highlighted by Rev2 and, indirectly, by Rev3.

      Response 05: We acknowledge the limited number of independent samples used in this study. In the revised manuscript, we have explicitly emphasized this limitation in the Discussion section. Specifically, we added the following statement:

      (Line 329) “Our current study primarily focused on demonstrating the feasibility of longitudinal ULM imaging in awake animals, instead of conducting a systematic investigation of how isoflurane anesthesia alters cerebral blood flow. Due to the limited number of animals used, the analyses presented in this work should be interpreted as example case studies. While the trends observed across animals were consistent, the small sample size restricts the scope of statistical inference. For future work, it would be valuable to design more rigorous control experiments with larger sample sizes to systematically compare the effects of isoflurane anesthesia, awake states, and other anesthetics that do not induce vasodilation on cerebral blood flow.”

Following your suggestion, we have also moved the newly provided information (the table in Supplementary Figure 1) into the figure captions. In addition, we have modified the Methods section to ensure that this information is clear.

      (Line 406) “Eight healthy female C57 mice (8-12 weeks) were used for this study, numbered as Mouse 1 to Mouse 8. Three mice (Mouse 1–3) were used to compare imaging results between awake and anesthetized states (Fig. 3 and 4). Three additional mice (Mouse 4–6) underwent longitudinal imaging over a three-week period (Fig. 5 and 6). Among them, Mouse 4 was also used as an example to demonstrate the overall system schematic and saturation conditions (Fig. 1 and 2). Several mice (Mouse 2, 6, 7, and 8) exhibited suboptimal cranial window quality or image artifacts and were included to illustrate common surgical or imaging issues (Supplementary Fig. 1). The specific usage of each animal is also annotated in the corresponding figure captions.”

      Reviewer #2 (Public Review):

      The authors present a very interesting collection of methods and results using brain ultrasound localization microscopy (ULM) in awake mice. They emphasize the effect of the level of anesthesia on the quantifiable elements assessable with this technique (i.e. vessel diameter, flow speed, in veins and arteries, area perfused, in capillaries) and demonstrate the possibility of achieving longitudinal cerebrovascular assessment in one animal during several weeks with their protocol.

The authors did a good job of rewriting the article based on the reviewers' comments. One of the messages of the first version of the manuscript was that variability in measurements (vessel diameter, flow velocity, vascularity) was much more pronounced under changes of anesthesia than across longitudinal imaging over several weeks. This message is now somewhat mitigated, as longitudinal imaging seems to show a certain variability close to the order of magnitude observed under anesthesia. In that sense, the review process was useful in avoiding hasty conclusions and calls for further caution in awake longitudinal ULM imaging, in particular regarding precision of positioning and cancellation of tissue motion.

      Strengths:

Even if the methodological elements considered separately are not new (brain ULM in rodents, a setup for longitudinal awake imaging similar to those used in fUS imaging, quantification of vessel diameters/bubble flow/vessel area), when masterfully combined as in this paper, they answer two questions that have been long-standing in the community: what is the impact of anesthesia on the parameters measured by ULM (and indirectly in fUS and other techniques)? Is it possible to achieve ULM in awake rodents for longitudinal imaging? The manuscript is well constructed, well written, and the graphics are appealing.

      The manuscript has been much strengthened by the round of review, with more animals for the longitudinal imaging study.

      Weaknesses:

Some weaknesses remain that, while not detracting from the quality of the work, the authors might want to address or explain.

      When considering fig 4e and fig 4j together: it seems that in fig 4e the vascularity reduction in the cortical ROI is around 30% for downward flow, and around 55% for upward flow; but when grouping both cortical flows in fig 4j, the reduction is much smaller (~5%), even at the individual level (only mouse 1 is used in fig 4e). Can you comment on that?

      Response 06: Thank you for carefully pointing this out. This discrepancy arises primarily from differences in ROI selections.

The vascularity metric (which we have renamed fractional vessel area, based on Reviewer 1’s comments) is calculated as the proportion of vessel-occupied pixels relative to the total ROI area. As such, it is best suited for longitudinal comparisons within the same ROI rather than for across-ROI comparisons, particularly when the size and vessel composition of the ROIs differ.
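As a rough illustration of the metric described above (a hypothetical sketch, not taken from the authors' released MATLAB tool; the function and variable names are invented here), fractional vessel area can be computed from a binary vessel mask and a binary ROI mask:

```python
import numpy as np

def fractional_vessel_area(vessel_mask, roi_mask):
    """Fraction of ROI pixels occupied by vessels.

    vessel_mask, roi_mask: boolean arrays of the same shape,
    e.g. a thresholded ULM density map and a hand-drawn ROI
    (illustrative inputs, not the authors' actual pipeline).
    """
    roi_pixels = np.count_nonzero(roi_mask)
    if roi_pixels == 0:
        raise ValueError("ROI mask is empty")
    vessel_in_roi = np.count_nonzero(vessel_mask & roi_mask)
    return vessel_in_roi / roi_pixels
```

Because the count is normalized by the ROI's own area, values are comparable across sessions for the same ROI, but not necessarily across ROIs with different sizes or vessel compositions, which is the caveat noted above.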

      In Fig. 4e, the cortical ROI includes mostly the penetrating vessels, which are selected due to their clear distinction between upward (venous) and downward (arterial) flow directions. Pial vessels were intentionally excluded because flow direction alone does not reliably distinguish arteries from veins in these surface vessels. Thus, the goal of this analysis was to indicate arteriovenous differences, rather than to represent the full cortical vascular changes.

In contrast, the ROIs used in Fig. 4j aim to provide a more comprehensive view of cortical vascular responses without distinguishing flow direction; for this reason, both penetrating and pial vessels are included. Since pial vessels showed relatively smaller vascularity changes within the coronal cross-sections analyzed in our study, their inclusion in the cortical ROI likely contributed to the smaller overall reduction in vascularity observed in Figure 4j.

      To address this potential confusion, we have added further clarification in the Results section of the revised manuscript.

      (Line 209) “It is worth noting that prior analyses (Fig. 4d–h) aimed to illustrate arteriovenous differences. Since pial vessels are difficult to distinguish as arteries or veins based on flow direction in coronal plane imaging, they were excluded from the ROI selection in those analyses. In the current whole-brain comparisons (Fig. 4i-k), the cortical ROIs no longer exclude pial vessels, since distinguishing between arteries and veins is not required. This aims to provide a more comprehensive representation of cortical vasculature.”

When considering fig 4e, fig 4j, fig 6e and fig 6i altogether, it seems that vascularity can be highly variable, whether under anesthesia or in longitudinal imaging, with changes between 5 and 40%. Is this vascularity quantification worth it (namely, reliable enough, for example, to quantify changes in a pathological model requiring longitudinal imaging)?

      Response 07: Thank you for raising this important point. We found that imaging in the awake state is inherently more variable than under anesthesia. In contrast, anesthetized imaging offers a more controlled and stable physiological condition, as anesthesia suppresses many sources of variation. For pathological studies, if the vascular or hemodynamic changes induced by anesthesia do not interfere with the scientific question being addressed, imaging under anesthesia can still be a practical and effective approach, due to its experimental simplicity and better physiological consistency.

      The higher variability observed in awake imaging arises from both physiological fluctuations in animals and unavoidable experimental inconsistencies, such as small misalignment on the imaging plane across sessions. If the research question aims to avoid the confounding effects of anesthesia, then instead of suppressing variation through anesthesia, it is important to acknowledge the natural baseline variation in the awake state. However, efforts should be made to minimize technical sources of variation. We have added a brief discussion of this issue at the end of the manuscript to reflect this consideration.

(Line 396) “However, it is also important to note that although longitudinal awake imaging holds promise for avoiding the confounding effects of anesthetics, imaging under anesthesia remains more convenient and controllable in many cases. For applications where the physiological question of interest is not sensitive to anesthesia-induced vascular effects, anesthetized imaging still offers a simpler and more stable approach. Awake imaging inherently exhibits greater physiological variability; care must therefore be taken at the experimental level to minimize confounding sources of variation, such as the stress level of the animal or handling inconsistencies, to ensure that the measurements are physiologically meaningful.”

      Regarding whether fractional vessel area (formerly referred to as vascularity) is a worthwhile metric for longitudinal quantification: based on our experience and comparisons, we found vessel area to be relatively robust and informative (see also Response 02 to Reviewer 1 for details). However, we acknowledge that other quantitative metrics—such as microbubble count, tortuosity, or flow directionality—may be more suitable depending on the specific pathological model or research question. How these metrics perform in awake imaging and longitudinal disease models is indeed an open and important question. We hope our work can serve as a foundation to inspire further investigation in this direction. To facilitate such exploration, we have developed and open-sourced a MATLAB-based analysis tool that supports multiple quantitative ULM metrics for longitudinal comparison. We encourage users to adapt and extend this framework to evaluate different quantitative metrics.

      (Line 578) “Code availability

      To support quantitative longitudinal analysis of ULM data, we developed an open-source MATLAB application (https://github.com/ekerwang/ULMQuantitativeAnalysis). This tool is designed to facilitate ROI-based analysis of ULM images for longitudinal comparisons. It supports multiple quantification metrics, including but not limited to vessel area and mean velocity used in this study. Users can select and adapt different metrics based on their specific applications, as a wide range of ULM-based quantification metrics have been developed for different pathological and pharmacological studies.”

      Reviewer #2 (Recommendations For The Authors):

      Images in figure 4 lack color bars.

Response 08: Thank you for pointing this out. The color bars for the images in Figure 4 are the same as those used for the corresponding images in Figure 3. We have now added an explanation of the color bars to the caption of the revised Figure 4.

      Fig 4d: upward and downward are probably swapped.

      Response 09: Thank you for pointing this out, and we apologize for the oversight. They were mistakenly swapped. We have corrected this error in the revised figure.

      No quantitative conclusions are drawn regarding the changes in vessel diameter under anesthesia? Is it not significant? If it is not then why bring changes in diameter to our attention in fig 3 (white arrows) and figure 4b?

      Response 10: Our intention in highlighting diameter changes in Figure 3 (white arrows) and Figure 4b was to provide an illustrative example of isoflurane-induced diameter changes at the single-vessel level. These examples are meant to serve as case studies, not as the basis for broad statistical conclusions.

      In the initial version of the manuscript, we attempted to draw quantitative conclusions by measuring vessel diameters from ten manually selected vessel segments at each location. However, based on feedback from other reviewers, we decided to remove this analysis in the revised version. Manual selection of vessel segments is highly subjective and prone to bias, limiting its reliability for quantitative interpretation.

      Instead, we focused on ROI-based analysis using fractional vessel area (formerly referred to as vascularity), which reflects widespread changes in vessel diameter across regions. It is a more generalizable and less biased metric for quantifying vascular diameter changes.

      We further explained this in the Results section:

      (Line 181) “To validate the broader applicability of our findings, we conducted ROI-based analyses using fractional vessel area and mean velocity as primary metrics. These metrics extended the analysis of vessel diameter and flow velocity to entire brain regions or selected ROIs, which provides a more objective assessment of cerebral blood flow changes at a global scale and reduces the bias associated with manually selecting vessel segments. For vessel area measurements, the term fractional denotes that the vessel area is normalized to the total area of the selected ROI. This normalization is essential for fair comparisons across ROIs of different sizes.”

      Line 210 "In summary, statistical analysis revealed a decrease in individual vessel diameter" this does not seem to be supported by this version of the manuscript as no analysis is done on a representative group of vessels for the diameter.

      Response 11: Thank you for pointing out this important issue. In line with our previous response (Response 10), we would like to clarify that the analysis of individual vessel diameter was intended to serve as an example study, rather than a statistically supported conclusion based on a group of vessels. To avoid confusion, we have removed the phrase “statistical analysis revealed a decrease in individual vessel diameter” from the manuscript. 

The meaning of the *** in fig 6b and 6c should be clarified: (1) it is not explicitly stated, and (2) the interpretation of an equivalence test is less familiar than that of other tests.

      Response 12: We thank the reviewer for pointing out this important issue. We agree that the use of asterisks (***) in Fig. 6b and 6c may have led to confusion, as such markers are typically associated with statistical significance in difference testing. In our case, the analysis was based on the two one-sided test (TOST) procedure to assess statistical equivalence, which is indeed less commonly used and could be misinterpreted.

      To address this, we have replaced the asterisks *** in the figure with the label “equiv.”, which more clearly reflects the intended interpretation. Additionally, we have revised the figure caption and the main text to explicitly state that these markers denote statistical equivalence (not difference) as determined by TOST, with the equivalence margin defined as three times the standard deviation of one week.

(Figure 6 Caption) “Statistical analysis was performed using the two one-sided test (TOST) procedure to evaluate the consistency of measurements. The label “equiv.” indicates statistically equivalent measurements (p < 0.001), defined as inter-week differences smaller than three times the standard deviation of one week.”

      (Line 240) “Statistical testing of equivalence was conducted using the two one-sided test (TOST) procedure, which evaluates whether the difference between two time points falls within a predefined equivalence margin. Specifically, equivalence is defined as the inter-week difference being smaller than three times the standard deviation of one week. A statistically significant result in TOST (p < 0.001) supports the interpretation that the measurements are statistically equivalent, which is denoted as “equiv.” in the figures.”
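To make the TOST logic described above concrete, here is a minimal sketch (an illustration assuming a pooled-variance t-test; this is not the authors' analysis code, and the function name and interface are invented). Each one-sided test asks whether the mean difference clears one side of the equivalence margin, and equivalence is claimed only if both one-sided nulls are rejected:

```python
import numpy as np
from scipy import stats

def tost_equivalence(x, y, margin):
    """Two one-sided tests (TOST) for equivalence of two sample means.

    Returns the larger of the two one-sided p-values; equivalence is
    supported when this value falls below the chosen alpha. `margin`
    is the equivalence margin (e.g. 3x a reference standard deviation,
    as in the manuscript).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    diff = x.mean() - y.mean()
    # Pooled standard error, as in the classic equal-variance t-test.
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1 / nx + 1 / ny))
    df = nx + ny - 2
    # H0a: diff <= -margin (rejected for large t).
    p_lower = stats.t.sf((diff + margin) / se, df)
    # H0b: diff >= +margin (rejected for small t).
    p_upper = stats.t.cdf((diff - margin) / se, df)
    return max(p_lower, p_upper)
```

With near-identical weekly measurements and a margin well above the observed difference, the returned p-value is small, supporting an "equiv." label; with a mean shift larger than the margin, it approaches 1.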

Line 237 and following: please consider rephrasing into "To further generalize these findings and examine longitudinal variation in ROI-based analysis, we used Mouse 4 as an example to show the consistency of blood flow density across different flow directions in the cortex (Fig. 6d) and extended the quantitative analysis to all three mice (Fig. 6e) (individual ULM upward and downward flow images for all three mice over the three-week longitudinal study period can be found in Supplementary Fig. 4)." The paragraph will make much more sense.

      Response 13: We appreciate your helpful rephrasing. We have fully adopted your proposed revision to enhance the clarity and coherence of the text. The sentence now reads exactly as you recommended:

      (Line 250): “To further generalize these findings and examine longitudinal variation in ROI-based analysis, we used Mouse 4 as an example to show the consistency of blood flow density across different flow directions in the cortex (Fig. 6d) and extended the quantitative analysis to all three mice (Fig. 6e) (individual ULM upward and downward flow images for all three mice over the three-week longitudinal study period can be found in Supplementary Fig. 4).”

      Line 248: "While arterial and venous flow velocity distributions exhibit clear distinctions, their variations over the three weeks remained acceptable" the meaning of acceptable remains elusive.

      Response 14: Thank you for pointing out the ambiguity in the phrase “remained acceptable”. To improve clarity and precision, we have revised the sentence to provide a more informative description. The updated sentence now reads:

(Line 261) “While arterial and venous flow velocity distributions exhibit clear distinctions, the distribution shapes remained relatively consistent across the three weeks. Specifically, variations in median velocity were within 1 mm/s. In contrast, anesthesia-induced changes can lead to velocity shifts exceeding 1 mm/s.”

Line 253: consider rephrasing as "Despite subcortical regions showing the largest vascularity variability due to anesthesia-induced changes, vascularity in those regions remained relatively stable in the longitudinal study", as otherwise the link between the two parts of the sentence feels odd.

      Response 15: Thank you for your constructive suggestion regarding the logical flow of the sentence. We fully agree with your point and have revised the sentence exactly as you proposed.

(Line 268) “Despite subcortical regions showing the largest vascularity variability due to anesthesia-induced changes, vascularity in those regions remained relatively stable in the longitudinal study.”

    1. eLife Assessment

This important study investigates why the 13-lined ground squirrel (13LGS) retina is unusually rich in cone photoreceptors, the cells responsible for color and daylight vision. The authors perform deep transcriptomic and epigenetic comparisons between the mouse and the 13LGS to provide convincing evidence identifying mechanisms that drive rod- versus cone-rich retina development. Overall, this key question is investigated using an impressive collection of new data, cross-species analysis, and subsequent in vivo experiments. However, the functional analysis showing the sufficiency and necessity of Zic3 and Mef2C remains incomplete, and further analyses are needed to support the claim that these enhancers are newly evolved in 13LGS.

    2. Reviewer #1 (Public review):

      Summary:

      In this manuscript, Weir et al. investigate why the 13-lined ground squirrel (13LGS) retina is unusually rich in cone photoreceptors, the cells responsible for color and daylight vision. Most mammals, including humans, have rod-dominant retinas, making the 13LGS retina both an intriguing evolutionary divergence and a valuable model for uncovering novel mechanisms of cone generation. The developmental programs underlying this adaptation were previously unknown.

Using an integrated approach that combines single-cell RNA sequencing (scRNAseq), scATACseq, and histology, the authors generate a comprehensive atlas of retinal neurogenesis in 13LGS. Notably, comparative analyses with mouse datasets reveal that in 13LGS, cones can arise from late-stage neurogenic progenitors, a striking contrast to mouse and primate retinas, where late progenitors typically generate rods and other late-born cell types but not cones. They further identify a shift in the timing (heterochrony) of expression of several transcription factors and show that these factors act through species-specific regulatory elements. Overall, functional experiments support a role for several of these candidates in cone production.

      Strengths:

      This study stands out for its rigorous and multi-layered methodology. The combination of transcriptomic, epigenomic, and histological data yields a detailed and coherent view of cone development in 13LGS. Cross-species comparisons are thoughtfully executed, lending strong evolutionary context to the findings. The conclusions are, in general, well supported by the evidence, and the datasets generated represent a substantial resource for the field. The work will be of high value to both evolutionary neurobiology and regenerative medicine, particularly in the design of strategies to replace lost cone photoreceptors in human disease.

      Weaknesses:

      (1) Overall, the conclusions are strongly supported by the data, but the paper would benefit from additional clarifications. In particular, some of the conclusions could be toned down slightly to reflect that the observed changes in candidate gene function, such as those for Zic3 by itself, are modest and may represent part of a more complex regulatory network.

      (2) Additional explanations about the cell composition of the 13LGS retina are needed. The ratios between cone and rod are clearly detailed, but do those lead to changes in other cell types?

      (3) Could the lack of a clear trajectory for rod differentiation be just an effect of low cell numbers for this population?

      (4) The immunohistochemistry and RNA hybridization experiments shown in Figure S2 would benefit from supporting controls to strengthen their interpretability. While it has to be recognized that performing immunostainings on non-conventional species is not a simple task, negative controls are necessary to establish the baseline background levels, especially in cases where there seems to be labeling around the cells. The text indicates that these experiments are both immunostainings and ISH, but the figure legend only says "immunohistochemistry". Clarifying these points would improve readers' confidence in the data.

(5) Figure S3: The text claims that overexpression of Zic3 alone is sufficient to induce cone-like photoreceptor precursor cells as well as horizontal cell-like precursors, but this is not clear in Figure S3A or in any other figure. Similarly, the effects of Pou2f1 overexpression differ between Figure S3A and Figure S3B. In Figure S3B, the described effects (increased presence of cone-like and horizontal-like precursors) are very clear, whereas they are not in Figure S3A. How do these experiments differ?

      (6) The analyses of Zic3 conditional mutants (Figure S4) reveal an increase in many cone, rod, and pan-photoreceptor genes with only a reduction in some cone genes. Thus, the overall conclusion that Zic3 is essential for cones while repressing rod genes doesn't seem to match this particular dataset.

      (7) Throughout the text, the authors used the term "evolved". To substantiate this claim, it would be important to include sequence analyses or to rephrase to a more neutral term that does not imply evolutionary inference.

    3. Reviewer #2 (Public review):

      Summary:

      This paper aims to elucidate the gene regulatory network governing the development of cone photoreceptors, the light-sensing neurons responsible for high acuity and color vision in humans. The authors provide a comprehensive analysis through stage-matched comparisons of gene expression and chromatin accessibility using scRNA-seq and scATAC-seq from the cone-dominant 13-lined ground squirrel (13LGS) retina and the rod-dominant mouse retina. The abundance of cones in the 13LGS retina arises from a dominant trajectory from late retinal progenitor cells (RPCs) to photoreceptor precursors and then to cones, whereas only a small proportion of rods are generated from these precursors.

      Strengths:

      The paper presents intriguing insights into the gene regulatory network involved in 13LGS cone development. In particular, the authors highlight the expression of cone-promoting transcription factors such as Onecut2, Pou2f1, and Zic3 in late-stage neurogenic progenitors, which may be driven by 13LGS-specific cis-regulatory elements. The authors also characterize candidate cone-promoting genes Zic3 and Mef2C, which have been previously understudied. Overall, I found that the across-species analysis presented by this study is a useful resource for the field.

      Weaknesses:

      The functional analysis on Zic3 and Mef2C in mice does not convincingly establish that these factors are sufficient or necessary to promote cone photoreceptor specification. Several analyses lack clarity or consistency, and figure labeling and interpretation need improvement.

    4. Reviewer #3 (Public review):

      Summary:

      The authors perform deep transcriptomic and epigenetic comparisons between mouse and 13-lined ground squirrel (13LGS) to identify mechanisms that drive rod vs cone-rich retina development. Through cross-species analysis, the authors find extended cone generation in 13LGS, gene expression within progenitor/photoreceptor precursor cells consistent with a lengthened cone window, and differential regulatory element usage. Two of the transcription factors, Mef2c and Zic3, were subsequently validated using OE and KO mouse lines to verify the role of these genes in regulating competence to generate cone photoreceptors.

      Strengths:

      Overall, this is an impactful manuscript with broad implications toward our understanding of retinal development, cell fate specification, and TF network dynamics across evolution and with the potential to influence our future ability to treat vision loss in human patients. The generation of this rich new dataset profiling the transcriptome and epigenome of the 13LGS is a tremendous addition to the field that assuredly will be useful for numerous other investigations and questions of a variety of interests. In this manuscript, the authors use this dataset and compare it to data they previously generated for mouse retinal development to identify 2 new regulators of cone generation and shed insights into their regulation and their integration into the network of regulatory elements within the 13LGS compared to mouse.

      Weaknesses:

      (1) The authors chose to omit several cell classes from analyses and visualizations that would have added to their interpretations. In particular, I worry that the omission of 13LGS rods, early RPCs, and early NG from Figures 2C, D, and F is notable and would have added to the understanding of gene expression dynamics. In other words, (a) are these genes of interest unique to late RPCs or maintained from early RPCs, and (b) are rod networks suppressed compared to the mouse?

      (2) The authors claim that the majority of cones are generated by late RPCs and that this is driven primarily by the enriched enhancer network around cone-promoting genes. With the temporal scRNA/ATACseq data at their disposal, the authors should compare early vs late born cones and RPCs to determine whether the same enhancers and genes are hyperactivated in early RPCs as well as in the 13LGS. This analysis will answer the important question of whether the enhancers activated/evolved to promote all cones, or are only and specifically activated within late RPCs to drive cone genesis at the expense of rods.

      (3) The authors repeatedly use the term 'evolved' to describe the increased number of local enhancer elements of genes that increase in expression in 13LGS late RPCs and cones. Evolution can act at multiple levels on the genome and its regulation. The authors should consider analysis of sequence level changes between mouse, 13LGS, and other species to test whether the enhancer sequences claimed to be novel in the 13LGS are, in fact, newly evolved sequence/binding sites or if the binding sites are present in mouse but only used in late RPCs of the 13LGS.

(4) The authors state that 'Enhancer elements in 13LGS are predicted to be directly targeted by a considerably greater number of transcription factors than in mice'. This statement can easily be misread to suggest that all enhancers display this, when in fact it applies only to the cone-promoting enhancers of late 13LGS RPCs. In a way, this is not surprising, since these genes are largely less expressed in mouse vs 13LGS late RPCs, as shown in Figure 2. The manuscript is written to suggest that this mechanism of enhancer number is specific to cone production in the 13LGS; it would help prove this point if the authors asked the opposite question and showed that mouse late RPCs do not have a similarly increased predicted binding of TFs near rod-promoting genes in C7-8.

    1. eLife Assessment

This important study shows that calcium stores in the endoplasmic reticulum of the parasitic protozoan Toxoplasma gondii play a major role in buffering calcium levels in the cytosol as well as in other organelles such as the mitochondrion. Advanced imaging techniques, including the use of genetically encoded calcium indicators, provide compelling evidence for the role of the SERCA Ca2+-ATPase pump in regulating organellar calcium levels. However, it remains unclear whether intra-organellar calcium transport occurs via ER-mitochondria membrane contact sites or other mechanisms. This work will be of interest to cell and molecular biologists interested in calcium signalling in divergent eukaryotes.

    2. Reviewer #1 (Public review):

Li et al. investigate Ca2+ signaling in T. gondii and argue that Ca2+ tunnels through the ER to other organelles to fuel multiple aspects of T. gondii biology. They focus in particular on TgSERCA as the presumed primary mechanism for ER Ca2+ filling. However, when TgSERCA was knocked out, there was still a Ca2+ release in response to TG. Overall, the data support a model where the Ca2+ filling state of the ER modulates Ca2+ dynamics in other organelles.

      Comments on revisions:

      I thank the authors for their careful revisions and response to my comments, which have been addressed.

Regarding the most critical point of the paper, that is, Ca2+ transfer from the ER to other organelles, the authors in their rebuttal and in the revised manuscript argue that ER Ca2+ is critical to redistribute and replenish Ca2+ in other organelles in the cell. I agree with this conclusion and think it is best stated in the authors' response to point #7: "We propose that this leaked calcium is subsequently taken up by other intracellular compartments. This effect is observed immediately upon TG addition. However, pre-incubation with TG or knockdown of SERCA reduces calcium storage in the ER, thereby diminishing the transfer of calcium to other stores."

      In their rebuttal the authors particularly highlight experiments in Figures 1H-K, 4G-H, and 5H-K in support of this conclusion. The data in Fig 1H-K show that with TG there is increased Ca2+ release from acidic stores. In all cases TG results in a rise in cytoplasmic Ca2+ that could load the acidic stores. So under those conditions the increased acidic organelle Ca2+ is likely due to a preceding high cytosolic Ca2+ transient due to TG. The experiments in 4G-H and 5H-K are more convincing and supportive of an important role of ER Ca2+ to maintain Ca2+ levels in other organelles. Overall, and to avoid a detailed, lengthy discussion of every point, the data support a model where in the absence of SERCA activity ER Ca2+ is reduced as well as Ca2+ in other organelles. I think it would be helpful to present and discuss this finding throughout the manuscript as under physiological conditions ER Ca2+ is regularly mobilized for signaling and homeostasis and this maintains Ca2+ levels in other organelles. This is supported by the new experiment in Supp Fig. 2A.

    3. Author response:

      The following is the authors’ response to the original reviews

      Public Reviews:

      Reviewer #1 (Public review):

Li et al. investigate Ca2+ signaling in T. gondii and argue that Ca2+ tunnels through the ER to other organelles to fuel multiple aspects of T. gondii biology. They focus in particular on TgSERCA as the presumed primary mechanism for ER Ca2+ filling. However, when TgSERCA was knocked out, there was still a Ca2+ release in response to TG.

Note that we did not generate a complete SERCA knockout, as this gene is essential, and its complete loss would not permit the isolation of viable parasites. Instead, we created conditional mutants that downregulate the expression of SERCA. Importantly, some residual activity is present in the mutant after 24 h of ATc treatment as shown in Fig 4C. This is consistent with our Western blots, which demonstrate the presence of residual SERCA protein at 1, 1.5 and 2 days post ATc treatment (Fig. 3B). We have clarified this point in the revised manuscript (lines 232-233). See also lines 97-102.

      Overall, the Ca2+ signaling data do not support the conclusion of Ca2+ tunneling through the ER to other organelles; in fact, they argue for direct Ca2+ uptake from the cytosol. The authors show EM membrane contact sites between the ER and other organelles, so Ca2+ released by the ER could presumably be taken up by other organelles, but that is not ER Ca2+ tunneling. They clearly show that SERCA is required for T. gondii function.

      Overall, the data presented do not fully support the conclusions reached.

      We agree that the data does not support Ca<sup>2+</sup> tunneling as defined and characterized in mammalian cells. In response to this comment, we have modified the title and the text accordingly.

      However, we respectfully would like to emphasize that the study demonstrates more than just the role of SERCA in T. gondii “function”. Our findings reveal that the ER, through SERCA activity, sequesters calcium following influx through the PM (see Reviewer #2's comment). The ER calcium pool is important for replenishing other intracellular compartments.

      The experiments support a model in which the ER actively takes up cytosolic Ca²⁺ as it enters the parasite and contributes to intracellular Ca²⁺ redistribution during transitions between distinct extracellular calcium environments. We believe that the role of the ER in modulating intracellular calcium dynamics is demonstrated in Figures 1H–K, 4G-H, and 5H–K. To highlight the relevance of these findings, we have included an expanded discussion in the revised manuscript. See lines 443-449 and 510-522.

      Data argue for direct Ca2+ uptake from the cytosol

      The ER most likely takes up calcium from the cytosol following its entry through the PM and redistributes it to the other organelles. We deleted any mention of the word “tunneling” and replaced it with transfer and re-distribution as they reflect our experimental findings more accurately.

      We interpret the experiments shown in Figure 1H and I as re-distribution because the amounts of calcium released after nigericin or GPN are greatly enhanced after TG addition. We first add calcium to allow intracellular stores to become filled, followed by the addition of TG, which allows calcium leakage from the ER. This leaked calcium can either enter the cytosol and be pumped out or be taken up by other organelles. Our interpretation is that this process leads to an increased calcium content in acidic compartments.

      We conducted an additional experiment in which SERCA was inhibited prior to calcium addition, allowing cytosolic calcium to be exported or taken up by acidic stores. We observed a change in the GPN response (Fig. S2A), possibly indicating that the PLVAC can sequester calcium when SERCA is inactive. While this may support the reviewer’s view, TG treatment does not reflect physiological conditions and may enhance calcium transfer to other compartments. Although the result is interesting, interpretation is complicated by the use of parasites in suspension and drug exposure in solution. Single-parasite measurements are not feasible due to weak signals, and adhered parasites are even less physiological than those in suspension.

      In support of our view, the experiments shown in Figs 4G and H show that downregulating SERCA significantly reduces the response to GPN, indicating diminished acidic store loading. In Fig 5I we observe that mitochondrial calcium uptake is reduced in the iDSERCA (+ATc) mutant in response to GPN. Fig 2B demonstrates that TgSERCA can take up calcium at 55 nM, close to resting cytosolic calcium, while in Figures 5E and S5B we show that the mitochondrion is not responsive to an increase of cytosolic calcium. Uptake by the mitochondria requires much higher concentrations (Fig 5B-C), which may be achieved within microdomains at MCS between the ER and mitochondrion. This is also consistent with findings reported by Li et al (Nat Commun. 2021), where similar microdomain-mediated transfer of calcium to the apicoplast was observed (Fig. 7E and F of that reference).

      Reviewer #2 (Public review):

      This study examines the role of the endoplasmic reticulum (ER) calcium pump TgSERCA in sequestering and redistributing calcium to other intracellular organelles following influx at the plasma membrane.

      The fact that T. gondii transitions through life cycle stages within and outside host cells, with very different exposures to calcium, adds significance to the current investigation of the role of the ER in redistributing calcium following exposure to physiological levels of extracellular calcium.

      They also use a conditional knockout of TgSERCA to investigate its role in ER calcium store-filling and the ability of other subcellular organelles to sequester and release calcium. These knockout experiments provide important evidence that ER calcium uptake plays a significant role in maintaining the filling state of other intracellular compartments.

      We thank the reviewer.

      While it is clearly demonstrated, and not surprising, that the addition of 1.8 mM extracellular CaCl2 to intact T. gondii parasites preincubated with EGTA leads to an increase in cytosolic calcium and subsequent enhanced loading of the ER and other intracellular compartments, there is a caveat to the quantitation of these increases in calcium loading. The authors rely on the amplitude of cytosolic free calcium increases in response to thapsigargin, GPN, nigericin, and CCCP, all measured with fura2. This likely overestimates the changes in calcium pool sizes because the buffering of free calcium in the cytosol is nonlinear, and fura2 (with a Kd of 100-200 nM) is a substantial, if not predominant, cytosolic calcium buffer. Indeed, the increases in signal noise at higher cytosolic calcium levels (e.g. peak calcium in Figure 1C) are indicative of fura2 ratio calculations approaching saturation of the indicator dye.

      We acknowledge the limitations associated with using Fura-2 for cytosolic calcium measurements. However, according to the literature (Grynkiewicz, G. et al. (1985). J. Biol. Chem. 260 (6): 3440–3450. PMID 3838314), Fura-2 is suited for measurements between 100 nM and 1 µM calcium. The responses in our experiments were within that range, and the experiments with the SERCA mutant and mitochondrial GCaMPfs support the conclusions of our work.

      However, we agree with the reviewer that the experiment shown in Fig 1C (now Fig 1D) presents a response that approaches the limit of the linear range of Fura-2. In response to this, we have replaced this panel with a more representative experiment that remains within the linear range of the indicator (revised Fig 1D). Additionally, we have included new experiments adding GPN along with corresponding quantifications, which further support our conclusions regarding calcium dynamics in the parasite.
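      For context, the calibration underlying these Fura-2 measurements is the standard ratio equation from the Grynkiewicz et al. (1985) reference cited above, which makes the saturation behavior both parties discuss explicit:

      ```latex
      % Grynkiewicz et al. (1985) calibration for ratiometric Fura-2:
      %   R      = F_{340}/F_{380}, the measured fluorescence ratio
      %   R_min  = ratio at zero Ca2+;  R_max = ratio at saturating Ca2+
      %   S_f/S_b = F_380 signal at zero vs. saturating Ca2+
      [\mathrm{Ca^{2+}}] \;=\; K_d \cdot \frac{R - R_{\min}}{R_{\max} - R} \cdot \frac{S_f}{S_b}
      ```

      As $R$ approaches $R_{\max}$, the denominator $(R_{\max} - R)$ approaches zero, so small noise in the measured ratio produces large errors in the calculated Ca2+ concentration. This is consistent with both the reviewer's observation of increased signal noise near peak calcium and the replacement of the panel with an experiment within the indicator's linear range.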

      Another caveat, not addressed, is that loading of fura2/AM can result in compartmentalized fura2, which might modify free calcium levels and calcium storage capacity in intracellular organelles.

      We are aware of the potential issue of Fura-2 compartmentalization, and our protocol was designed to minimize this effect. We load cells with Fura-2 for 26 min at room temperature, then maintain them on ice, and restrict the use of loaded parasites to 2-3 hours. We have observed evidence of compartmentalization as this is reflected in increasing concentrations of resting calcium with time. We carry out experiments within a time frame in which the resting calcium stays within the 100 nM range. We have included a sentence in the Materials and Methods section. Lines 604-606.

      Additionally, following this reviewer’s suggestion, we performed further experiments to directly assess compartmentalization. See below the full response to reviewer 2.

      The finding that the SERCA inhibitor cyclopiazonic acid (CPA) only mobilizes a fraction of the thapsigargin-sensitive calcium stores in T. gondii coincides with previously published work in another apicomplexan parasite, P. falciparum, showing that thapsigargin mobilizes calcium from both CPA-sensitive and CPA-insensitive calcium pools (Borges-Pereira et al., 2020, DOI: 10.1074/jbc.RA120.014906). It would be valuable to determine whether this reflects the off-target effects of thapsigargin or the differential sensitivity of TgSERCA to the two inhibitors.

      This is an interesting observation, and we now include a discussion of this result considering the Plasmodium study and include the citation. Lines 436-442.

      Figure S1 suggests differential sensitivity, and it shows that thapsigargin mobilizes calcium from both CPA-sensitive and CPA-insensitive calcium pools in T. gondii. Also important is that we used 1 µM TG as we are aware that TG has shown off-target effects at higher concentrations. TG is a well-characterized, irreversible SERCA inhibitor that ensures complete and sustained inhibition of SERCA activity. In contrast, CPA is a reversible inhibitor whose effectiveness is influenced by ATP levels, and it may only partially inhibit SERCA or dissociate over time, allowing residual Ca²⁺ reuptake into the ER.

      Additionally, as suggested by the reviewer we performed experiments using the Mag-Fluo-4 protocol to compare the inhibitory effects of CPA and TG. These results are presented in Fig. S3 (Lines 217-223). Under the conditions of the Mag-Fluo-4 assay with digitonin-permeabilized cells, both TG and CPA showed similar rates of Ca<sup>2+</sup> leakage following the addition of the inhibitor. This may indicate that under the conditions of the Mag-Fluo-4 experiments the rate of Ca<sup>2+</sup> leak is mostly determined by the intrinsic leak mechanism and not by the nature of the inhibitor. By contrast, in intact Fura-2–loaded cells, CPA induces a smaller cytosolic Ca²⁺ increase than TG, consistent with less efficient SERCA inhibition likely due to its reversibility and possibly incomplete inhibition under cellular conditions.

      The authors interpret the residual calcium mobilization response to Zaprinast observed after ATc knockdown of TgSERCA (Figures 4E, 4F) as indicative of a target calcium pool in addition to the ER. While this may well be correct, it appears from the description of this experiment that it was carried out using the same conditions as Figure 4A where TgSERCA activity was only reduced by about 50%.

      We partially agree with the reviewer that 50% knockdown of TgSERCA means that the ER may still be targeted by zaprinast, and that there is no definitive evidence of the involvement of another calcium pool. The Mag-Fluo-4 experiment, while we acknowledge that the fluorescence of Mag-Fluo-4 is not linear with calcium, indicates that SERCA activity is present even after 24 hr of ATc treatment. However, when Zaprinast is added after TG, we observed a significant calcium release in wild type cells. This result suggests the presence of a large calcium pool other than the one mobilized by TG (PMID: 2693306).

      We recently published work describing the Golgi as a calcium store in Toxoplasma (PMID: 40043955), and we showed in Fig. S4 D-G of that work that GPN treatment of tachyzoites loaded with Fura-2 diminished the Zaprinast response, indicating that both agents could be acting on a similar store. In the present study we performed additional experiments in which TG was followed by GPN and Zaprinast, showing a similar pattern: GPN significantly diminished the Zaprinast response. These results are now shown in Figure S2B. We address these possibilities in the discussion and interpretation of the result. Lines 451-460.

      The data in Figures 4A vs 4G and Figures 4B vs 4H indicate that the size of the response to GPN is similar to that with thapsigargin in both the presence and absence of extracellular calcium. This raises the question of whether GPN is only releasing calcium from acidic compartments or whether it acts on the ER calcium stores, as previously suggested by Atakpa et al. 2019 DOI: 10.1242/jcs.223883. Nonetheless, Figure 1H shows that there is a robust calcium response to GPN after the addition of thapsigargin.

      The results of the indicated experiments do not exclude the possibility that GPN can also mobilize some calcium from the ER besides acidic organelles. However, we do not have any evidence that GPN can mobilize calcium from the ER either. Based on our unpublished work, we think GPN mainly releases calcium from the PLVAC. We included the mentioned citation and discuss the result considering the possibility that GPN may be acting on more than one store. Lines 451-460.

      An important advance in the current work is the use of state-of-the-art approaches with targeted genetically encoded calcium indicators (GECIs) to monitor calcium in important subcellular compartments. The authors have previously done this with the apicoplast, but now add the mitochondria to their repertoire. Despite the absence of a canonical mitochondrial calcium uniporter (MCU) in the Toxoplasma genome, the authors demonstrate the ability of T. gondii mitochondria to accumulate calcium, albeit at high calcium concentrations. Although the calcium concentrations here are higher than needed for mammalian mitochondrial calcium uptake, there too calcium uptake requires calcium levels higher than those typically attained in the bulk cytosolic compartment. And just like in mammalian mitochondria, the current work shows that ER calcium release can elicit mitochondrial calcium loading even when other sources of elevated cytosolic calcium are ineffective, suggesting a role for ER-mitochondrial membrane contact sites. With these new tools in hand, it will be of great value to elucidate the bioenergetics and transport pathways associated with mitochondrial calcium accumulation in T. gondii.

      We thank this reviewer for praising our work. Studies of the bioenergetics and transport pathways associated with mitochondrial calcium accumulation are part of our future plans, mentioned in lines 520-522 and 545.

      The current studies of calcium pools and their interactions with the ER and dependence on SERCA activity in T. gondii are complemented by super-resolution microscopy and electron microscopy that do indeed demonstrate the presence of close appositions between the ER and other organelles (see also videos). Thus, the work presented provides good evidence for the ER acting as the orchestrating organelle delivering calcium to other subcellular compartments through contact sites in T. gondii, as has become increasingly clear from work in other organisms.

      Thank you.

      Reviewer #3 (Public review):

      This manuscript describes an investigation of how intracellular calcium stores are regulated and provides evidence that is in line with the role of the SERCA-Ca2+ATPase in this important homeostasis pathway. Calcium uptake by mitochondria is further investigated and the authors suggest that ER-mitochondria membrane contact sites may be involved in mediating this, as demonstrated in other organisms.

      The significance of the findings is in shedding light on key elements within the mechanism of calcium storage and regulation/homeostasis in the medically important parasite Toxoplasma gondii whose ability to infect and cause disease critically relies on calcium signalling. An important strength is that despite its importance, calcium homeostasis in Toxoplasma is understudied and not well understood.

      We agree with the reviewer. Thank you.

      A difficulty in the field, and a weakness of the work, is that following calcium in the cell is technically challenging and thus requires reliance on artificial conditions. In this context, the main weakness of the manuscript is the extrapolation of data. The language used could be more careful, especially considering that the way to measure the ER calcium is highly artificial - for example utilising permeabilization and over-loading the experiment with calcium. Measures are also indirect - for example, when the response to ionomycin treatment was not fully in line with the suggested model the authors hypothesise that the result is likely affected by other storage, but there is no direct support for that.

      The Mag-Fluo-4-based protocol for measuring intraluminal calcium is well established and has been extensively used in mammalian cells, DT40 cells and other cells for measuring intraluminal calcium, activity of SERCA and response to IP3 (Some examples: PMID: 32179239, PMID: 15963563, PMID: 19668195, PMID: 30185837, PMID: 19920131).

      Furthermore, we have successfully employed this protocol in previous work, including the characterization of the Trypanosoma brucei IP3R (PMID: 23319604) and the assessment of SERCA activity in Toxoplasma (PMID: 40043955 and 34608145). The citation PMID: 32179239 provides a detailed description of the protocol, including references to its prior use. In addition, the schematic at the top of Figure 2 summarizes the experimental workflow, reinforcing that the protocol follows established methodologies. We included more references and an expanded discussion, lines 425-435.

      We respectfully disagree with the concern regarding potential calcium overloading. The cells used in our assays were permeabilized, a critical step that allows us to precisely control calcium concentrations. All experiments were conducted at 220 nM free calcium, a concentration within the physiological range of cytosolic calcium fluctuations. This concentration was consistently used across all studies described above. Importantly, permeabilization ensures that the dye present in the cytosol becomes diluted and allows MgATP (which cannot cross intact membranes) to access the ER membrane, in addition to making it possible to expose the ER to precise calcium concentrations.

      The Mag-Fluo-4 loading conditions are designed to allow compartmentalization of the indicator to all intracellular compartments. The calcium uptake stimulated by MgATP occurs exclusively in the compartment occupied by SERCA, as only SERCA is responsive to MgATP-dependent transport in this experimental setup.

      Regarding the use of IO, we would like to clarify that its broad-spectrum activity is well documented. As a calcium ionophore, IO facilitates calcium release across multiple membranes, not just the ER, leading to a more substantial calcium release compared to the more selective effect of TG. The results observed with IO were consistent with this expected broader activity and support our interpretation.

      Lastly, we emphasize that the experiment in Figure 2 was designed specifically to assess SERCA activity in situ under defined conditions. It was not intended to provide a comprehensive characterization of the role of TgSERCA in the parasite. We now clarify this distinction in the revised Discussion lines 425-435.

      Below we provide some suggestions to improve controls, however, even with those included, we would still be in favour of revising the language and trying to avoid making strong and definitive conclusions. For example, in the discussion perhaps replace "showed" with "provide evidence that are consistent with..."; replace or remove words like "efficiently" and "impressive"; revise the definitive language used in the last few lines of the abstract (lines 13-17); etc. Importantly we recommend reconsidering whether the data is sufficiently direct and unambiguous to justify the model proposed in Figure 7 (we are in favour of removing this figure at this early point of our understanding of the calcium dynamic between organelles in Toxoplasma).

      We thank the reviewer for the suggestions, and we modified the language as suggested. We limited the use of the word "showed" to references to previously published work and deleted the other words.

      Figure 7 is intended as a conceptual model to summarize our proposed pathways, and, like all models, it represents a working hypothesis that may not fully capture the complexity of calcium dynamics in the parasite. In light of the reviewer’s comments, we revised the figure and legend to clearly distinguish between pathways for which there is experimental evidence from those that are hypothetical.

      Another important weakness is poor referencing of previous work in the field. Lines 248-250 read almost as if the authors originally hypothesised the idea that calcium is shuttled between ER and mitochondria via membrane contact sites (MCS) - but there is extensive literature on other eukaryotes which should be first cited and discussed in this context. Likewise, the discussion of MCS in Toxoplasma does not include the body of work already published on this parasite by several groups. It is informative to discuss observations in light of what is already known.

      The sentence in which we state the hypothesis about the calcium transfer refers specifically to Toxoplasma. To clarify this, we have now added the phrase “In mammalian cells” (line 311) and included additional citations, as suggested by the reviewer. While only a few studies have described membrane contact sites (MCSs) in Toxoplasma, we do cite several pertinent articles (e.g., lines 479-486). We believe that we cited all articles mentioning MCS in T. gondii.

      However, we must clarify to the reviewer that the primary focus of our study is not to characterize or confirm the presence of MCSs in T. gondii, but rather to demonstrate functional calcium transfer between the ER and mitochondria. Our data support the conclusion that this transfer requires close apposition of these organelles, consistent with the presence of MCSs.

      Recommendations for the authors:

      Reviewer #1 (Recommendations for the authors):

      (1) Line 45: change influx to release, as Ca2+ influx usually refers to Ca2+ entry from the extracellular space. Same for line 71.

      Corrected, lines 47 and 73.

      (2) Line 54: consider toning down the strong statement of 'widely' accepted as ER Ca2+ subdomain heterogeneity remains somewhat debated.

      Changed the sentence to “it has been proposed” (line 56).

      (3) Line 119-21: A lower release in response to TG is typical and does not reflect TG specificity for SERCA. It is due to the slow kinetics of Ca2+ leak out of the ER, allowing other buffering and transport mechanisms to act. It could also be a reflection of the duration after TG treatment needed to allow complete store depletion. Figure S1A-B shows that there is still Ca2+ in the stores following TG, but the TG signal does not go back to baseline, arguing that the leak is still active. Hence the current data do not address the specificity of TG for TgSERCA. Please revise the statement accordingly.

      Thank you for the suggestion; we changed the sentence to this: “This result could reflect the slow kinetics of Ca²⁺ leak from the ER, allowing other buffering and transport mechanisms to mitigate the phenomenon. Alternatively, it may indicate the duration after TG treatment allowing time to complete store depletion. As shown in Figure S1A-B, residual Ca²⁺ remains in the stores after TG treatment, and the TG-induced phenomenon does not return to baseline, suggesting that the leak remains active”. Lines 124-128

      (4) Figure 1C: the authors interpret the data 'This Ca2+ influx appeared to be immediately taken up by the ER as the response to TG was much greater in parasites previously exposed to extracellular Ca2+'. I don't understand this interpretation; in Ca2+-containing solution it would be expected to have a larger signal, as TG is likely to activate store-operated Ca2+ entry, which would contribute to a larger cytosolic Ca2+ transient. Does T. gondii have SOCE? It cannot be uptake into the ER as SERCA is blocked. Unless the authors are arguing for another ER Ca2+ uptake pathway? But why would Ca2+ uptake into the ER lower the signal, when the data show an increased signal?

      We pre-incubated the suspension with calcium to allow filling of the stores while SERCA is still active, and added thapsigargin (TG) at 400 seconds to measure calcium release. The experiment was designed to introduce the concept that the ER may have access to extracellular calcium, a phenomenon not yet clearly demonstrated in Toxoplasma. We did not expect less release by TG, but if the ER were not efficient at filling after extracellular calcium entry, a similar response to TG would be expected. Yes, it is very possible that when we add TG we are also seeing more calcium entry through the PM, as we previously proposed that increased cytosolic Ca<sup>2+</sup> may regulate Ca<sup>2+</sup> entry. However, the evidence does not support that this increased entry would be triggered by store depletion. The experiments with the SERCA mutant (Fig. 4D) show that in the conditional knockout mutant the ER is partially depleted, yet this does not lead to enhanced calcium entry, suggesting that depletion alone is not sufficient to trigger increased influx.

      There is no experimental evidence supporting the regulation of calcium entry by store depletion in Toxoplasma (PMID: 24867952). We revised the text to clarify this point and expanded the discussion on store-operated calcium entry (SOCE). While it is possible that a channel similar to Orai exists in Toxoplasma, it is highly unlikely to be regulated by store depletion, as there is no gene homologous to STIM. If store-regulated calcium entry does occur in Toxoplasma, it is likely mediated through a different, still unidentified, mechanism. Lines 461-467.

      (5) The choice of adding Ca2+ first followed by TG is curious, as it is more difficult to interpret. It would be more informative to add TG, allow the leak to complete, and then add Ca2+, which would allow temporal separation between Ca2+ release from stores and Ca2+ influx from the extracellular space. Was this experiment done? If not, it would be useful to have the data.

      Yes, this experiment was already published: PMID: 24867952 and PMID: 38382669.

      It mainly highlighted that increased cytosolic calcium may regulate calcium entry most likely through a TRP channel. See our response to point 4 and the description of the new Fig. S2 in the response to point 7.

      (6) Line 136-39: these experiments as designed - partly because of the issues discussed above - do not address the ability of organelles to access extracellular Ca2+ or the state of refilling of intracellular Ca2+ stores. They can simply be interpreted as the different agents (TG, Nig, GPN, CCCP) inducing various levels of Ca2+ influx.

      Concerning TG, the experiment shown in Fig. 4D shows that depletion of the ER calcium does not result in stimulation of calcium entry, indicating the absence of classical SOCE activation in Toxoplasma.

      To our knowledge, neither mitochondria nor lysosomes (or other acidic compartments) are capable of triggering classical SOCE in mammalian cells.

      Given that the ER in Toxoplasma lacks the canonical components required to initiate SOCE, it is unclear why the mitochondria or acidic compartments would be able to do so. While it is possible that T. gondii utilizes an alternative mechanism for store-operated calcium entry, investigating such a pathway would require a comprehensive study. In mammalian systems, it took almost 15 years and the efforts of multiple research groups to identify the molecular components of SOCE. Expecting this complex question to be resolved within the scope of a single study is unrealistic.

      Our current data show that the mitochondrion is unable to access calcium from the cytosol, as shown in Figure 5E. Performing a similar experiment for the PLVAC would be ideal; however, expression of fluorescent calcium indicators in this organelle has not been successful. This is likely due to the presence of several proteases that degrade expressed proteins, as well as the acidic environment, which quenches fluorescence. These challenges have made studying calcium dynamics in the PLVAC particularly difficult.

      To address the reviewer’s comment, we performed an additional experiment presented in Fig. S2A. In this experiment, we first inhibited SERCA with thapsigargin (TG), preventing calcium uptake into the ER, and subsequently added calcium to the suspension. Under these conditions, calcium cannot be sequestered by the ER. We then applied GPN and quantified the response, comparing it to a similar experimental condition without TG. Indeed, under these conditions, we observed a significant but modest increase in the GPN-induced response, suggesting that the PLVAC may be capable of directly taking up calcium from the cytosol. However, this occurs under SERCA inhibition, which creates non-physiological conditions with elevated cytosolic calcium levels; in addition, the presence of TG may promote additional ER leakage. Both of these could artificially enhance PLVAC uptake. Under physiological conditions, with functional SERCA activity, the ER would likely sequester cytosolic calcium more efficiently, thereby limiting calcium availability for direct PLVAC uptake. Thus, while the result is intriguing, it may not reflect calcium handling under normal cellular conditions. See lines 172-178.

      (7) Figure 1H-I: I disagree with the authors' interpretation of the results (lines 144-153). The data argue that by blocking ER Ca2+ uptake by TG, other organelles take up Ca2+ from the cytosol where it accumulates due to the leak and Ca2+ influx as is evident from the data allowing more release. The data does not argue for ER Ca2+ tunneling to other organelles. Tunneling would be reduced in the presence of TG (see PMID: 30046136, 24867608).

      We partially agree with this concern. In our experiments, TG was used to inhibit SERCA and block calcium uptake into the ER, allowing calcium to leak into the cytosol. We propose that this leaked calcium is subsequently taken up by other intracellular compartments. This effect is observed immediately upon TG addition. However, pre-incubation with TG or knockdown of SERCA reduces calcium storage in the ER, thereby diminishing the transfer of calcium to other stores.

      To further support our claim, we performed additional experiments in the absence of extracellular calcium, now presented in Figure 1J-K. We observed that calcium release triggered by GPN or nigericin was significantly enhanced when both agents were added after TG. These results suggest that calcium initially released from the ER can be sequestered by other compartments. As mentioned, we deleted any mention of “tunneling,” but we believe the data support the occurrence of calcium transfer. New results described in lines 166-171.

      The experiment in Fig S2A described in the response to (6) also addresses this concern. Under physiological conditions with functional SERCA, cytosolic calcium would likely be rapidly sequestered by the ER, limiting its availability to other compartments. See lines 172-178.

      (8) Line 175: SERCA-dependent Ca2+ uptake is higher at 880 nM as would be expected, yet the authors state that it's optimal at 220 nM Ca2+?

      Yes, it is true that the SERCA-dependent Ca<sup>2+</sup> uptake rate is higher at elevated Ca²⁺ concentrations. We chose to use 220 nM free calcium for several reasons: 1) this concentration is close to physiological fluctuations of cytosolic calcium levels; 2) it is commonly used in studies of mammalian SERCA; and 3) calcium uptake is readily detectable at this level. While this may not represent the maximal activity conditions for SERCA, we believe it is a reasonable and physiologically relevant choice for assessing SERCA-dependent calcium transport activity. We added a sentence to the results explaining this reasoning (lines 204-207) and deleted the word "optimal".

      (9) Figure 3H: the saponin egress data support the conclusion that organelles take up cytosolic Ca2+ directly without the need for ER tunneling.

      The saponin concentration used permeabilizes the host cell membrane, allowing the intracellular tachyzoite to be surrounded by the added, higher extracellular calcium concentration. This saponin concentration does not affect the tachyzoite membrane, as the parasite is still moving and calcium oscillations were clearly seen under similar conditions (PMID: 26374900). The resulting calcium increase in the tachyzoite cytosol is what stimulates parasite motility and egress. Since SERCA activity is reduced in the mutant, cytosolic calcium accumulates more rapidly, reaching the threshold for egress sooner and thereby accelerating parasite exit. The ionomycin response argues against a contribution from the other stores: egress is diminished in the mutant, likely because its calcium stores are depleted. We added an explanation in the results (lines 262-269) and the discussion (lines 532-539).
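      To make the threshold logic explicit, here is a minimal toy model (not our actual analysis; all parameter values are hypothetical): with a constant calcium influx, reducing SERCA uptake increases the net cytosolic accumulation rate, so the egress threshold is reached sooner.

```python
def time_to_egress(influx, serca_uptake, threshold):
    """Time for cytosolic Ca2+ to reach the egress threshold,
    assuming a constant net accumulation rate (arbitrary units)."""
    net_rate = influx - serca_uptake
    if net_rate <= 0:
        return float("inf")  # SERCA keeps up; threshold never reached
    return threshold / net_rate

# Hypothetical parameters: same influx, reduced uptake in the SERCA mutant
t_parental = time_to_egress(influx=10.0, serca_uptake=6.0, threshold=100.0)
t_mutant = time_to_egress(influx=10.0, serca_uptake=2.0, threshold=100.0)
assert t_mutant < t_parental  # the mutant egresses sooner
```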

      (10) Figure S2: the HA and SERCA signals do not match perfectly? Could this reflect issues with HA tagging, potentially off-target effects? Was this tested?

      These are not off-target effects, as we did not observe them in control cells lacking the HA tag. The HA signal also disappeared after treatment with ATc, further confirming that the IFA signal is specific. We agree with the reviewer that the signals do not align perfectly. This discrepancy could be due to differences in antibody accessibility or to the fact that the two antibodies recognize different regions of the protein. We added a sentence about this in the results, lines 240-243.

      Reviewer #2 (Recommendations for the authors):

      The description of the data of Figures 1B and S1A starting on line 108 would be easier to follow if Figure S1A was actually incorporated into Figure 1. It is not clear why these two complementary experiments were separated since they are both equally important in understanding and interpreting the data.

      We re-arranged figure 1 and incorporated S1A now as Fig 1C.

      As noted in the public comments, loading of fura2/AM can result in compartmentalized fura2, which can contaminate the cytosolic calcium measurements and might modify free calcium levels and calcium storage capacity in intracellular organelles. This can be assessed using the digitonin permeabilization method used in the MagFluo4 measurements, but in this case, detecting the fura2 signal remaining after cell permeabilization.

      As suggested by the reviewer, we measured Fura-2 compartmentalization by permeabilizing cells with digitonin, as we do for the Mag-Fluo-4 measurements; the fluorescence was reduced almost completely and was unresponsive to any additions (see Author response image 1).

      Author response image 1.

      T. gondii tachyzoites in suspension exposed to thapsigargin, calcium, and GPN. The dashed lines show experiments using the same conditions, but with parasites permeabilized with digitonin to release the cytosolic Fura-2. Part B shows a similar experiment with parasites exposed to MgATP.
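      The degree of dye compartmentalization can be summarized as the fraction of fluorescence remaining after digitonin releases the cytosolic dye. A minimal sketch of this calculation (the function name and trace values are illustrative, not our data):

```python
import numpy as np

def compartmentalized_fraction(f_before, f_after_digitonin):
    """Fraction of the dye signal remaining after plasma-membrane
    permeabilization; the residual signal reflects compartmentalized dye.
    Inputs are fluorescence traces sampled just before and just after
    digitonin addition; each baseline is taken as the trace mean."""
    return float(np.mean(f_after_digitonin) / np.mean(f_before))

# Hypothetical traces (arbitrary units): near-complete loss after digitonin
pre = np.array([100.0, 101.0, 99.0])
post = np.array([4.0, 5.0, 3.0])
assert compartmentalized_fraction(pre, post) < 0.05
```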

      Following the public comment regarding the residual calcium mobilization response to Zaprinast observed after 24 h ATc knockdown of SERCA (Figures 4E, 4F, as explained in the legend to Figure 4), was there still a response to Zaprinast after 48 h knockdown, where the thapsigargin response was apparently fully ablated?

      Unfortunately, we were unable to perform this experiment as it is not possible to obtain sufficient cells at 48 h with ATc. Due to the essential role of TgSERCA, parasites are unable to replicate after 24 h.

      As noted in the public comments, the data in Figure 4A vs 4G and Figure 4B vs 4H appear to show that the calcium responses to GPN are similar to that with thapsigargin, which seems unexpected if the acidic compartment is loaded from the ER. The results with GPN addition after thapsigargin (Figure 1H) argue against this, but the authors should still cite the work of Atakpa et al.

      We think that the reviewer is concerned that GPN may also be acting on the ER. This is a possibility that we considered, and we now include the suggested citation (line 457). However, we believe that it is difficult to directly compare the responses, as the kinetics of calcium release from the ER may differ from those of release from the PLVAC. This could be due to differences in the calcium buffering capacity of the two compartments. Additionally, it is possible that calcium leaked from the ER is more efficiently sequestered by other stores or extruded through the plasma membrane than calcium released from the PLVAC. In addition, GPN is known to have a more disruptive effect on membranes than TG, which may also influence the responses. As noted by the reviewer, Figure 1H also supports the idea that the acidic compartment is loaded from the ER.

      The abbreviation for the plant-like vacuolar compartment (PLVAC) only appears in a figure legend but should be defined in the main text on first use.

      Corrected, lines 140-143.

      The authors should cite the previous study of Borges-Pereira et al., 2020 (PMID: 32848018) that also demonstrates the incomplete overlap of the calcium pools mobilized by thapsigargin and CPA in P. falciparum. The ability to measure calcium in intracellular stores using MagFluo4 opens the possibility to further investigate this discrepancy between CPA and thapsigargin, but CPA does not appear to have been used in the permeabilized cell experiments with MagFluo4. I would suggest that this could be added to Figure 2 and/or Figure 4, or at least as a supplementary figure.

      In response to this reviewer’s critique we performed additional experiments with Mag-Fluo-4-loaded parasites. These are presented in the new Figure S3. We added CPA and TG, alone and in combination, to inhibit SERCA and allow calcium leak from the loaded organelle. Under these conditions, we observed a very similar leak rate after the addition of the inhibitors, as measured by the slope of the Ca<sup>2+</sup> leak. We believe that the leak rate is most likely determined by the intrinsic ER mechanism. See the discussion of this result in lines 436-442 and the previous response to the same reviewer comment.
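      For clarity, the leak rate here is the slope of a linear fit to the luminal indicator signal after SERCA inhibition; a sketch of that calculation on a synthetic trace (illustrative only, not our data):

```python
import numpy as np

def leak_rate(time_s, fluorescence):
    """Ca2+ leak rate estimated as the slope of a linear fit to the
    luminal indicator signal after SERCA inhibition (signal units/s)."""
    slope, _intercept = np.polyfit(time_s, fluorescence, 1)
    return slope

# Synthetic noiseless trace decaying at 2.5 units/s after inhibitor addition
t = np.arange(0.0, 60.0, 1.0)
f = 1000.0 - 2.5 * t
assert abs(leak_rate(t, f) + 2.5) < 1e-6
```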

      Reviewer #3 (Recommendations for the authors):

      Suggestions for improved or additional experiments, data, or analyses

      (1) Figure 1A is not mentioned in the main text even though it is discussed.

      Corrected

      (2) Figure 1G: Values do not match, how can GPN be so high?

      These figures were replaced by new traces and individual quantification analyses for each experiment.

      (3) Figure 1H and I: Is this type of data/results also available for the mitochondrion?

      Unfortunately, we were not able to include this experiment because we were unable to accurately quantify mitochondrial calcium release. Instead, we used mitochondrial GECIs to study mitochondrial calcium uptake; the results are shown in Figure 5.

      (4) Figure 1H: where does the calcium go after GPN addition? Taken up by another calcium store?

      Most likely, calcium is extruded through the plasma membrane by the activity of the calcium ATPase TgA1.

      However, the reviewer’s suggestion is also possible, and calcium could be taken up by another store, like the mitochondrion. In this regard, we did observe a large mitochondrial calcium increase (in parasites expressing SOD2-GCaMP6) after adding GPN (Fig 5I), suggesting that the mitochondrion may take up calcium from the organelle targeted by GPN. However, the calcium affinity of the mitochondrion is very low, so very high calcium concentrations are needed to activate uptake; these concentrations are most likely achieved at the microdomains formed between the mitochondrion and other organelles.

      (5) Figure 2B-C: Further explanation of why these particular values were chosen for the follow-up experiments would be helpful for the reader.

      We tested a wide range of MgATP and free calcium concentrations to measure ER Ca<sup>2+</sup> uptake catalyzed by TgSERCA. The concentrations shown fall within the linear range.

      We followed the free calcium concentrations used in studies of mammalian SERCA (https://doi.org/10.1016/j.ceca.2020.102188 ). In this protocol, 220 nM free calcium was used, which is close to cytosolic Ca<sup>2+</sup> levels. TgSERCA can take up calcium efficiently at this concentration, as shown in Fig 2. We used less MgATP than the mammalian cell protocols, since we did not observe a significant increase in SERCA activity beyond 0.5 mM MgATP. We added one more sentence explaining this in the results, lines 204-207.
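      The reasoning can be illustrated with a Hill-type dependence of uptake rate on free Ca<sup>2+</sup> (the vmax, k_half, and hill values below are assumed for illustration, not fitted to our data): uptake is faster at 880 nM, but still readily detectable at 220 nM.

```python
def serca_uptake_rate(ca_nM, vmax=1.0, k_half=400.0, hill=2.0):
    """Illustrative Hill-type dependence of SERCA uptake rate on free
    Ca2+ concentration; all parameters are assumed, not measured."""
    return vmax * ca_nM**hill / (k_half**hill + ca_nM**hill)

r220 = serca_uptake_rate(220.0)   # near physiological cytosolic Ca2+
r880 = serca_uptake_rate(880.0)   # elevated Ca2+
assert r880 > r220                # uptake is faster at higher Ca2+
assert r220 > 0.2 * r880          # but still readily detectable at 220 nM
```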

      (6) Figure 3E: Revise the error bar? (and note that colours do not match the graph legend).

      The colors do match; the difficulty in visualizing them arises because vacuoles containing a single parasite are virtually absent in the control group without ATc treatment.

      (7) Figure 3H: 'Interestingly, when testing egress after the addition of saponin in the presence of extracellular Ca2+, we observed that the tachyzoites egressed sooner (Figure 3H, saponin egress).' This is the only graph showing egress timing, and thus it is not clear what is the comparison. The egressed here is sooner compared to what condition? Egress in the absence of Ca2+? This requires clarification and might require the control data to be added.

      In the saponin experiment we compare the time to egress of the mutant grown with or without ATc; the measurement is of the time to egress after adding saponin, in the presence of extracellular calcium. The protocol was previously used to measure time to egress (PMID: 40043955, PMID: 38382669, PMID: 26374900). See also the response to question 9 of Reviewer 1.

      (8) Figure 4C: There is a small peak appearing right after TG addition; this should be discussed and explained.

      This trace was generated in a different fluorometer, the F-4000. The peak was an artifact due to a jump in the signal when adding TG. Multiple repeats of the same experiment in the newer F-7000 did not show the peak. We now state in the Materials and Methods that the F-4000 fluorometer was used for some experiments; we apologize for the omission. Lines 609-610.

      (9) Figure 5A: An important control that is missing is co-localisation with a mitochondrial marker.

      The expression of SOD2-GCaMP6 has been characterized previously (PMID: 31758454).

      (10) Figure 5H: This line was made for this study; however, genetic verification of the line is missing.

      In response to this concern we now include a new Figure S5 showing the fluorescence of GCaMP6 in the mitochondrion of the iΔTgSERCA mutant (Fig. S5A), including several parasites. In addition, we show fluorescence measurements after addition of calcium: the cells are unresponsive, indicating that the indicator is not in the cytosol. Lines 650-651 and 344-348.

      (11) Figure 6D: since the membranes are hard to see, it is not clear whether the arrows show structures that are in line with the definition of membrane contact sites. The authors should provide an in-depth analysis of the length of the interaction between the membranes where the distance is less than 30 nm, and discuss how many structures corresponding to the definition were analysed.

      All the requested details are now included in the legend to Figure S3.

      Minor corrections to the text and figures

      (1) Unify statistical labelling throughout the paper replacing *** with p values.

      Corrected. We replaced the *** with the actual p values in some figures. For Figure 2 and Fig S1, we still use *** due to space limitations.

      (2) Unify ATC vs ATc throughout the paper.

      Corrected

      (3) Unify capitalization of line name (iΔTgserca/i ΔTgSERCA) throughout the paper.

      Corrected

      (4) Unify capitalization of p value (p/P) throughout the paper.

      Corrected in figures

      (5) Unify Fig X vs Fig. X throughout the text.

      Corrected

      (6) Add values of scale bars to legends (eg Figure S2).

      Corrected

      (7) What is the time point for the data in Figures 4E-H, 5H, and S3? 24hrs? include in the legend.

      Added 24 h to the legends. Fig S3 is now S4.

      (8) Figure 3F: The second graph is NS thus perhaps no need for the p-value?

      Corrected

      (9) Figure 3G: Worth considering swapping the two around: first attachment and then invasion?

      Corrected. Invasion and attachment bars were swapped.

      (10) Figure 4A/B: Wrong colour match for Figure 4B.

      Corrected

      (11) Figure 4F: In the main text, the authors reference to Figure 1F, correct to 4F.

      Corrected

      (12) Figure 4H: In the main text, authors reference to Figure 1H, correct to 4H.

      Corrected

    1. We might have studied Africa for a few weeks in school or glanced occasionally at newspaper headlines about genocide, AIDS, Ebola, or civil war, but rarely have we actually thought seriously about Africa.

      This part of the text reflects a media stereotype: people form an image of Africa in which the only things that happen there are bad, reinforced by newspaper headlines about AIDS, genocide, Ebola, or civil war.

    1. (GWAS)

      A GWAS itself just tells us whether a variant is statistically significantly associated with a phenotype; it cannot tell us why the variant matters, only that it is significant for that trait. It can be performed with SNP chips, WGS, or WES.

    1. eLife Assessment

      Whole-brain imaging of neuronal activity in freely behaving animals holds great promise for neuroscience, but numerous technical challenges limit its use. In this important study, the authors describe a new set of deep learning-based tools to track and identify the activity of head neurons in freely moving nematodes (C. elegans) and jellyfish (Clytia hemisphaerica). While the tools convincingly enable high tracking speed and accuracy in the settings in which the authors have evaluated them, the claim that these tools should be easily generalizable to a wide variety of datasets is incompletely supported.

    2. Reviewer #1 (Public review):

      In this important study, the authors develop a suite of machine vision tools to identify and align fluorescent neuronal recording images in space and time according to neuron identity and position. The authors provide compelling evidence for the speed and utility of these tools. While such tools have been developed in the past (including by the authors), the key advancement here is the speed and broad utility of these new tools. While prior approaches based on steepest descent worked, they required hundreds of hours of computational time, while the new approaches outlined here are >600-fold faster. The machine vision tools here should be immediately useful to readers specifically interested in whole-brain C. elegans data, but also for more general readers who may be interested in using BrainAlignNet for tracking fluorescent neuronal recordings from other systems.

      I really enjoyed reading this paper. The authors had several ground truth examples to quantify the accuracy of their algorithms and identified several small caveats users should consider when using these tools. These tools were primarily developed for C. elegans, an animal with stereotyped development, but whose neurons can be variably located due to internal motion of the body. The authors provide several examples of how BrainAlignNet reliably tracked these neurons over space and time. Neuron identity is also important to track, and the authors showed how AutoCellLabeler can reliably identify neurons based on their fluorescence in the NeuroPAL background. A challenge with NeuroPAL, though, is the high expression of several fluorophores, which compromises behavioral fidelity. The authors provide some possible avenues where this problem can be addressed by expressing fewer fluorophores. While using all four channels provided the best performance, only using the tagRFP and CyOFP channels was sufficient for performance that was close to full performance using all 4 NeuroPAL channels. This result indicates that the development of future lines with less fluorophore expression could be sufficient for reliable neuronal identification, which would decrease the genetic load on the animal, but also open other fluorescent channels that could be used for tracking other fluorescent tools/markers. Even though these tools were developed for C. elegans specifically, they showed BrainAlignNet can be applied to other organisms as well (in their case, the cnidarian C. hemisphaerica), which broadens the utility of their tools.

      Strengths:

      (1) The authors have a wealth of ground-truth training data to compare their algorithms against, and provide a variety of metrics to assess how well their new tools perform against hand annotation and/or prior algorithms.

      (2) For BrainAlignNet, the authors show how this tool can be applied to other organisms besides C. elegans.

      (3) The tools are publicly available on GitHub, which includes useful README files and installation guidance.

      Weaknesses:

      (1) Most of the utility of these algorithms is for C. elegans specifically. Testing their algorithms (specifically BrainAlignNet) on more challenging problems, such as whole-brain zebrafish, would have been interesting. This is a very, very minor weakness, though.

      (2) The tools are benchmarked against their own prior pipeline, but not against other algorithms written for the same purpose.

      (3) Considerable pre-processing was done before implementation. Expanding upon this would improve accessibility of these tools to a wider audience.

    3. Reviewer #2 (Public review):

      Summary:

      The paper introduced the pipeline to analyze brain imaging of freely moving animals: registering deforming tissues and maintaining consistent cell identities over time. The pipeline consists of three neural networks that are built upon existing models: BrainAlignNet for non-rigid registration, AutoCellLabeler for supervised annotation of over 100 neuronal types, and CellDiscoveryNet for unsupervised discovery of cell identities. The ambition of the work is to enable high-throughput and largely automated pipelines for neuron tracking and labeling in deforming nervous systems.

      Strengths:

      (1) The paper tackles a timely and difficult problem, offering an end-to-end system rather than isolated modules.

      (2) The authors report high performance within their dataset, including single-pixel registration accuracy, nearly complete neuron linking over time, and annotation accuracy that exceeds individual human labelers.

      (3) Demonstrations across two organisms suggest the methods could be transferable, and the integration of supervised and unsupervised modules is of practical utility.

      Weaknesses:

      (1) Lack of solid evaluation. Despite strong results on their own data, the work is not benchmarked against existing methods on community datasets, making it hard to evaluate relative performance or generality.

      (2) Lack of novelty. All three models do not incorporate state-of-the-art advances from the respective fields. BrainAlignNet does not learn from the latest optical flow literature, relying instead on relatively conventional architectures. AutoCellLabeler does not utilize the advanced medNeXt3D architectures for supervised semantic segmentation. CellDiscoveryNet is presented as unsupervised discovery but relies on standard clustering approaches, with limited evaluation on only a small test set.

      (3) Lack of robustness. BrainAlignNet requires dataset-specific training and pre-alignment strategies, limiting its plug-and-play use. AutoCellLabeler depends heavily on raw intensity patterns of neurons, making it brittle to pose changes. By contrast, current state-of-the-art methods incorporate spatial deformation atlases or relative spatial relationships, which provide robustness across poses and imaging conditions. More broadly, the ANTSUN 2.0 system depends on numerous manually tuned weights and thresholds, which reduces reproducibility and generalizability beyond curated conditions.

      Evaluation:

      To make the evaluation more solid, it would be great for the authors to (1) apply the new method on existing datasets and (2) apply baseline methods on their own datasets. Otherwise, without comparison, it is unclear if the proposed method is better or not. The following papers have public challenging tracking data: https://elifesciences.org/articles/66410, https://elifesciences.org/articles/59187, https://www.nature.com/articles/s41592-023-02096-3.

      Methodology:

      (1) The model innovations appear incrementally novel relative to existing work. The authors should articulate what is fundamentally different (architectural choices, training objectives, inductive biases) and why those differences matter empirically. Ablations isolating each design choice would help.

      (2) The pipeline currently depends on numerous manually set hyperparameters and dataset-specific preprocessing. Please provide principled guidelines (e.g., ranges, default settings, heuristics) and a robustness analysis (sweeps, sensitivity curves) to show how performance varies with these choices across datasets; wherever possible, learn weights from data or replace fixed thresholds with data-driven criteria.

      Appraisal:

      The authors partially achieve their aims. Within the scope of their dataset, the pipeline demonstrates impressive performance and clear practical value. However, the absence of comparisons with state-of-the-art algorithms such as ZephIR, fDNC, or WormID, combined with small-scale evaluation (e.g., ten test volumes), makes the strength of evidence incomplete. The results support the conclusion that the approach is useful for their lab's workflow, but they do not establish broader robustness or superiority over existing methods.

      Impact:

      Even though the authors have released code, the pipeline requires heavy pre- and post-processing with numerous manually tuned hyperparameters, which limits its practical applicability to new datasets. Indeed, even within the paper, BrainAlignNet had to be adapted with additional preprocessing to handle the jellyfish data. The broader impact of the work will depend on systematic benchmarking against community datasets and comparison with established methods. As such, readers should view the results as a promising proof of concept rather than a definitive standard for imaging in deformable nervous systems.

    4. Reviewer #3 (Public review):

      Context:

      Tracking cell trajectories in deformable organs, such as the head neurons of freely moving C. elegans, is a challenging task due to rapid, non-rigid cellular motion. Similarly, identifying neuron types in the worm brain is difficult because of high inter-individual variability in cell positions.

      Summary:

      In this study, the authors developed a deep learning-based approach for cell tracking and identification in deformable neuronal images. Several different CNN models were trained to: (1) register image pairs without severe deformation, and then track cells across continuous image sequences using multiple registration results combined with clustering strategies; (2) predict neuron IDs from multicolor-labeled images; and (3) perform clustering across multiple multicolor images to automatically generate neuron IDs.

      Strengths:

      Directly using raw images for registration and identification simplifies the analysis pipeline, but it is also a challenging task since CNN architectures often struggle to capture spatial relationships between distant cells. Surprisingly, the authors report very high accuracy across all tasks. For example, the tracking of head neurons in freely moving worms reportedly reached 99.6% accuracy, neuron identification achieved 98%, and automatic classification achieved 93% compared to human annotations.

      Weaknesses:

      (1) The deep networks proposed in this study for registration and neuron identification require dataset-specific training, due to variations in imaging conditions across different laboratories. This, in turn, demands a large amount of manually or semi-manually annotated training data, including cell centroid correspondences and cell identity labels, which reduces the overall practicality and scalability of the method.

      (2) The cell tracking accuracy was not rigorously validated, but rather estimated using a biased and coarse approach. Specifically, the accuracy was assessed based on the stability of GFP signals in the eat-4-labeled channel. A tracking error was assumed to occur when the GFP signal switched between eat-4-negative and eat-4-positive at a given time point. However, this estimation is imprecise and only captures a small subset of all potential errors. Although the authors introduced a correction factor to approximate the true error rate, the validity of this correction relies on the assumption that eat-4 neurons are uniformly distributed across the brain - a condition that is unlikely to hold.

      (3) Figure S1F demonstrates that the registration network, BrainAlignNet, alone is insufficient to accurately align arbitrary pairs of C. elegans head images. The high tracking accuracy reported is largely due to the use of a carefully designed registration sequence, matching only images with similar postures, and an effective clustering algorithm. Although the authors address this point in the Discussion section, the abstract may give the misleading impression that the network itself is solely responsible for the observed accuracy.

      (4) The reported accuracy for neuron identification and automatic classification may be misleading, as it was assessed only on a subset of neurons labeled as "high-confidence" by human annotators. Although the authors did not disclose the exact proportion, various descriptions (such as Figure 4f) imply that this subset comprises approximately 60% of all neurons. While excluding uncertain labels is justifiable, the authors highlight the high accuracy achieved on this subset without clearly clarifying that the reported performance pertains only to neurons that are relatively easy to identify. Furthermore, they do not report what fraction of the total neuron population can be accurately identified using their methods, an omission of critical importance for prospective users.

    5. Author response:

      Reviewer #1 (Public review):

      In this important study, the authors develop a suite of machine vision tools to identify and align fluorescent neuronal recording images in space and time according to neuron identity and position. The authors provide compelling evidence for the speed and utility of these tools. While such tools have been developed in the past (including by the authors), the key advancement here is the speed and broad utility of these new tools. While prior approaches based on steepest descent worked, they required hundreds of hours of computational time, while the new approaches outlined here are >600-fold faster. The machine vision tools here should be immediately useful to readers specifically interested in whole-brain C. elegans data, but also for more general readers who may be interested in using BrainAlignNet for tracking fluorescent neuronal recordings from other systems.

      I really enjoyed reading this paper. The authors had several ground truth examples to quantify the accuracy of their algorithms and identified several small caveats users should consider when using these tools. These tools were primarily developed for C. elegans, an animal with stereotyped development, but whose neurons can be variably located due to internal motion of the body. The authors provide several examples of how BrainAlignNet reliably tracked these neurons over space and time. Neuron identity is also important to track, and the authors showed how AutoCellLabeler can reliably identify neurons based on their fluorescence in the NeuroPAL background. A challenge with NeuroPAL, though, is the high expression of several fluorophores, which compromises behavioral fidelity. The authors provide some possible avenues where this problem can be addressed by expressing fewer fluorophores. While using all four channels provided the best performance, only using the tagRFP and CyOFP channels was sufficient for performance that was close to full performance using all 4 NeuroPAL channels. This result indicates that the development of future lines with less fluorophore expression could be sufficient for reliable neuronal identification, which would decrease the genetic load on the animal, but also open other fluorescent channels that could be used for tracking other fluorescent tools/markers. Even though these tools were developed for C. elegans specifically, they showed BrainAlignNet can be applied to other organisms as well (in their case, the cnidarian C. hemisphaerica), which broadens the utility of their tools.

      Strengths:

      (1) The authors have a wealth of ground-truth training data to compare their algorithms against, and provide a variety of metrics to assess how well their new tools perform against hand annotation and/or prior algorithms.

      (2) For BrainAlignNet, the authors show how this tool can be applied to other organisms besides C. elegans.

      (3) The tools are publicly available on GitHub, which includes useful README files and installation guidance.

      We thank the reviewer for noting these strengths of our study.

      Weaknesses:

      (1) Most of the utility of these algorithms is for C. elegans specifically. Testing their algorithms (specifically BrainAlignNet) on more challenging problems, such as whole-brain zebrafish, would have been interesting. This is a very, very minor weakness, though.

      We appreciate the reviewer’s point that expanding to additional animal models would be valuable. In this study, we have so far tested our approaches on C. elegans and jellyfish. Given that this is considered a ‘very, very minor weakness’ and that it does not directly affect the results or analyses in the paper, we think this is better addressed in future work.

      (2) The tools are benchmarked against their own prior pipeline, but not against other algorithms written for the same purpose.

      We agree that it would be valuable to benchmark other labs’ software pipelines on our datasets. We note that most papers in this area, which describe those pipelines, provide the same performance metrics that we do (accuracy of neuron identification, tracking accuracy, etc), so a crude, first-order comparison can be obtained by comparing the numbers in the papers. But, we agree that a rigorous head-to-head comparison would require applying these different pipelines to a common dataset. We considered performing these analyses, but we were concerned that using other labs’ software ‘off the shelf’ on our data might not represent those pipelines in their best light when compared to our pipeline that was developed with our data in mind. Data from different microscopy platforms can be surprisingly different and we wouldn’t want to perform an analysis that had this bias. Therefore, we feel that this comparison would be best pursued by all of these labs collaboratively (so that they can each provide input on how to run their software optimally). Indeed, this is an important area for future study. In this spirit, we have been sharing our eat-4::GFP datasets (that permit quantification of tracking accuracy) with other labs looking for additional ways to benchmark their tracking software.
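      As a sketch of how such datasets permit this quantification: a tracked neuron's eat-4::GFP status should not change over time, so flips between GFP-positive and GFP-negative suggest mis-tracks; because only swaps that change GFP status are detectable, the raw flip rate is scaled by a detectability correction. The function and the frac_eat4 value below are illustrative, not our actual pipeline.

```python
import numpy as np

def estimated_error_rate(gfp_positive, frac_eat4=0.5):
    """Rough tracking-error estimate from marker-channel consistency.
    gfp_positive: boolean array of a tracked neuron's GFP status over time.
    frac_eat4: assumed fraction of neurons that are eat-4 positive, used
    to correct for swaps that do not change GFP status (hypothetical)."""
    flips = int(np.sum(gfp_positive[1:] != gfp_positive[:-1]))
    raw_rate = flips / (len(gfp_positive) - 1)
    detectable = 2 * frac_eat4 * (1 - frac_eat4)  # P(a swap flips status)
    return raw_rate / detectable

trace = np.array([True] * 99 + [False])  # one flip across 99 transitions
assert abs(estimated_error_rate(trace) - (1 / 99) / 0.5) < 1e-12
```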

      We also note that there are not really any pipelines to directly compare against CellDiscoveryNet, as we are not aware of any other fully unsupervised approach for neuron identification in C. elegans.

      (3) Considerable pre-processing was done before implementation. Expanding upon this would improve accessibility of these tools to a wider audience.

      Indeed, some pre-processing was performed on images before registration and neuron identification -- understanding these nuances can be important. The pre-processing steps are described in the Results section and detailed in the Methods. They are also all available in our open-source software. For BrainAlignNet, the key steps were: (1) selecting image registration problems, (2) cropping, and (3) Euler alignment. Steps (1) and (3) were critically important and are extensively discussed in the Results and Discussion sections of our study (lines 142-144, 218-234, 318-323, 704-712). Step (2) is standard in image processing. For AutoCellLabeler and CellDiscoveryNet, the pre-processing was primarily to align the 4 NeuroPAL color channels to each other (i.e. make sure the blue/red/orange/etc channels for an animal are perfectly aligned). This is also just a standard image processing step to ensure channel alignment. Thus, the more “custom” pre-processing steps were extensively discussed in the study and the more “common” steps are still described in the Methods. The implementation of all steps is available in our open-source software.
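To make the Euler alignment step concrete, here is a minimal, hypothetical sketch (not our actual code): a brute-force search over rotation angles, scored by FFT cross-correlation, whose peak location also recovers the translation. The angle grid and 2D setting are illustrative assumptions.

```python
# Hypothetical sketch of Euler (rigid) pre-alignment: grid-search over
# rotations, with cross-correlation recovering the best translation.
import numpy as np
from scipy import ndimage

def euler_align(fixed, moving, angles=np.arange(-60, 61, 5)):
    """Return (angle, shift) that best rigidly aligns `moving` to `fixed` (2D)."""
    best = (None, None, -np.inf)
    F = np.fft.fft2(fixed)
    for angle in angles:
        rot = ndimage.rotate(moving, angle, reshape=False, order=1)
        # Circular cross-correlation via FFT; the peak gives the translation.
        cross = np.fft.ifft2(F * np.conj(np.fft.fft2(rot)))
        corr = np.abs(cross)
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        if corr[peak] > best[2]:
            # Map peak indices to signed shifts (wrap-around convention).
            shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
            best = (angle, shift, corr[peak])
    return best[0], best[1]
```

Applying `ndimage.shift(rot, shift)` to the best-angle rotation of `moving` then approximately reproduces `fixed`, giving the non-rigid network a well-initialized problem.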

      Reviewer #2 (Public review):

      Summary:

The paper introduces a pipeline to analyze brain imaging of freely moving animals: registering deforming tissues and maintaining consistent cell identities over time. The pipeline consists of three neural networks built upon existing models: BrainAlignNet for non-rigid registration, AutoCellLabeler for supervised annotation of over 100 neuronal types, and CellDiscoveryNet for unsupervised discovery of cell identities. The ambition of the work is to enable high-throughput, largely automated pipelines for neuron tracking and labeling in deforming nervous systems.

      Strengths:

      (1) The paper tackles a timely and difficult problem, offering an end-to-end system rather than isolated modules.

      (2) The authors report high performance within their dataset, including single-pixel registration accuracy, nearly complete neuron linking over time, and annotation accuracy that exceeds individual human labelers.

      (3) Demonstrations across two organisms suggest the methods could be transferable, and the integration of supervised and unsupervised modules is of practical utility.

      We thank the reviewer for noting these strengths of our study.

      Weaknesses:

      (1) Lack of solid evaluation. Despite strong results on their own data, the work is not benchmarked against existing methods on community datasets, making it hard to evaluate relative performance or generality.

      We agree that it would be valuable to benchmark many labs’ software pipelines on some common datasets, ideally from several different research labs. We note that most papers in this area, which describe the other pipelines that have been developed, provide the same performance metrics that we do (accuracy of neuron identification, tracking accuracy, etc), so a crude, first-order comparison can be obtained by comparing the numbers in the papers. But, we agree that a rigorous head-to-head comparison would require applying these different pipelines to a common dataset. We considered performing these analyses, but we were concerned that using other labs’ software ‘off the shelf’ and comparing the results to our pipeline (where we have extensive expertise) might bias the performance metrics in favor of our software. Therefore, we feel that this comparison would be best pursued by all of these labs collaboratively (so that they can each provide input on how to run their software optimally). Indeed, this is an important area for future study. In this spirit, we have been sharing our eat-4::GFP datasets (that permit quantification of tracking accuracy) with other labs looking for additional ways to benchmark their tracking software.

      We also note that there are not really any pipelines to directly compare against CellDiscoveryNet, as we are not aware of any other fully unsupervised approach for neuron identification in C. elegans.

      (2) Lack of novelty. All three models do not incorporate state-of-the-art advances from the respective fields. BrainAlignNet does not learn from the latest optical flow literature, relying instead on relatively conventional architectures. AutoCellLabeler does not utilize the advanced medNeXt3D architectures for supervised semantic segmentation. CellDiscoveryNet is presented as unsupervised discovery but relies on standard clustering approaches, with limited evaluation on only a small test set.

We appreciate that the machine learning field moves fast. Our goal was not to invent entirely novel machine learning tools, but rather to apply and optimize tools for a set of challenging, unsolved biological problems. We began with the somewhat simpler architectures described in our study and were largely satisfied with their performance. It is conceivable that newer approaches could lead to even greater accuracy, flexibility, and/or speed. But, oftentimes, simple or classical solutions can adequately resolve specific challenges in biological image processing.

      Regarding CellDiscoveryNet, our claim of unsupervised training is precise: CellDiscoveryNet is trained end-to-end only on raw images, with no human annotations, pseudo-labels, external classifiers, or metadata used for training, model selection, or early stopping. The loss is defined entirely from the input data (no label signal). By standard usage in machine learning, this constitutes unsupervised (often termed “self-supervised”) representation learning. Downstream clustering is likewise unsupervised, consuming only image pairs registered by CellDiscoveryNet and neuron segmentations produced by our previously-trained SegmentationNet (which provides no label information).
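As an illustration of what a loss "defined entirely from the input data" means, a registration network can be scored by the normalized cross-correlation between the fixed image and the warped moving image; no label signal enters the computation. This is a generic sketch, not our implementation:

```python
# Minimal sketch of a label-free registration loss: negative normalized
# cross-correlation between two images. No annotations are involved.
import numpy as np

def ncc_loss(fixed, warped, eps=1e-8):
    """Negative Pearson correlation between two images (lower is better)."""
    f = fixed - fixed.mean()
    w = warped - warped.mean()
    return -np.sum(f * w) / (np.sqrt(np.sum(f**2) * np.sum(w**2)) + eps)
```

A perfect alignment drives this loss to -1; training minimizes it using only the raw image pair.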

      (3) Lack of robustness. BrainAlignNet requires dataset-specific training and pre-alignment strategies, limiting its plug-and-play use. AutoCellLabeler depends heavily on raw intensity patterns of neurons, making it brittle to pose changes. By contrast, current state-of-the-art methods incorporate spatial deformation atlases or relative spatial relationships, which provide robustness across poses and imaging conditions. More broadly, the ANTSUN 2.0 system depends on numerous manually tuned weights and thresholds, which reduces reproducibility and generalizability beyond curated conditions.

Regarding BrainAlignNet: we agree that we trained on each species’ own data (worm, jellyfish), and we would suggest that other labs working on new organisms do the same based on our current state of knowledge. It would be fantastic if there were an alignment approach that generalized to all possible cases of non-rigid registration in all animals – an important area for future study. We also agree that pre-alignment was critical in worms and jellyfish, which we discuss extensively in our study (lines 142-144, 318-321, 704-712).

      Regarding AutoCellLabeler: the animals were not recorded in any standardized pose and were not aligned to each other beforehand – they were basically in a haphazard mix of poses and we used image augmentation to allow the network to generalize to other poses, as described in our study. It is still possible that AutoCellLabeler is somehow brittle to pose changes (e.g. perhaps extremely curved worms) – while we did not detect this in our analyses, we did not systematically evaluate performance across all possible poses. However, we do note that this network was able to label images taken from freely-moving worms, which by definition exhibit many poses (Figure 5D, lines 500-525); aggregating the network’s performance across freely-moving data points allowed it to nearly match its performance on high-SNR immobilized data. This suggests a degree of robustness of the AutoCellLabeler network to pose changes.
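As a hedged sketch of what such pose augmentation can look like (the function and parameters here are hypothetical, not our exact implementation), a random in-plane rotation is applied identically to the image volume and its label volume:

```python
# Illustrative pose augmentation: the same random rotation is applied to
# the image and its labels, so the network sees many poses during training.
import numpy as np
from scipy import ndimage

def augment_pose(image, labels, rng, max_angle=180.0):
    angle = rng.uniform(-max_angle, max_angle)
    img_aug = ndimage.rotate(image, angle, axes=(0, 1), reshape=False, order=1)
    # Nearest-neighbour interpolation keeps label values discrete.
    lab_aug = ndimage.rotate(labels, angle, axes=(0, 1), reshape=False, order=0)
    return img_aug, lab_aug
```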

      Regarding ANTSUN 2.0: we agree that there are some hyperparameters (described in our study) that affect ANTSUN performance. We agree that it would be worthwhile to fully automate setting these in future iterations of the software.

      Evaluation:

      To make the evaluation more solid, it would be great for the authors to (1) apply the new method on existing datasets and (2) apply baseline methods on their own datasets. Otherwise, without comparison, it is unclear if the proposed method is better or not. The following papers have public challenging tracking data: https://elifesciences.org/articles/66410, https://elifesciences.org/articles/59187, https://www.nature.com/articles/s41592-023-02096-3.

      Please see our response to your point (1) under Weaknesses above.

      Methodology:

      (1) The model innovations appear incrementally novel relative to existing work. The authors should articulate what is fundamentally different (architectural choices, training objectives, inductive biases) and why those differences matter empirically. Ablations isolating each design choice would help.

      There are other efforts in the literature to solve the neuron tracking and neuron identification problems in C. elegans (please see paragraphs 4 and 5 of our Introduction, which are devoted to describing these). However, they are quite different in the approaches that they use, compared to our study. For example, for neuron tracking they use t->t+1 methods, or model neurons as point clouds, etc (a variety of approaches have been tried). For neuron identification, they work on extracted features from images, or use statistical approaches rather than deep neural networks, etc (a variety of approaches have been tried). Our assessment is that each of these diverse approaches has strengths and drawbacks; we agree that a meta-analysis of the design choices used across studies could be valuable.

      We also note that there are not really any pipelines to directly compare against CellDiscoveryNet, as we are not aware of any other fully unsupervised approach for neuron identification in C. elegans.

      (2) The pipeline currently depends on numerous manually set hyperparameters and dataset-specific preprocessing. Please provide principled guidelines (e.g., ranges, default settings, heuristics) and a robustness analysis (sweeps, sensitivity curves) to show how performance varies with these choices across datasets; wherever possible, learn weights from data or replace fixed thresholds with data-driven criteria.

      We agree that there are some ANTSUN 2.0 hyperparameters (described in our Methods section) that could affect the quality of neuron tracking. It would be worthwhile to fully automate setting these in future iterations of the software, ensuring that the hyperparameter settings are robust to variation in data/experiments.

      Appraisal:

      The authors partially achieve their aims. Within the scope of their dataset, the pipeline demonstrates impressive performance and clear practical value. However, the absence of comparisons with state-of-the-art algorithms such as ZephIR, fDNC, or WormID, combined with small-scale evaluation (e.g., ten test volumes), makes the strength of evidence incomplete. The results support the conclusion that the approach is useful for their lab's workflow, but they do not establish broader robustness or superiority over existing methods.

      We wish to remind the reviewer that we developed BrainAlignNet for use in worms and jellyfish. These two animals have different distributions of neurons and radically different anatomy and movement patterns. Data from the two organisms was collected in different labs (Flavell lab, Weissbourd lab) on different types of microscopes (spinning disk, epifluorescence). We believe that this is a good initial demonstration that the approach has robustness across different settings.

      Regarding comparisons to other labs’ C. elegans data processing pipelines, we agree that it will be extremely valuable to compare performance on common datasets, ideally collected in multiple different research labs. But we believe this should be performed collaboratively so that all software can be utilized in their best light with input from each lab, as described above. We agree that such a comparison would be very valuable.

      Impact:

      Even though the authors have released code, the pipeline requires heavy pre- and post-processing with numerous manually tuned hyperparameters, which limits its practical applicability to new datasets. Indeed, even within the paper, BrainAlignNet had to be adapted with additional preprocessing to handle the jellyfish data. The broader impact of the work will depend on systematic benchmarking against community datasets and comparison with established methods. As such, readers should view the results as a promising proof of concept rather than a definitive standard for imaging in deformable nervous systems.

      Regarding worms vs jellyfish pre-processing: we actually had the exact opposite reaction to that of the reviewer. We were surprised at how similar the pre-processing was for these two very different organisms. In both cases, it was essential to (1) select appropriate registration problems to be solved; and (2) perform initialization with Euler alignment. Provided that these two challenges were solved, BrainAlignNet mostly took care of the rest. This suggests a clear path for researchers who wish to use this approach in another animal. Nevertheless, we also agree with the reviewer’s caution that a totally different use case could require some re-thinking or re-strategizing. For example, the strategy of how to select good registration problems could depend on the form of the animal’s movement.

      Reviewer #3 (Public review):

      Context:

      Tracking cell trajectories in deformable organs, such as the head neurons of freely moving C. elegans, is a challenging task due to rapid, non-rigid cellular motion. Similarly, identifying neuron types in the worm brain is difficult because of high inter-individual variability in cell positions.

      Summary:

      In this study, the authors developed a deep learning-based approach for cell tracking and identification in deformable neuronal images. Several different CNN models were trained to: (1) register image pairs without severe deformation, and then track cells across continuous image sequences using multiple registration results combined with clustering strategies; (2) predict neuron IDs from multicolor-labeled images; and (3) perform clustering across multiple multicolor images to automatically generate neuron IDs.

      Strengths:

      Directly using raw images for registration and identification simplifies the analysis pipeline, but it is also a challenging task since CNN architectures often struggle to capture spatial relationships between distant cells. Surprisingly, the authors report very high accuracy across all tasks. For example, the tracking of head neurons in freely moving worms reportedly reached 99.6% accuracy, neuron identification achieved 98%, and automatic classification achieved 93% compared to human annotations.

      We thank the reviewer for noting these strengths of our study.

      Weaknesses:

      (1) The deep networks proposed in this study for registration and neuron identification require dataset-specific training, due to variations in imaging conditions across different laboratories. This, in turn, demands a large amount of manually or semi-manually annotated training data, including cell centroid correspondences and cell identity labels, which reduces the overall practicality and scalability of the method.

      We performed dataset-specific training for image registration and neuron identification, and we would encourage new users to do the same based on our current state of knowledge. This highlights how standardization of whole-brain imaging data across labs is an important issue for our field to address and that, without it, variations in imaging conditions could impact software utility. We refer the reviewer to an excellent study by Sprague et al. (2025) on this topic, which is cited in our study.

      However, at the same time, we wish to note that it was actually reasonably straightforward to take the BrainAlignNet approach that we initially developed in C. elegans and apply it to jellyfish. Some of the key lessons that we learned in C. elegans generalized: in both cases, it was critical to select the right registration problems to solve and to preprocess with Euler registration for good initialization. Provided that those problems were solved, BrainAlignNet could be applied to obtain high-quality registration and trace extraction. Thus, our study provides clear suggestions on how to use these tools across multiple contexts.

      (2) The cell tracking accuracy was not rigorously validated, but rather estimated using a biased and coarse approach. Specifically, the accuracy was assessed based on the stability of GFP signals in the eat-4-labeled channel. A tracking error was assumed to occur when the GFP signal switched between eat-4-negative and eat-4-positive at a given time point. However, this estimation is imprecise and only captures a small subset of all potential errors. Although the authors introduced a correction factor to approximate the true error rate, the validity of this correction relies on the assumption that eat-4 neurons are uniformly distributed across the brain - a condition that is unlikely to hold.

      We respectfully disagree with this critique. We considered the alternative suggested by the reviewer (in their private comments to the authors) of comparing against a manually annotated dataset. But this annotation would require manually linking ~150 neurons across ~1600 timepoints, which would require humans to manually link neurons across timepoints >200,000 times for a single dataset. These datasets consist of densely packed neurons rapidly deforming over time in all 3 dimensions. Moreover, a single error in linking would propagate across timepoints, so the error tolerance of such annotation would be extremely low. Any such manually labeled dataset would be fraught with errors and should not be trusted. Instead, our approach relies on a simple, accurate assumption: GFP expression in a neuron should be roughly constant over a 16min recording (after bleach correction) and the levels will be different in different neurons when it is sparsely expressed. Because all image alignment is done in the red channel, the pipeline never “peeks” at the GFP until it is finished with neuron alignment and tracking. The eat-4 promoter was chosen for GFP expression because (a) the nuclei labeled by it are scattered across the neuropil in a roughly salt-and-pepper fashion – a mixture of eat-4-positive and eat-4-negative neurons are found throughout the head; and (b) it is in roughly 40% of the neurons, giving very good overall coverage. Our view is that this approach of labeling subsets of neurons with GFP should become the standard in the field for assessing tracking accuracy – it has a simple, accurate premise; is not susceptible to human labeling error; is straightforward to implement; and, since it does not require manual labeling, is easy to scale to multiple datasets. We do note that it could be further strengthened by using multiple strains each with different ‘salt-and-pepper’ GFP expression patterns.
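The logic of this estimate can be sketched in a few lines. The correction shown here assumes, for illustration, that a tracking error swaps two uniformly chosen neurons, so it is detectable only when a GFP-positive neuron is swapped with a GFP-negative one (probability 2p(1-p) for positive fraction p); the correction factor in our paper may be derived differently, and the function below is a hypothetical sketch:

```python
# Hedged sketch of the GFP-switch tracking-accuracy estimate.
import numpy as np

def estimated_error_rate(gfp_positive, p_positive=0.4):
    """gfp_positive: (neurons, timepoints) boolean matrix of GFP calls."""
    # Observed rate: fraction of time links where a neuron's GFP call flips.
    switches = np.sum(gfp_positive[:, 1:] != gfp_positive[:, :-1])
    links = gfp_positive.shape[0] * (gfp_positive.shape[1] - 1)
    observed = switches / links
    # Only positive<->negative swaps are visible: probability 2 * p * (1 - p).
    detectable = 2 * p_positive * (1 - p_positive)
    return observed / detectable
```

Because the GFP channel is never used for alignment, any systematic flip in these calls is attributable to tracking errors rather than to the pipeline "peeking" at the readout.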

      (3) Figure S1F demonstrates that the registration network, BrainAlignNet, alone is insufficient to accurately align arbitrary pairs of C. elegans head images. The high tracking accuracy reported is largely due to the use of a carefully designed registration sequence, matching only images with similar postures, and an effective clustering algorithm. Although the authors address this point in the Discussion section, the abstract may give the misleading impression that the network itself is solely responsible for the observed accuracy.

      Our tracking accuracy requires (a) a careful selection of registration problems, (b) highly accurate registration of the selected registration problems, and (c) effective clustering. We extensively discussed the importance of the choosing of the registration problems in the Results section (lines 218-234 and 318-321), Discussion section (lines 704-708), and Methods section (955-970 and 1246-1250) of our paper. We also discussed the clustering aspect in the Results section (lines 247-259), Discussion section (lines 708-712), and Methods section (lines 1162-1206). In addition, our abstract states that the BrainAlignNet needs to be “incorporated into an image analysis pipeline,” to inform readers that other aspects of image analysis need to occur (beyond BrainAlignNet) to perform tracking.
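To make point (a) concrete, "selecting registration problems" can be sketched as pairing timepoints whose posture descriptors are within a distance threshold, so the registration network only faces tractable deformations. The descriptor, threshold, and function below are illustrative assumptions, not our exact selection procedure:

```python
# Hypothetical sketch of registration-problem selection: only timepoint
# pairs with similar postures are handed to the registration network.
import numpy as np

def select_pairs(postures, threshold, max_pairs=10000):
    """postures: (T, D) array of posture descriptors; returns similar (t1, t2) pairs."""
    T = len(postures)
    pairs = []
    for t1 in range(T):
        # Distance from timepoint t1 to every later timepoint.
        d = np.linalg.norm(postures[t1 + 1:] - postures[t1], axis=1)
        for offset in np.nonzero(d < threshold)[0]:
            pairs.append((t1, t1 + 1 + offset))
            if len(pairs) >= max_pairs:
                return pairs
    return pairs
```

Clustering then stitches these pairwise registrations into full tracks, which is why no single network call needs to solve an arbitrary-pose alignment.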

      (4) The reported accuracy for neuron identification and automatic classification may be misleading, as it was assessed only on a subset of neurons labeled as "high-confidence" by human annotators. Although the authors did not disclose the exact proportion, various descriptions (such as Figure 4f) imply that this subset comprises approximately 60% of all neurons. While excluding uncertain labels is justifiable, the authors highlight the high accuracy achieved on this subset without clearly clarifying that the reported performance pertains only to neurons that are relatively easy to identify. Furthermore, they do not report what fraction of the total neuron population can be accurately identified using their methods-an omission of critical importance for prospective users.

      The reviewer raises two points here: (1) whether AutoCellLabeler accuracy is impacted by ease of human labeling; and (2) what fraction of total neurons are identified. We address them one at a time.

      Regarding (1), we believe that the reviewer overlooked an important analysis in our study. Indeed, to assess its performance, one can only compare AutoCellLabeler’s output against accurate human labels – there is simply no way around it. However, we noted that AutoCellLabeler was identifying some neurons with high confidence even when humans had low confidence or had not even tried to label the neurons (Fig. 4F). To test whether these were in fact accurate labels, we asked additional human labelers to spend extra time trying to label a random subset of these neurons (they were of course blinded to the AutoCellLabeler label). We then assessed the accuracy of AutoCellLabeler against these new human labels and found that they were highly accurate (Fig. 4H). This suggests that AutoCellLabeler has strong performance even when some human labelers find it challenging to label a neuron. However, we agree that we have not yet been able to quantify AutoCellLabeler performance on the small set of neuron classes that humans are unable to identify across datasets.

      Regarding (2), we agree that knowing how many neurons are labeled by AutoCellLabeler is critical. For example, labeling only 3 neurons per animal with 100% accuracy isn’t very helpful. We wish to emphasize that we did not omit this information: we reported the number of neurons labeled for every network that we characterized in the study, alongside the accuracy of those labels (please see Figures 4I, 5A, and 6G; Figure 4I also shows the number of human labels per dataset, which the reviewer requested). We also showed curves depicting the tradeoff between accuracy and number of neurons labeled, which fully captures how we balanced accuracy and number of neurons labeled (Figures 5D and S4A). It sounds like the reviewer also wanted to know the total number of recorded neurons. The typical number of recorded neurons per dataset can also be found in the paper in Fig. 2E.
