10,000 Matching Annotations
  1. Nov 2024
    1. Culturally and historically, “Wait for It” reflects broader themes of ambition and the fear of inadequacy. Miranda has expressed that “Burr is every bit as smart as Hamilton, and every bit as gifted, and he comes from the same amount of loss as Hamilton. But because of the way they are wired Burr hangs back where Hamilton charges forward. I feel like I have been Burr in my life as many times as I have been Hamilton” (Mead). Like Miranda, many people find themselves in situations where they have to balance ambition with caution which can lead to moments of doubt and reflection. This makes “Wait for It” resonate deeper because it’s not just about the history of Aaron Burr or Alexander Hamilton—it is a personal and universal narrative that everyone can experience.

      I really like this whole paragraph; it is a great explanation of Burr's character by comparing him to Hamilton, and you did a great job of describing how Burr's feelings of doubt can be universal.

  2. rws511.pbworks.com
    1. They use massive surveillance of our behavior, online and off, to generate increasingly accurate, automated predictions of what advertisements we are most susceptible to and what content will keep us clicking, tapping, and scrolling down a bottomless feed. So what does this algorithmic public sphere tend to feed us? In tech parlance, Facebook and YouTube are “optimized for engagement,” which their defenders will tell you means that they’re just giving us what we want. But there’s nothing natural or inevitable about the specific ways that Facebook and YouTube corral our attention. The patterns, by now, are well known. As Buzzfeed famously reported in November 2016, “top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined.” Humans are a social species, equipped with few defenses against the natural world beyond our ability to acquire knowledge and stay in groups that work together. We are particularly susceptible to glimmers of novelty, messages of affirmation and belonging, and messages of outrage toward perceived enemies. These kinds of messages are to human community what salt, sugar, and fat are to the human appetite. And Facebook gorges us on them—in what the company’s first president, Sean Parker, recently called “a social-validation feedback loop.” Sure, it is a golden age of free speech—if you can believe your lying eyes. There are, moreover, no nutritional labels in this cafeteria. For Facebook, YouTube, and Twitter, all speech—whether it’s a breaking news story, a saccharine animal video, an anti-Semitic meme, or a clever advertisement for razors—is but “content,” each post just another slice of pie on the carousel. A personal post looks almost the same as an ad, which looks very similar to a New York Times article, which has much the same visual feel as a fake newspaper created in an afternoon.

      Well said— giving very Orwellian

    1. They are now convoluting your schedule, your work, the fact that your mom just texted you that something’s going on with your grandparents — it’s just too much for your body to handle. Print media gives us the opportunity to sit down, and decide when we want to feel the emotions we want to feel, rather than letting some arbitrary algorithm decide how we should feel.

      Very good point about how all this digesting of news can get your body to feel overloaded. So when something important in your life actually happens, you don't have any space left. You are already filled to the top.

      We need to allow ourselves space to be partially empty, so we can be more observant, to allow ourselves the opportunity to be absorbent, instead of being constantly full all the time.

    2. I’ll read news, not other people’s reactions to news.

      Hmmm, this is an interesting point. When you read other people's reactions to news on twitter and other social networks, you get only clips. And it's seemingly infinite.

      I've long supported the idea of people commenting. Of people digesting something, and reacting with their own words. Alas, that never quite worked for me, because there aren't many people who read my words. Like really, I'm not sure if anyone I know regularly reads my blog posts, and those are just once a week now. Usually just 300-350 words.

      Yeah, it's mostly because there's simply too much to read. Too many people commenting about other stuff online.

      I like this notion of reading mostly in print. It's a time to focus. You don't get swept away with all the emotions of the general public. Although it raises the question of how people get their work into newspapers. You'd think I would know that after working in the newspaper business for 24+ years. Lol. Especially at a syndicate. It's mostly just timing and luck, and skill at writing.

    1. 14.1. What Content Gets Moderated# Social media platforms moderate (that is ban, delete, or hide) different kinds of content. There are a number of categories that they might ban things: 14.1.1. Quality Control# In order to make social media sites usable and interesting to users, they may ban different types of content such as advertisements, disinformation, or off-topic posts. Almost all social media sites (even the ones that claim “free speech”) block spam, mass-produced unsolicited messages, generally advertisements, scams, or trolling. Without quality control moderation, the social media site will likely fill up with content that the target users of the site don’t want, and those users will leave. What content is considered “quality” content will vary by site, with 4chan considering a lot of offensive and trolling content to be “quality” but still banning spam (because it would make the site repetitive in a boring way), while most sites would ban some offensive content. 14.1.2. Legal Concerns# Social media sites also might run into legal concerns with allowing some content to be left up on their sites, such as copyrighted material (like movie clips) or child pornography. So most social media sites will often have rules about content moderation, and at least put on the appearance of trying to stop illegal content (though a few will try to move to countries that won’t get them in trouble, like 8kun is getting hosted in Russia). With copyrighted content, the platform YouTube is very aggressive in allowing movie studios to get videos taken down, so many content creators on YouTube have had their videos taken down erroneously. 14.1.3. Safety# Another concern is for the safety of the users on the social media platform (or at least the users that the platform cares about). Users who don’t feel safe will leave the platform, so social media companies are incentivized to help their users feel safe. So this often means moderation to stop trolling and harassment. 14.1.4. Potentially Offensive# Another category is content that users or advertisers might find offensive. If users see things that offend them too often, they might leave the site, and if advertisers see their ads next to too much offensive content, they might stop paying for ads on the site. So platforms might put limits on language (e.g., racial slurs), violence, sex, and nudity. Sometimes different users or advertisers have different opinions on what should be allowed or not. For example, “The porn ban of 2018 was a defining event for Tumblr that led to a 30 percent drop in traffic and a mass exodus of users that blindsided the company.”

      Almost every social media site should be moderated, specifically for posts, unless it is advertised as unmoderated. I didn't know about quality control; I kind of thought that even if a video was boring or random, it would still be shown to people. I'm also curious about what other legal concerns social media follows. Of course, safety-concern videos should be moderated, unless it's just a video of someone trying something dumb and getting hurt; that is more educational for the person watching. I don't think potentially offensive videos should be moderated; only hurtful posts against a specific group of people should be deleted.

    1. 15.1. Types of Content Moderator Set-Ups# There are a number of different types of content moderators and ways of organizing them, such as: 15.1.1. No Moderators# Some systems have no moderators. For example, a personal website that can only be edited by the owner of the website doesn’t need any moderator set up (besides the person who makes their website). If a website does let others contribute in some way, and is small, no one may be checking and moderating it. But as soon as the wrong people (or spam bots) discover it, it can get flooded with spam, or have illegal content put up (which could put the owner of the site in legal jeopardy). 15.1.2. Untrained Staff# If you are running your own site and suddenly realize you have a moderation problem you might have some of your current staff (possibly just yourself) start handling moderation. As moderation is a very complicated and tricky thing to do effectively, untrained moderators are likely to make decisions they (or other users) regret. 15.1.3. Dedicated Moderation Teams# After a company starts working on moderation, they might decide to invest in teams specifically dedicated to content moderation. These teams of content moderators could be considered human computers hired to evaluate examples against the content moderation policy of the platform they are working for. 15.1.4. Individuals moderating their own spaces# You can also have people moderate their own spaces. For example: when you text on the phone, you are in charge of blocking numbers if you want to (though the phone company might warn you of potential spam or scams) When you make posts on Facebook or upload videos to YouTube, you can delete comments and replies Also in some of these systems, you can allow friends access to your spaces to let them help you moderate them. 15.1.5. Volunteer Moderation# Letting individuals moderate their own spaces is expecting individuals to put in their own time and labor. You can do the same thing with larger groups and have volunteers moderate them. Reddit does something similar where subreddits are moderated by volunteers, and Wikipedia moderators (and editors) are also volunteers. 15.1.6. Automated Moderators (bots)# Another strategy for content moderation is using bots, that is computer programs that look through posts or other content and try to automatically detect problems. These bots might remove content, or they might flag things for human moderators to review.

      I feel like every social media app should have some sort of moderation, whether it's trained or not. Especially Twitter: he can still keep the app as real as he wants, but it should have some sort of moderation around cyberbullying, and if it already has some, I think it needs more of it.

    1. In the contexts of social media and public debate, moderation has a meaning that is about creating limits and boundaries about what is posted to keep things working well. But this meaning of “moderation” grew out of a wider, more generic concept of moderation. You might remember seeing moderation coming up in lists of virtues in virtue ethics, back in Chapter 2. So what does moderation (the social practice of limiting what is posted) have to do with moderation (the abstract ethical quality)?

      Setting boundaries on social media isn't just about control, but about keeping conversations constructive and respectful. This balance reflects the ethical side of moderation—aiming for harmony both within ourselves and in our interactions. It’s a great reminder that the virtues we aim for personally can have meaningful applications even in something as modern as online content moderation.

    1. Charmaraman et al. argue that more training is needed to ensure that school professionals understand Title IX's requirement that policies and action ensure an equitable learning environment. As discussed in the Introduction to this volume, neglecting to protect students from gender-based discrimination can lead to school district liability, as well as negative student outcomes, so ensuring that all school personnel understand their obligations is crucial.

      The authors argue that a more comprehensive approach is needed. Training for school professionals shouldn’t just be a one-time session but a continuous and evolving part of school policy, making it clear that Title IX obligations are about creating safe, equitable environments for all students. This isn’t just about compliance with federal laws; it’s about fostering a school culture that acknowledges and actively counters biases, harassment, and discrimination. Schools that implement ongoing programs, such as weekly discussions and anti-bias training, are shown to be more inclusive and supportive.

    1. Welcome back and in this demo lesson you're going to experience the difference that EFS can make to our WordPress application architecture.

      Now this demo lesson has three main components.

      First we're going to deploy some infrastructure automatically using the one-click deployments.

      Then I'm going to step through the CloudFormation template and explain exactly how this architecture is built.

      And then right at the end you're going to have the opportunity to see exactly what benefits EFS provides.

      So to get started make sure that you're currently logged in to the general AWS account, so the management account of the organization, and as always you need to have the Northern Virginia region selected.

      Now this lesson actually has two one-click deployments.

      The first deploys the base infrastructure and the second deploys a WordPress EC2 instance, which has been enhanced to utilize EFS.

      So you need to apply both of these templates in order and wait for the first one to finish before applying the second.

      So we're going to start with the base VPC RDS EFS template first.

      So this deploys the base VPC, the Elastic File System and an RDS instance.

      Now everything should be pre-populated.

      The stack should be called EFS demo -vpc -rds -efs.

      Just scroll all the way down to the bottom, check the capabilities box and click on create stack.

      While that's going let's switch over to the CloudFormation template and just step through exactly what it does.

      So this is the template that you're deploying using the one-click deployment.

      It's deploying the Base Animals for Life VPC, an EFS file system as well as mount targets and an Aurora database cluster.

      So if we just scroll down we can see all of the VPC and networking resources used by the Base Animals for Life VPC.

      Continuing to scroll down, we'll see the subnets that this VPC contains and IP version 6 information.

      We'll see an RDS security group, a database subnet group.

      We've got the database instance.

      Then we've got an instance security group which controls access to all the resources in the VPC that we use that security group on.

      Then we have a rule which allows anything with that security group attached to it to communicate with anything else.

      We have a role that the WordPress instance will use, and note that this includes permissions on the Elastic File System.

      Then we have the instance profile that that instance uses.

      Then we have the CloudWatch agent configuration and this is all automated.

      And if we just continue scrolling down here we can see the Elastic File System.

      So we create an EFS file system and then we create a file system mount target in each application subnet.

      So we've got mount target zero which is in application subnet A which is in US East 1A.

      We've got mount target one which is in application subnet B which logically is in US East 1B.

      And then finally mount target two which is in application subnet C which is in availability zone US East 1C.

      So we create the VPC, the database and the Elastic File System in this first one click deployment.

      Now we need this to be in a create complete state before we continue with the demo lessons.

      So go ahead and pause the video, wait for this to move into a create complete status and then we can use the second one click deployment.

      Okay, that stack's now finished creating, which means we can move on to the second one-click deployment.

      Now there are actually two WordPress one click deployments which are attached to this lesson.

      We're going to use them both but for now I want you to use the WordPress one one click deployment.

      So go ahead and click on that link this will create a stack called EFS demo hyphen WordPress one.

      Everything should be pre-populated just go ahead and click on create stack.

      Now this is going to use the infrastructure provided by that first one click deployment.

      So it's going to use EFS demo hyphen VPC hyphen RDS hyphen EFS and let's quickly step through exactly what this is doing while it's provisioning.

      So this is the cloud formation template that is being used and we can skip past most of this.

      What I want to focus on is the resource that's being created so that's WordPress EC2.

      So this is using cross stack references to import a lot of the resources created in that first cloud formation stack.

      So it's importing the instance profile to use it's importing the web a subnet so it knows where to place this instance.

      And it's importing the instance security group that's created in that previous cloud formation stack.

      Now in addition to this if we look through the user data for this WordPress instance one major difference is that it's mounting the EFS file system into this folder.

      So that's /var/www/html/wp-content.

      Now if you remember from earlier demo lessons, this is the folder which WordPress uses to store its media.

      So now instead of this folder being on the local EC2 file system this is now the EFS file system.

      The EFS file system is mapped into this folder on this WordPress instance.
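      Just as an illustration of what that mapping boils down to (the real commands live in the template's user data, so treat the file system ID and mount options here as placeholders rather than a quote from the template):

        mkdir -p /var/www/html/wp-content
        echo "fs-xxxxxxxx:/ /var/www/html/wp-content efs _netdev,tls 0 0" >> /etc/fstab   # fs-xxxxxxxx = your EFS file system ID
        mount /var/www/html/wp-content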

      Other than that, everything else is the same: WordPress is installed.

      It's configured to use the RDS instance, and the cowsay custom login banner is displayed.

      It automatically configures the CloudWatch agent, and then it signals CloudFormation that it's finished provisioning this instance.

      Now what we'll end up with when this stack has finished creating is an EC2 instance which will use the services provided by this original stack.

      So let's just refresh this.

      It's still in progress, so go ahead and pause the video and wait for this stack to move into a create complete state, and then we're good to continue.

      So this stack's now finished creating, and we can move across to the EC2 console: click on Services, locate EC2, right click and open that in a new tab.

      Then click on instances running and you'll see that we have this A4L-WordPress instance.

      Now if we select that copy the IP address into your clipboard and then open that in a new tab we need to perform the WordPress installation.

      So go ahead and enter the site title the best cats and add some exclamation points.

      For username we need to use admin, then for the password go back to the CloudFormation stack, click on parameters, and we're going to use the DB password.

      So copy that into your clipboard, then go back, paste it into the password box, then put test@test.com for the email address and click install WordPress.

      Then as before we need to log in so click on login admin for username reenter that password and click on login.

      Then we need to go to posts we need to click on trash below hello world to delete that post then click on add new close down this dialogue.

      For title put the best cats ever and some exclamation points then click on the plus click gallery click upload.

      There's a link attached to this lesson with four cat images so go ahead and download that link and extract it locate those four images select them and click on open.

      And then once you've done that click on publish and publish again and then click on view post.

      Now what that's doing in the background is adding these images to the wp-content folder on the EC2 instance, but now we have that folder mounted using EFS, and so the images are being stored on the Elastic File System rather than the local instance file system.

      The cat pictures are there, but what we're going to do to validate this is go back to instances, right click on this A4L-WordPress instance, click on connect, and then connect to this instance using EC2 Instance Connect.

      Now once we've connected to the instance, run cd /var/www/html and then do an ls -la to do a full listing, and you'll see that we have this wp-content folder.

      So type cd wp-content and press enter, then we'll clear the screen and do an ls -la. Inside this folder we have plugins, themes and uploads. Go into the uploads folder and do an ls -la. Depending on when you do this demo lesson you should see a folder representing the year, so move into that folder, then a folder representing the month; again this will vary depending on when you do the demo lesson.

      Move into that folder and you should see all four of my cat images, and if you do a df -k you'll be able to see that this folder, /var/www/html/wp-content, is actually mounted using EFS, so this is an EFS file system.
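      Pulled together, the verification commands used in this part of the walkthrough look like this:

        cd /var/www/html
        ls -la                       # shows the wp-content folder
        cd wp-content/uploads
        ls -la                       # year folder, then month folder, then the uploaded images
        df -k                        # /var/www/html/wp-content is mounted from EFS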

      Now this means the local instance file system is no longer critical; it no longer stores the actual media that we upload to these posts. So what we can do is go back to CloudFormation, go to stacks, select the EFS demo hyphen WordPress one stack and then click on delete and delete that stack, and that's going to terminate the EC2 instance that we've just used to upload that media.

      We need to wait for that stack to fully delete before continuing, so go ahead and pause the video and wait for this stack to disappear. So that stack's disappeared, and now there's a second WordPress one-click deployment link attached to this lesson; remember there are two. So now go ahead and click on the second one. This one should create a stack called EFS demo hyphen WordPress two. Scroll to the bottom and click on create stack, and that's going to create a new stack and a new EC2 instance.

      So while we're doing this just close down all of these additional tabs at the top of the screen close them all down apart from the cloud formation one.

      We're going to need to wait for this to finish provisioning and move into the create complete state, so again pause the video, wait for this to change into create complete, and then we're good to continue.

      After a few minutes the WordPress two stack has moved into a create complete state. Click on services, open the EC2 console in a new tab, click on instances running and you'll see a new A4L-WordPress instance. This is a brand new instance which has been provisioned using the one-click deployment link that you've just used, so the WordPress two one-click deployment link.

      If we select this copy the public IP address into your clipboard and open that in a new tab it again loads our WordPress blog if we open the blog post.

      Now we can see these images because they're being loaded from EFS, from the file system that EFS provides, so no longer are we limited to only operating from a single EC2 instance for our WordPress application, because now there's nothing which gets stored specifically on that EC2 instance.

      Instead everything is stored on EFS and accessible from any EC2 instance that we decide to give permissions to. Now, to demonstrate this, let's go back to CloudFormation.

      Now remember attached to this lesson are two WordPress one click deployments we initially applied number one then we deleted that and applied number two so now I want you to reapply number one.

      So again click on the WordPress one one-click deployment. This again will create a new stack, this time called EFS demo hyphen WordPress one. Click on create stack. You need to wait for this to move into a create complete state, so pause the video and resume it once the stack changes to create complete. After a few minutes this stack also moves into create complete.

      Let's click on resources; we can see it's provisioned a single EC2 instance, so let's click on this to move directly to this new instance. Select it, copy this instance's IP address into your clipboard and open that in a new tab, and again we have our WordPress blog, and if we click on the post it loads those images. So now we have a number of EC2 instances; we have two EC2 instances, both with WordPress installed, both using the same RDS database.

      And both are using the shared file system provided by EFS, and it means that if any posts are edited or any images uploaded on either of these two EC2 instances, then those updates will be reflected on all other EC2 instances. This means that we've now implemented the architecture that's on screen now, and this is what's going to support us when we evolve this architecture more and add scalability in an upcoming section of the course.

      For now though, we've just been focused on the shared file system. All that remains at this point is for us to tidy up the infrastructure that we've used in this demo lesson, so close down all of these tabs. We need to be at the CloudFormation console, and we need to start by deleting EFS demo WordPress one and WordPress two, so pick either of those, click delete and then delete stack, then select the other, delete and then delete stack.

      Now we need both of these to finish deleting and then we can delete this last stack, so go ahead and pause the video, wait for both of these to disappear and then we can resume. Both of those have deleted, so now we can click the final stack, EFS demo hyphen VPC hyphen RDS hyphen EFS, so select that, delete and then delete stack, and that's everything that you need to do in this demo lesson. Once that stack's finished deleting, the account will be in the same state as it was at the start of this demo lesson.

      Now I hope you've enjoyed this demo lesson and that it's been useful what you've implemented in this demo is one more supportive step towards us moving this architecture from being a monolith through to being fully elastic.

      Now the application is in this state where we have a single shared RDS database for all of our application instances and we're also using a shared file system provided by EFS and this means that we can have one single EC2 instance we could have two EC2 instances or even 200 all of them sharing the same database and the same shared file system provided by EFS.

      Now in an upcoming section of this course we're going to extend this further by creating a launch template which automatically builds EC2 instances as part of this application architecture.

      We're going to utilize auto scaling groups together with application load balancers to implement an architecture which is fully elastic and resilient and this has been one more supportive step towards that objective.

      At this point though that's everything that you needed to do in this demo lesson so go ahead complete this video and when you're ready I look forward to you joining me in the next.

    1. Welcome back, this is part two of this lesson.

      We're going to continue immediately from the end of part one, so let's get started.

      Okay, so all three of these mount targets are now in an available state and that means we can connect into this EFS file system from any of the availability zones within the Animals for Life VPC.

      So what we need to do is test out this process and we're going to interact with this file system from our EC2 instances.

      So move back to the tab where we have the EC2 console open.

      And at this point I want you to either, and this depends on your browser, I'll either want you to right click and duplicate this tab to open another identical copy.

      If you can't do this in your browser then just open a new tab and copy and paste this URL into that tab.

      You'll end up with two separate tabs open to the same EC2 screen.

      So on the first tab we're going to connect to A4L-EFS instance A.

      So right click and then select connect.

      We're going to use instance connect.

      So make sure the username is ec2-user and then click on connect.

      Now right now this instance is not connected to this EFS file system, and we can verify that by running df -k and pressing enter.

      You'll see that nowhere here is listed this EFS file system.

      These are all volumes directly attached to the EC2 instance and of course the boot volume is provided by EBS.

      Now within Linux all devices or all file systems are mounted into a folder.

      So the first thing that we need to do to interact with EFS is to create a folder for the EFS file system to be mounted into.

      And we can do that using this command: sudo mkdir -p /efs/wp-content.

      Now the -p option just means that everything in this path will be created if it's not already.

      So this will create /efs if it doesn't already exist.

      So press enter to create that folder.

      So I'm going to clear the screen to keep this easy to see.

      And the next thing I need to do is to install a package of tools which allows this instance or specifically the operating system to interact with the EFS product.

      Now the command I'm going to use to install these tools is sudo to give us admin permissions, and then dnf which is the package manager for this operating system.

      And then a space, -y to automatically acknowledge any prompts, and then a space, and then install because I want to install a package, and then a space.

      And then the name of the tools that I want to install is amazon-efs-utils.

      So this is a set of tools which allows this operating system to interact with EFS.

      So go ahead and press enter and that will install these tools and then we can configure the interaction between this operating system and EFS.
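      Collected together, the two commands run on this instance so far are:

        sudo mkdir -p /efs/wp-content          # folder the EFS file system will be mounted into
        sudo dnf -y install amazon-efs-utils   # tools for mounting EFS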

      Again I'm going to clear the screen to keep this easy to see and I want to mount this EFS file system in that folder that we've just created.

      But specifically I want it to mount every time the instance is restarted.

      So of course that means we need to add it to the FSTAB file.

      Now if you remember this file from elsewhere in the course, it's contained within the /etc folder.

      So we need to move into that folder with cd /etc, and the file is called fstab.

      So we need to run sudo to give us admin permissions, and then nano which is a text editor, and then the name of the file, which is fstab.

      So press enter and the file will likely have only one or two lines which is the root and/or boot volume of this instance.

      So let's just move to the end because we're going to add a new line and this is contained within the lesson commands document but we're going to paste in this line.

      So this line tells us that we want to mount this file system ID, so file-system-id:/.

      We want to mount that into this folder, so /efs/wp-content.

      We tell it that the file system type is EFS.

      Remember EFS is actually based on NFS which is the network file system but this is one provided by AWS as a service and so we use a specific AWS file system which is EFS.

      And the support for this has been installed by that tools package which we just installed.

      Now the exact functionality of this is beyond the scope of this course but if you do want to research further then go ahead and investigate exactly what these options do.

      What we need to do though is to point it at our specific EFS file system.

      So this is this component of the line, all the way from the start here to this forward slash.

      So to get the file system ID we need to go back to the EFS console and we need to copy down this full file system ID and yours will be different so make sure you copy your own file system ID into the clipboard.

      Then go back here and select the colon and then delete all the way through to the start of this line.

      And once you've done that, paste in your file system ID; what it should look like is the file system ID, then a colon, and then a forward slash.

      So at this point we need to save this file so control O to save and then enter and control X to exit.
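      For reference, the line you end up with has this general shape (your own file system ID replaces the placeholder, and the exact mount options are the ones given in the lesson commands document; the options shown here are just a commonly used amazon-efs-utils example, not a quote from that document):

        fs-xxxxxxxx:/ /efs/wp-content efs _netdev,tls 0 0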

      Again I'm going to clear the screen to make it easier to see.

      Then I'll run a df -k, and this is what the file systems currently attached to this instance look like.

      Then we're going to mount the EFS file system into the folder that we've created and the way that we do this is with this command.

      So sudo mount, and then we specify the name of the folder that we want to mount.

      Now the way that this works is that this uses what we've just defined in the FSTAB file.

      So we're going to mount into this folder whatever file system is defined in that file.

      So that's the EFS file system and if we press enter after a few moments it should return back to the prompt and that's mounted that file system.

      There we go, we're back at the prompt, and if we do a df -k again we'll see that now we've got this extra line at the bottom.

      So this is the EFS file system mounted into this folder.
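      In other words:

        sudo mount /efs/wp-content   # mounts whatever /etc/fstab defines for this folder
        df -k                        # the EFS file system now appears against /efs/wp-content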

      Now to show you that this is in fact a network file system let's go ahead and move into that folder using this command.

      And now that we're in that folder we're going to create a file.

      So we're going to use sudo so that we have admin privileges, and then we're going to use the command touch, which if you remember from earlier in the course just creates an empty file.

      And we're going to call this file amazingtestfile.txt.

      Go ahead and press enter, and then do an ls -la, and you'll see that we now have this file created within this folder.

      And while we're creating it on this EC2 instance it's actually put this file on a network file system.
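      As commands, that sequence is:

        cd /efs/wp-content
        sudo touch amazingtestfile.txt
        ls -la                       # the new file is listed, stored on the network file system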

      Now to verify that let's move back to the other tab that we have open to the EC2 console the one that's still on this running instances screen.

      And now let's go ahead and connect to instance B.

      So right click on instance B select connect again instance connect verify the username is as it should be and click on connect.

      So now we're on instance B.

      Let's do a df -k to verify that we don't currently have any EFS file system mounted.

      Next we need to install the EFS tools package so that we can mount this file system.

      So let's go ahead and install that package clear the screen to make it easier to see then we need to create the folder that we're going to be mounting this file system into.

      We'll use the same command as on instance A.

      Then we need to edit the FSTAB file to add this file system configuration.

      So we'll do that using this command: sudo nano /etc/fstab, and press enter.

      Remember this is instance B so it won't have the line that we added on instance A.

      So we need to go down to the bottom paste in this placeholder and then we need to replace the file system ID at the start with the actual file system ID.

      So delete this leaving the colon and forward slash go back to the EFS console copy the file system ID into your clipboard.

      Move back to this instance paste that in everything looks good.

      Save that file with control O, press enter, exit with control X, then we're back at the prompt and clear the screen.

      We'll use the sudo mount /efs/wp-content command again to mount the EFS file system onto this instance, and again it's using the configuration that we've just defined in the fstab file, so press enter.

      After a few moments you'll be placed back at the prompt, and we can verify whether this is mounted with df -k.

      It has mounted by the looks of things; it's at the bottom.

      So now we move into that folder with cd /efs/wp-content/ and press enter.

      We're now in that folder, and if we do a listing with ls -la, what we'll see is the amazingtestfile.txt file which was created on instance A.
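      As a compact recap, the instance B side of this is (again with your own file system ID in the fstab entry, and the mount options taken from the lesson commands document):

        sudo dnf -y install amazon-efs-utils
        sudo mkdir -p /efs/wp-content
        sudo nano /etc/fstab         # add the fs-xxxxxxxx:/ /efs/wp-content efs ... line
        sudo mount /efs/wp-content
        cd /efs/wp-content
        ls -la                       # amazingtestfile.txt, created on instance A, is visible here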

      So this proves that this is a shared network file system where any files added on one instance are visible to all other instances.

      So EFS is a multi user network based file system that can be mounted on both EC2 Linux instances as well as on premises physical or virtual servers running Linux.

      Now this is a simple example of how to use EFS. We've done everything that we need to do in this demo lesson, so we just need to clean up all of the infrastructure that we've used to do that.

      Go back to the EFS console; we're going to go ahead and delete this file system. We should already have it selected, so just select delete, and you'll need to confirm that process by pasting in the file system ID.

      So go ahead and put your file system ID and then select confirm.

      Now that can take some time to delete and you'll need to wait for this process to complete.

      Once it has completed we're going to go ahead and move across to the cloud formation console.

      You should still have this open in a tab if you don't just type cloud formation in the search box at the top and then move to the cloud formation console.

      You should still have the stack name of implementing EFS which is the stack you created at the start with the one click deployment.

      Go ahead and select this stack then click on delete and confirm that deletion and once that finishes deleting that's all of the infrastructure gone that we've created in this demo lesson.

      So I hope this has been a fun and enjoyable demo lesson where you've gained some practical experience of working with EFS at this point though that is everything that you need to do in this demo lesson.

      So go ahead and complete the video and when you're ready I'll look forward to you joining me in the next.

    1. Welcome back and in this demo lesson I want to give you some abstract practical experience of using the Elastic File System or EFS.

      Now we're going to need some infrastructure.

      Before we apply that as always make sure that you're logged into the general AWS account, so the management account of the organization and you'll need the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment link so go ahead and click that.

      This is going to provision some infrastructure.

      It's going to take you to the quick create stack screen and everything should be pre-populated.

      You'll just need to scroll to the bottom, check the box beneath capabilities and then click on create stack.

      You're also going to be typing some commands within this demo lesson so also attached to this lesson is a lesson commands document.

      Go ahead and open that in a new tab.

      So this is just a list of the commands that we're going to be using during the demo lesson and there are some placeholders such as file system ID that you'll need to replace as we go but make sure you've got this open for reference.

      Now we're going to need this stack to be in a create complete state before we continue with the demo lesson so go ahead pause the video and resume it once your stack moves into a create complete state.

      Okay so the stacks now moved into a create complete status and what this has actually done is create the animals for life base VPC as well as a number of EC2 instances.

      So if we go to the EC2 console and click on instances running, you'll note that we've created A4L-EFS instance A and A4L-EFS instance B, and we're going to be creating an EFS file system and mount points, then mounting that on both of these instances and interacting with the data stored on that file system.

      We're going to get you the experience of working with a network shared file system so let's go ahead and do that.

      So to get started we need to move to the EFS console so in the search box at the top just type EFS and then open that in a brand new tab.

      We're going to leave this tab open to the instances part of the EC2 console because we're going to come back to this very shortly.

      So let's move across to the EFS console that we have open in a separate tab and the first step is to create a file system so a file system is the base entity of the elastic file system product and that's what we're going to create.

      Now you've got two options for setting up an EFS file system you can use this simple dialogue or you can click on customize to customize it further.

      So if we're using the simple dialogue we'd start by naming the file system so let's say we use A4L - EFS and then you'd need to pick a VPC for this file system to be provisioned into and of course we'd want to select the animals for life VPC.

      Now we want to customize this further we don't want to just accept these high-level defaults so we need to click on customize.

      This is going to move us to this user interface which has many more options so we've still got the A4L - EFS name for this file system.

      Now for the storage class we're going to pick standard which means the data is replicated across multiple availability zones.

      If you're doing this in a test or development environment or you're storing data which is not important then you can choose to use one zone which stores data redundantly but only within a single AZ.

      Now again in this demonstration we are going to be using multiple availability zones so make sure that you pick standard for storage class.

      You're able to configure automatic backups of this file system using AWS backup and if you're taking an appropriate certification course this is something which I'll be covering in much more detail.

      You can either enable this or disable it obviously for a production usage you'd want to enable it but for this demonstration we're going to disable it.

      Now EFS as I mentioned in the theory lesson comes with different classes of storage and you can configure lifecycle management to move files between those different storage classes so if you want to configure lifecycle management to move any files not accessed for 30 days you can move those into the infrequent access storage class and you can also transition out of infrequent access when anything is accessed so go ahead and select on first access for transition out of IA.

      So in many ways this is like S3 with the different classes of storage for different use cases.

      When you're creating a file system you're able to set different performance and throughput modes.

      For throughput mode you can choose between bursting and enhanced.

      If you pick enhanced you're able to select between elastic and provisioned.

      I've talked more about these in the theory lesson.

      We're going to pick bursting.

      Now for performance you can choose between general purpose and max I/O.

      General purpose is the default and rightfully so and you should use this for almost all situations.

      Only use max I/O if you want to scale to really high levels of aggregate throughput and input output operations per second so only select it if you absolutely know that you need this option.

      You've also got the ability to encrypt the data on the file system and if you do encrypt it it uses KMS and you need to pick a KMS key to use.

      Of course this means that in order to interact with objects on this file system permissions are needed both on the EFS service itself as well as the KMS key that's used for the encryption operation.

      Now this is something that you will absolutely need to use for production usage but for this demonstration we're going to switch it off.

      We won't be setting any tags for this file system so let's go ahead and click on next.

      You need to configure the network settings for this file system so specifically the mount targets that will be created to access this file system.

      Now best practice is that any availability zones within a VPC where you're consuming the services provided by EFS you should be creating a mount target so in our case that's US - East - 1A, 1B and 1C.

      So we're going to go through and configure this so first let's delete all of these default security group assignments.

      Every mount target that you create will have an associated security group so we'll be setting these specifically.

      For now though we need to choose the application subnet in each of these availability zones so in the top drop-down which is US - East - 1A I'm looking for app A so go ahead and do the same.

      In US - East - 1B I want to select the app B subnet and then in US - East - 1C logically I'll be selecting the app C subnet so that's app A, app B and app C.

      Now for security groups, the CloudFormation one-click deployment has provisioned this instance security group, and by default this security group allows all connections from any entities which have it attached, so this is a really easy way that we can allow our instances to connect to these mount targets.

      So for each of these lines go ahead and select the instance security group. You'll need to do that for each of the mount targets, so we'll do the second one and then we'll do the third one, and that's all of the network configuration options that we need to worry about, so click on next.

      It's here where you can define any policies on the file system: you can prevent root access by default, you can enforce read-only access by default, you can prevent anonymous access, or you can enforce encryption in transit for all clients connected to this EFS file system.

      So for any clients that connect to the mount targets to access the file system, you can ensure that access uses encryption in transit, and if you're using this in production you might want to select at least this last option to improve security.

      For this demo lesson we're not going to use any of these policy options, nor are we going to define a custom policy in the policy editor; instead we'll just click on next.

      At this point we just need to review that everything's to our satisfaction. Everything looks good, so we're going to scroll down to the bottom and just click on create.

      Now in order to continue with this demo lesson we're going to need both the file system and all of its mount targets, so go into the file system, click on network and you'll see three mount targets being created.

      All three of these need to be ready before we can continue the demo lesson, so this seems like a great time to end part one of this demo lesson. Go ahead and finish this video, and then when all of these mount targets are ready to go, you can start part two.
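      The lesson does all of this through the console. Purely as a reference point, a roughly equivalent set of AWS CLI calls would look something like the following; the IDs, names and option values are placeholders and should be checked against the CLI documentation rather than treated as part of the lesson:

        aws efs create-file-system --performance-mode generalPurpose --throughput-mode bursting --tags Key=Name,Value=A4L-EFS
        aws efs create-mount-target --file-system-id fs-xxxxxxxx --subnet-id subnet-appA --security-groups sg-instance
        # repeat create-mount-target for the app B and app C subnets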

    1. This passage speaks volumes about the “coming-out” experience and how it is portrayed in American film and media. Oftentimes when non-LGBTQ+ people hear the term “coming-out,” they always attribute it to this giant event that happens once in a person's life. But in reality, coming-out as queer, gay, lesbian, trans, or non-binary and gender nonconforming is something that occurs repetitively and continuously for many LGBTQ+ youth. When Ngo says that “you’re potentially coming-out whenever you meet someone new” he attributes it to his own K-12 experiences in middle and high school. Ngo believes that many of his peers and classmates would “hint” that they’re queer, and they would come-out at different times and with different people who fall under the same spectrum. Ngo even explains how coming-out became a recurring activity with his own mother, “I will say that, in terms of coming out in middle and high school, it’s definitely true that there is no set coming out experience. I told my Mom, and she didn’t believe me, I told her later and she didn’t believe me. I told her two years later and she didn’t believe me” (Ngo, 2022). In his book, Mayo explains how LGBTQ+ youth lack support from family members within their immediate household and school environments, “these [lack of supports] may include a lack of role models in schools, discomfort with parental involvement or, especially in the case of children with LGBTQ parents, difficult relations between school and family” (Mayo 2014). And this ties back to that idea of a continuous coming-out experience

      This passage talks about the "coming-out" experience for LGBTQ+ people and how it is shown in American movies and media. Many non-LGBTQ+ people think of coming out as a big event that happens once, but for many LGBTQ+ youth, it is something that happens over and over again throughout their lives. Ngo mentions that you might come out every time you meet someone new, based on his experiences in middle and high school. He observes that his classmates often hinted at being queer and came out at different times to different people. He shares a personal story about coming out to his mother multiple times, saying, "I told my Mom, and she didn't believe me... I told her two years later, and she didn't believe me." In his book, Mayo discusses how LGBTQ+ youth often don't get support from their families or schools. This lack of support can include not having role models in schools or having tough relationships between families and schools. This all connects back to the idea that coming out is not just a one-time thing but a continuous process for many LGBTQ+ individuals.

    1. That development time acceleration of 4 days down to 20 minutes… that’s equivalent to about 10 years of Moore’s Law cycles. That is, using generative AI like this is equivalent to computers getting 10 years better overnight. That was a real eye-opening framing for me. AI isn’t magical, it’s not sentient, it’s not the end of the world nor our saviour; we don’t need to endlessly debate “intelligence” or “reasoning.” It’s just that… computers got 10 years better.

      To [[Matt Webb]], the project using GPT-3 to extract data from web pages saved him 4d of work (compared to 20 mins coding up the GPT-3 instructions, and not counting that GPT-3 then ran overnight). He says that's about 10yrs of Moore's law happening to him all at once. 'Computers got 10yrs better' is an enticing thought and framing. It depends on the use case probably; others will lose 10 yrs of their time making sense of generated nonsense. (Compare the #pke24 experiments I did with text generation: none of it was usable because enough was wrong to not be able to trust anything.) For specific niches it's probably true: [[Waar AI al redelijk goed in is 20201226155259]], turning the issue into the time needed to spot those niches for yourself.
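      Rough arithmetic behind that framing, assuming four 8-hour working days and a Moore's-law doubling roughly every 18 months: 4 × 8 × 60 = 1920 minutes versus 20 minutes is about a 96× speed-up; log2(96) ≈ 6.6 doublings; 6.6 × 1.5 ≈ 10 years.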

    1. I also want to point out that despite the many challenges we face, our lives are no doubt much easier than those without our many privileges of skin color, social class, and language

      It's sad but it's true. These challenges come from anything and anywhere, and it makes you question what they are being taught, or why they do things to fit in. It's hard seeing how damaging these situations really are, where kids will forever remember how they were treated growing up. It's even worse when teachers don't put a stop to the way kids talk.

    1. d just a song. This song

      I understand what you're trying to say here, but the grammar is getting in your way a bit - it's a bit jarring to go from saying that it is more than a song in the previous sentence to referring to it as "this song" in the next sentence

    1. In conclusion,

      There's nothing wrong with this kind of transition, but it's generally not needed - just the phrase "'Love Wins All' is more than just a song" already signals to your reader that you're wrapping up your discussion

    2. “Love Wins All” is a response to this dark reality, choosing to believe that love can overcome even in these dark moments.

      It's grammatically a bit unclear what "these dark moments" is referring to here - is it what was discussed at the beginning of the paragraph (Sulli's death) or what was just discussed in the previous couple of sentences (body shaming of female idols, etc.)?

    1. I think there are, you know, literally hundreds or thousands of discoveries to be made quite accidentally like that, just from walking around with an infrared detector or ultraviolet, and it's not that people don't have cool stuff set up in labs, I mean it's not like we've never seen an ultraviolet or infrared, but doing it as a citizen scientist and just walking around in the world, I think we'll pick up on lots of stuff

      for - sensory substitution - citizen science - David Eagleman

    2. little camera on glasses and you turn it into an audio image um and there are very sophisticated examples of this now one is called The Voice v i and it's it's an app that you can just download on your phone

      for - Deep Humanity - BEing journey - example - umwelt - visual to audio app - The Voice - David Eagleman - to - search - Google - android app "The Voice" translates images into audio signal - https://hyp.is/OJKKmJ1MEe-TAp_w_0SK_Q/www.google.com/search?q=android+app+%22The+Voice%22+translates+images+into+audio+signal&sca_esv=6fa4053b1bfce2fa&sxsrf=ADLYWIK_UqZZZ9OCRCwH4D6FoSaykbMTpQ:1731013461104&ei=VSstZ4eCBqi8xc8P5KP_kAU&ved=0ahUKEwjHgM3Tj8uJAxUoXvEDHeTRH1IQ4dUDCA8&uact=5&oq=android+app+%22The+Voice%22+translates+images+into+audio+signal&gs_lp=Egxnd3Mtd2l6LXNlcnAiO2FuZHJvaWQgYXBwICJUaGUgVm9pY2UiIHRyYW5zbGF0ZXMgaW1hZ2VzIGludG8gYXVkaW8gc2lnbmFsMggQABiABBiiBDIIEAAYgAQYogQyCBAAGIAEGKIEMggQABiABBiiBDIIEAAYgAQYogRI2xdQpglYjRJwAXgCkAEAmAGZA6ABmQOqAQM0LTG4AQPIAQD4AQGYAgOgAqADwgIKEAAYsAMY1gQYR8ICBBAAGEeYAwDiAwUSATEgQIgGAZAGCJIHBTIuNC0xoAewBA&sclient=gws-wiz-serp

    1. we are afraid - with good reason - that our political class is wholly incapable of seizing those tools and implementing those plans

      It's easier to just say we're fucked, out of fear that our elites will never be able to meet the Earth's demands.

    1. "The sky is the color of a bruised banana, swollen and pale. The air is thick, with moisture and heat, like a fever. The land, the house, the trees—everything seems to be holding its breath. Even China, the dog, has stopped barking at the chickens, and she lies on the porch, her ears flat against her skull, her body still, like she knows something is coming. I don’t know how much longer we have. I feel it in my bones, the storm waiting to roll in. I taste it in the back of my throat, like salt.I look at Daddy and see the way he’s been getting stiffer lately, his hands like claws, his face drawn tight and lost. He’s the same man he always was, but something in him has changed, has gone under. I think maybe it’s not just the storm, not just the way things are in the world, but the way he is. The way the sickness has taken him, little by little, until all that’s left is a shadow of who he used to be."We need to get ready for the storm," I say, and he looks at me as if he doesn’t understand, as if I’m not the one who’s supposed to be taking care of things. I have to remind him. I have to remind him that there’s a storm coming, that we’re running out of time, that the world is about to change."We’ll be ready," he says, but his voice is weak. He’s not sure anymore. He doesn’t even believe in his own hands, in the way they’re supposed to work. He can’t hold anything anymore, not the way he used to. And I wonder how long it’ll be before he can’t even stand. I can’t stop thinking about it, about the way his body’s already failing him, about how he’s already slipping away from us."

      Salvage the Bones passage

    1. Even if you do well in comparison with others, you may be artificially inflated from this comparison. It’s a short-lived boost of ego if you win the comparison — easily knocked down.

      This is true! You shouldn't be basing your self-esteem, or worse, your self-worth, on where you are compared to other people. While I think it's good to know where you stand relative to other people (because when you are better at a skill than others, you can use that as an advantage), it's also important to remain humble. I always remind myself that anyone can become as good as or even better than you--it's just a matter of time and effort. (This applies to you too!)

    1. if (*flags & FOLL_NOFAULT)
               return -EFAULT;
       if (*flags & FOLL_WRITE)
               fault_flags |= FAULT_FLAG_WRITE;
       if (*flags & FOLL_REMOTE)
               fault_flags |= FAULT_FLAG_REMOTE;
       if (*flags & FOLL_UNLOCKABLE) {
               fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_KILLABLE;
               /*
                * FAULT_FLAG_INTERRUPTIBLE is opt-in. GUP callers must set
                * FOLL_INTERRUPTIBLE to enable FAULT_FLAG_INTERRUPTIBLE.
                * That's because some callers may not be prepared to
                * handle early exits caused by non-fatal signals.
                */
               if (*flags & FOLL_INTERRUPTIBLE)
                       fault_flags |= FAULT_FLAG_INTERRUPTIBLE;
       }
       if (*flags & FOLL_NOWAIT)
               fault_flags |= FAULT_FLAG_ALLOW_RETRY | FAULT_FLAG_RETRY_NOWAIT;
       if (*flags & FOLL_TRIED) {
               /*
                * Note: FAULT_FLAG_ALLOW_RETRY and FAULT_FLAG_TRIED
                * can co-exist
                */
               fault_flags |= FAULT_FLAG_TRIED;
       }
       if (unshare) {
               fault_flags |= FAULT_FLAG_UNSHARE;
               /* FAULT_FLAG_WRITE and FAULT_FLAG_UNSHARE are incompatible */
               VM_BUG_ON(fault_flags & FAULT_FLAG_WRITE);
       }

       ret = handle_mm_fault(vma, address, fault_flags, NULL);

       if (ret & VM_FAULT_COMPLETED) {
               /*
                * With FAULT_FLAG_RETRY_NOWAIT we'll never release the
                * mmap lock in the page fault handler. Sanity check this.
                */
               WARN_ON_ONCE(fault_flags & FAULT_FLAG_RETRY_NOWAIT);
               *locked = 0;

               /*
                * We should do the same as VM_FAULT_RETRY, but let's not
                * return -EBUSY since that's not reflecting the reality of
                * what has happened - we've just fully completed a page
                * fault, with the mmap lock released. Use -EAGAIN to show
                * that we want to take the mmap lock _again_.
                */
               return -EAGAIN;
       }

       if (ret & VM_FAULT_ERROR) {
               int err = vm_fault_to_errno(ret, *flags);

               if (err)
                       return err;
               BUG();
       }

       if (ret & VM_FAULT_RETRY) {
               if (!(fault_flags & FAULT_FLAG_RETRY_NOWAIT))
                       *locked = 0;
               return -EBUSY;
       }

      Seems it's mostly translating the GUP FOLL_* flags into the corresponding FAULT_FLAG_* fault flags, then calling handle_mm_fault() and mapping its result (completed / error / retry) back into a return code.

    1. “What’s more, we can see that the Android tweets are angrier and more negative, while the iPhone tweets tend to be benign announcements and pictures. …. this lets us tell the difference between the campaign’s tweets (iPhone) and Trump’s own (Android).”

      It's crazy that just knowing which phone each tweet was posted from was enough to tell all of this. I feel like this really reinforces the importance of being careful about what you put out there, as even the smallest details are enough to reveal a lot.

    1. Since an alkene protonation step is endergonic, the stability of the more highly substituted carbocation is reflected in the stability of the transition state leading to its formation.

      Alkene Protonation Step: In this context, "alkene protonation" means adding a proton (H⁺) to an alkene (a molecule with a carbon-carbon double bond). This addition creates a carbocation (a positively charged carbon atom) as an intermediate.

      Endergonic Reaction: An endergonic reaction is one that requires energy input to proceed; it’s not energetically favorable on its own. When an alkene is protonated, forming a carbocation intermediate, energy is required to reach this unstable, high-energy state.

      Carbocation Stability and Transition State: When forming a carbocation, the reaction passes through a transition state, which is a high-energy state that comes just before the carbocation actually forms. The energy level of this transition state largely depends on how stable the resulting carbocation will be.

      More Substituted Carbocations: A carbocation is more stable when it's more substituted (i.e., when the positively charged carbon is bonded to more alkyl groups). This is because alkyl groups help stabilize the positive charge through electron-donating effects.

      Linking Stability to Transition State: Because a more substituted carbocation is more stable, the transition state leading to its formation is also more stable. This means it requires slightly less energy to reach this transition state compared to forming a less substituted, less stable carbocation.

      So, in summary: Protonating an alkene to form a carbocation is an energy-requiring (endergonic) step. However, if the carbocation formed is highly substituted and stable, the transition state (which precedes the carbocation) will also be relatively stable, making it easier for the reaction to proceed in that direction.
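
      One way to make this relationship concrete (this is the Hammond postulate / Bell-Evans-Polanyi idea, stated here as an illustration rather than something from the passage above): for an endergonic step the transition state is "late" and resembles the carbocation, so the activation energy tracks the reaction energy roughly linearly,

      $$ E_a \approx E_0 + \alpha\,\Delta H_{\mathrm{rxn}}, \qquad 0 < \alpha < 1 $$

      A more substituted carbocation makes \(\Delta H_{\mathrm{rxn}}\) for the protonation step less positive, which lowers \(E_a\) for that pathway and is why the more substituted carbocation forms faster (Markovnikov addition).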

    1. Being aware of the amygdala’s ability to take over is crucial foranyone who’s struggling with anxiety. It’s a reminder that the brainis hardwired to allow the amygdala to seize control in times of danger.And because of this wiring, it’s difficult to directly use reason-basedthought processes arising in the higher levels of the cortex to controlamygdala-based anxiety. You may have already recognized that youranxiety often doesn’t make sense to your cortex, and that your cortexcan’t just reason it away.

      IMP


    1. Adam talks to himself ... "pincihesing (ki) Bill" .. yo, this one--wake up, make it better. Most excelsior.

      @bgates ..,.. "uvrin?"

      ringing the idea ... the metaphorical idea; where's the universal mandate for a "guild of programmer's that program away the need to fight?"

      have you played starcraft galactica?

      do you think it's just the Bureau or .. the KGB or Courtney Cox herself; who has it?

    1. How could I be so stupid? One night, too much wine, that stupid red dress. I can't believe I let this happen, that I let him touch me at all. What is wrong with me? I used to have dreams. I used to feel like I belonged to myself. There was a time when I had this picture in my mind of a life that was mine to build. But that girl is gone, it's just me, the diner, this house, and him. And now there's this thing growing inside me, tying me here even tighter. What happened to the girl who used to be mine? She's trapped, and so is this baby.

      This is PERFECT!!

    2. Earl’s

      I know it's supposed to be a diary, so it is hard to actually introduce the character, but the reader has no idea who that is. You could maybe say something like, "My husband has been acting strange. Earl (something he does)..." That might just contextualize it better.

    1. Welcome back and in this demo lesson you're going to get some experience of restoring a snapshot which you've previously taken from Aurora in provisioned mode into Aurora running in serverless mode.

      Now before we begin as always make sure that you're logged in to the general AWS account, so the management account of the organization and you'll need to have Northern Virginia selected.

      Now attached to this lesson is a one-click deployment link and I'll need you to go ahead and open that to start the process.

      Now once you've got this open we're going to need the Aurora snapshot name that you created in a previous demo.

      So click on the services drop down and locate RDS.

      It's probably going to be in the recently visited section if not you can search in the box at the top but go ahead and open that in a new tab.

      Go to that tab and once it's loaded click on snapshots and you should have these two snapshots in your account.

      The first snapshot is a4lwordpress-with-cat-post-mysql57.

      The other snapshot, the one that we're going to use, is this one: a4lwordpress-aurora-with-cat-post.

      Go ahead and select that entire snapshot name and copy that into your clipboard because we're restoring an Aurora provisioned snapshot into Aurora serverless.

      We're not performing a migration, we're performing a restore, and so we don't need the snapshot ARN; we need the snapshot name.

      So go back to the CloudFormation stack. Everything should be pre-populated, but there's a box where you need to paste in the snapshot name that you'll be restoring, so that's this one. Paste that in, check the acknowledgement box at the bottom under capabilities, and then create stack.

      Now that process can take up to 45 minutes to complete; sometimes it can be a little bit quicker. While that's working, we're going to follow the same process through manually, but we're going to stop before provisioning the Aurora serverless cluster.

      So go back to the RDS tab, make sure that you still have this snapshot selected, then click on actions and then restore snapshot, and I want to step through the options available when restoring an Aurora provisioned snapshot into Aurora serverless.

      So these are the options you'll have when you're restoring an Aurora provisioned snapshot. You'll see a list of compatible engines, so anything compatible with the snapshot that you're restoring; in our case it's only MySQL compatibility. Then you'll have to select your capacity type. Now it defaults to provisioned, but we want to restore to a serverless cluster, so we'll select serverless.

      You need to select the version of Aurora serverless that you're restoring to, and again it's only going to show you compatible versions, in this case only 2.07.1, and that's why I was so precise with the version numbers when doing the demos earlier in this section.

      Now under database identifier, it's here where we would need to provide a unique identifier within this region, inside this account, for what we're restoring, so we might use a4lwordpress-serverless.

      We then need to provide connectivity information, so we'd click in the VPC drop-down and make sure we select the Animals for Life VPC.

      We'd still need to provide a database subnet group to use. Now currently there isn't one that exists in the account because the CloudFormation template is still provisioning, but we'd need to choose a relevant subnet group in this box. We'd also need to choose a VPC security group, which controls access to this database cluster.

      Then we have additional configuration, and this is a feature which I'm going to be talking about in a dedicated lesson if you're doing the developer or sysops associate courses. This is an API which can be provisioned to give access to the data within this Aurora serverless cluster, and it can do so in a way which is very lightweight, which makes it ideal for use with things like serverless applications which prefer a connectionless architecture. So this is something that you will use if you want to use, for example, Aurora serverless with a serverless application based on Lambda.

      Now something unique to Aurora serverless is the concept of capacity units, and I've talked about these in the theory lesson where I talk about Aurora serverless. These are the units of database service which the Aurora serverless cluster can make use of, and you're able to set a minimum capacity unit and a maximum capacity unit, and this provides a range of resources that this cluster can move between based on load on the cluster. So as I've talked about in the theory lesson, it will automatically provision more capacity or less capacity between these two values based on load.

      Now you have additional options for scaling, and one that I'll be demonstrating a little bit later on in this demo lesson is how you can actually pause the compute capacity after a consecutive number of minutes of inactivity. This, as long as your application supports it, can actually reduce the cost of running a database platform down to almost zero, because you won't have any compute capacity billed when the Aurora serverless cluster isn't in use, and again I'll be demonstrating that very shortly in this demo lesson.

      You're able to set encryption options just like with other forms of RDS, and then under additional configuration you can also configure backup options.

      Now these options are obviously based on restoring a snapshot, and you have a similar yet more extensive set of options if you're creating an Aurora serverless cluster from scratch. So if we select Amazon Aurora and then we go down and select the serverless capacity type, then obviously we can select from different versions and we have a wider range of options that we can set: the cluster identifier, the admin username and password, the capacity settings, the connectivity options, plus additional configuration options around creating a database, controlling the parameter group, customizing backup options, encryption, and enabling deletion protection. So whether you're restoring a snapshot or creating an Aurora serverless cluster from scratch, these options are similar, but you have access to slightly more configuration if you're creating a brand new cluster, because when you're restoring a snapshot many of these configuration items are taken from that snapshot.

      At this point we're not going to actually create the cluster manually, so I'm going to cancel out of that and refresh, and as you can see we already have our Aurora serverless DB cluster and it's in an available state.

      So let's go back to our CloudFormation stack and refresh. It's still in a create in progress state for the stack itself, and in order to continue with this demo lesson we're going to need this to be in a create complete state. So go ahead, pause the video, wait for your stack to move into a create complete state, and then we can continue.

      So this stack's now moved into a create complete state and we're good to continue.

      So the first thing that I want to draw your attention to: if we move back to the RDS console and then just refresh, you'll see that this cluster is currently using two Aurora capacity units. Let's go inside the cluster, and we'll be able to see that it's available, it's currently using two capacity units, but otherwise it looks very similar to a provisioned Aurora cluster.

      Now what we're going to do is click on services, open the EC2 console in a new tab, and go to instances running; you should see a single WordPress instance. So select that, copy the public IP version 4 address into your clipboard, making sure not to use this open address, and open that in a new tab. You'll see that it loads up the WordPress application, and it still has the post within it that you created in the previous demo lesson, the best cats ever. And if you open this post, you'll see that it doesn't have any of the attached images, because remember they're not stored in the database; they're stored on the local instance file system, and that's something that we're going to rectify in an upcoming section of the course, either called advanced storage or network storage depending on what course you're currently taking. But I just wanted to demonstrate that all we've done is restore an Aurora provisioned snapshot into an Aurora serverless cluster, and it still operates in the same way as Aurora provisioned.

      But this is where things change. If we go back to the RDS console, we know that this Aurora serverless cluster makes use of Aurora capacity units, or ACUs, and currently it's set to move between one and two Aurora capacity units. The reason it's currently set to two is because we've just used it: we've just restored an existing snapshot into this cluster, and that operation comes with a relatively high amount of overhead, so it needs to go to the two capacity unit maximum in order to give us the best performance.

      Now what we're going to see over the next few minutes, if we just sit here and keep refreshing this screen, is that because we're not using our application, first it should drop down from two capacity units to one capacity unit, and that will of course reduce the costs of running this Aurora serverless cluster. After a certain amount of time it's going to go from one capacity unit to zero capacity units, because it's going to pause the cluster due to no usage. We've got this configured, if I click on the configuration tab, to pause the compute capacity after a number of consecutive minutes of inactivity, and it's set to five minutes. So after five minutes of no usage on this database, it's actually going to pause the compute capacity, and we won't be incurring any costs for the compute side of this Aurora serverless cluster. That's one of the real benefits of Aurora serverless versus all of the other types of RDS database engine.

      So let's just go ahead and refresh this and see if it's already changed from two capacity units. It's currently still on two, so let's select logs and events and refresh. We don't see any events currently, so this means that we've had no scaling events on this database. But if we click on monitoring, you'll see how the CPU utilization has decreased from around 25% to just over 5%, and the database connection count has reduced from the one when we just accessed the application back down to zero.

      After a few refreshes we'll see that it either decreases from two capacity units down to one, or it will go straight to zero if we reach this five minute timer before it performs that scaling event to reduce from two to one. In our case we've skipped the point of having one capacity unit: we've reached that five minute threshold where it pauses the compute capacity, and so it's gone straight down to zero. Your experience might vary; it might go from two down to one and then pause, or it might go from two straight down to zero. But in my case, my database is currently running at zero capacity units, because this time frame has been reached with no activity and the compute has been paused, so this means I have no costs for the compute side of Aurora serverless.

      Now if I go back to the application and do a refresh, you'll see that we don't get a refresh straight away; there's a pause. And this is because, now that the database cluster experiences some incoming load, it's unpausing that compute; it's resuming the compute part of the cluster, and this isn't an immediate process. So it's important to understand that when you implement an application and use this functionality, the application does need to be able to tolerate lengthier connection times. Now sometimes in the case of WordPress you will see an error page when you attempt to do a refresh, because a timeout value within WordPress is reached before the cluster can resume. In the case of this demo lesson that didn't happen; it was able to resume the cluster straight away.

      And if we go back to the RDS console and then refresh this page, we'll be able to see just how many capacity units this cluster is now operating with, and it's operating with two capacity units.

      Now in production usage you could be a lot more granular and customize this based on the needs of your application. In my case my minimum is one and my maximum is two, and my pause time frame is a relatively low five minutes, because I wanted to keep it simple for this demo lesson. In production usage you might have a larger range between minimum and maximum, you might have a higher minimum to be able to cope with a certain level of base load, and the time frame between the last access and the pausing of the compute might be significantly longer than five minutes. But this demonstration lesson is just that, a demo, and it's just designed to highlight this at a really high level so that when it comes to you using this in production you understand the architecture.

      Now that's everything that I wanted to cover in this demo lesson. It's just been a brief bit of experience of using Aurora serverless.

      Now to tidy up, to return the account into the same state as it was at the start of the demo lesson, just go ahead and close down all of these tabs. We need to go back to the CloudFormation console, make sure the Aurora serverless stack is selected, and then just go ahead and click on delete and then delete stack, and that will remove all of those resources, returning the account into the same state as it was at the start of the demo.

      Now this whole section of the course has been around trying to improve the database part of our application. We've moved from having a database running on the same server as the application, we've split that off, we've moved it into RDS, and we've evolved that from MySQL RDS through to Aurora provisioned and now to Aurora serverless. We still have one major limitation with our application, and that's that for any posts you make on the blog, the media for those posts is stored locally on the instance file system. That's something that we're going to start tackling next in the course, and we're going to be using the Elastic File System product, or EFS.

      At this point though, that's everything that I wanted to cover in this demo lesson. Go ahead and complete this video, and when you're ready, I look forward to you joining me in the next.
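
      As a rough sketch of the same restore done through the API instead of the console (identifiers, subnet group, and security group values below are placeholders based on this demo, and the engine version must be whatever serverless-compatible version your account shows):

      import boto3

      rds = boto3.client("rds", region_name="us-east-1")

      # Restore an Aurora provisioned snapshot into an Aurora Serverless (v1) cluster.
      rds.restore_db_cluster_from_snapshot(
          DBClusterIdentifier="a4lwordpress-serverless",            # new cluster name
          SnapshotIdentifier="a4lwordpress-aurora-with-cat-post",   # snapshot name, not ARN
          Engine="aurora-mysql",
          EngineVersion="5.7.mysql_aurora.2.07.1",                  # serverless-compatible version
          EngineMode="serverless",
          ScalingConfiguration={
              "MinCapacity": 1,              # minimum ACUs
              "MaxCapacity": 2,              # maximum ACUs
              "AutoPause": True,             # pause compute when idle...
              "SecondsUntilAutoPause": 300,  # ...after five minutes of inactivity
          },
          DBSubnetGroupName="a4l-db-subnet-group",       # placeholder
          VpcSecurityGroupIds=["sg-0aaaaaaaaaaaaaaaa"],  # placeholder
      )

      # A serverless v1 cluster has no separate instances to create; once it
      # reports "available", its endpoint can be used directly.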

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      Now the next thing that I want to demonstrate is how we can restore RDS if we have data corruption.

      The way that we're going to simulate this is to go back to our WordPress blog and we're going to corrupt part of this data.

      So we're going to change the title of this blog post from the best cats ever to not the best cats ever, which is clearly untrue.

      But we're going to change this and this is going to be our simulation of data corruption of this application.

      So go ahead and click on update to update the blog post with this new obviously incorrect data.

      Now let's assume that we need to restore this database from an earlier snapshot.

      Now let's ignore the automatic backup feature of RDS and just look at manual snapshots.

      Well, let's move back to the RDS console and click on snapshots and we'll be able to see the snapshot that we created at the start of this demo lesson.

      Remember, this does have the blog post contained within it in its original correct form.

      Now to do a restore, we need to select this snapshot, click on actions and then restore snapshot.

      Now I mentioned this in the theory lesson about backups and restores within RDS.

      Restoring a snapshot actually creates a brand new database instance.

      It doesn't restore to the existing one using normal RDS.

      So we have to restore a snapshot.

      Obviously the engine set to MySQL community and we're provided with an entry box for a brand new database identifier.

      And we're going to use a4lwordpress-restore.

      So this allows us to more easily distinguish between this and the original database instance.

      We also need to decide on the deployment option.

      So go ahead and select single DB instance.

      This is only a demo, so we don't need to select multi-AZ DB instance.

      We need to pick the type of instance that we're going to restore to.

      And again, because this is a new instance, we're not limited to the previous free tier restrictions.

      So we're able to select from any of the available instance types.

      So go ahead and select burstable classes and then pick either t2 or t3.micro.

      We'll leave storage as default.

      We'll need to provide the VPC to provision this new database instance into.

      So we'll make sure that a4l-vpc1 is selected and we'll use the same subnet group that was created by the one-click deployment, which you used at the start of this demo.

      You're allowed to choose between public access yes or no.

      We'll choose no.

      You'll have to pick a VPC security group to use for this RDS instance.

      Now the one-click deployment did create one, so click in the drop-down and select the RDS multi-AZ snap RDS security group.

      So not the instance security group, but the RDS security group.

      Once you've selected that, then delete default, scroll down.

      You can specify database authentication and encryption settings.

      And again, if applicable in the course that you're studying, I'll be covering these in a separate lesson.

      We'll leave all of that as default and click on restore DB instance.

      Now this is going to begin the process of restoring a brand new database instance from that snapshot.

      Now the important thing that you need to understand is this is a brand new instance.

      We're not restoring the snapshot to the same database instance.

      Instead, it's creating a brand new one.

      Now when this finishes restoring, when it's available for use, if we want our application to make use of it, and the restored non-corrupted data, then we're going to need to change the application to point at this newly restored database.

      So at this point, go ahead and pause the video because for the next step, which is to adjust the WordPress configuration, we need this database to be in an available state.

      So pause the video, wait for the status to change from creating all the way through to available, and then we're good to continue.

      Okay, so the snapshot restore is now completed and we have a brand new database instance, A4LWordPress-Restore.

      And in my case, it took about 10 minutes to perform that restoration.

      Now just to reiterate this concept, because it's really important, it features all the time in the exams, and you'll need this if you operate in the real world using AWS.

      If we go into the original RDS instance, just pay attention to this endpoint DNS name.

      So we have a standard part, which is the region, and then .rds, and then .amazonaws.com.

      Before this, though, we have this random part.

      Now this represents the name of the database instance as well as some random characters.

      If we go back to the databases list and then go into the restored version, now we can see that we have A4LWordPress-Restore.

      And this is different than that original database endpoint name for the original database.

      So the really important, the critical thing to understand is that a restore with a normal RDS will create a brand new database instance.

      It will have a brand new database endpoint DNS name, the CNAME, and you will need to update any application configuration to use this brand new database.
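
      The same point shows up in a scripted restore. A boto3 sketch (all identifiers and the security group ID are placeholders, not values from the lesson) creates the new instance and then reads back its brand new endpoint:

      import boto3

      rds = boto3.client("rds", region_name="us-east-1")

      # Restoring a snapshot always produces a brand new instance.
      rds.restore_db_instance_from_db_snapshot(
          DBInstanceIdentifier="a4lwordpress-restore",
          DBSnapshotIdentifier="a4lwordpress-with-cat-post-mysql57",
          DBInstanceClass="db.t3.micro",
          DBSubnetGroupName="a4l-db-subnet-group",        # placeholder
          VpcSecurityGroupIds=["sg-0aaaaaaaaaaaaaaaa"],   # the RDS security group
          PubliclyAccessible=False,
          MultiAZ=False,
      )

      # Wait until the restored instance is available, then fetch its endpoint.
      rds.get_waiter("db_instance_available").wait(
          DBInstanceIdentifier="a4lwordpress-restore"
      )
      endpoint = rds.describe_db_instances(
          DBInstanceIdentifier="a4lwordpress-restore"
      )["DBInstances"][0]["Endpoint"]["Address"]

      print(endpoint)  # this new CNAME is what the application config must point at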

      So go ahead and just leave this open in this tab because we'll be needing it very shortly.

      Click on Services, find EC2, and open that in a new tab.

      So as a reminder, if we go back to the WordPress tab and just hit Refresh, we can see that we still have the corrupt data.

      Now what we want to do is point WordPress at the restored correct database.

      So to do that, go to the EC2 tab that you just opened, right click on the A4LWordPress instance, select Connect.

      We're going to use Instance Connect, so choose that to make sure the username is EC2-user and then connect to the instance.

      This process should be familiar by now because we're going to edit the WordPress configuration file.

      So cd /var/www/html, then we'll do a listing with ls -la, and we want to edit the configuration file, which is wp-config.php, so sudo, space, nano, which is the text editor, space, wp-config.php.

      Once we're in this file, just scroll down and again we're looking for the dbhost configuration which is here.

      Now this DNS name you'll recognize is pointing at the existing database with the corrupt data.

      So we need to delete all of this just to leave the two single quotes.

      Make sure your cursor's over the second quote.

      Go back to the RDS console and we need to locate the DNS name for the A4LWordPress-Restore instance.

      Remember this is the one with the correct data.

      So copy that into your clipboard, go back to EC2 and paste that in, and then Ctrl+O and Enter to save, and Ctrl+X to exit.

      That's all of the configuration changes that we need.

      If we go back to the WordPress application and hit refresh, we'll see that it's now showing the correct post, the best cats ever, because we're now pointing at this restored database instance.

      So the key part about this demo lesson really is to understand that when you're restoring a normal RDS snapshot, you're restoring it to a brand new database instance, its own instance with its own data and its own DNS endpoint name.

      So you have to update your application configuration to point at this new database instance.

      With normal RDS, it's not possible to restore in place.

      You have to restore to a brand new database instance.

      Now this is different with a feature of Aurora which I'll be covering later in this section, but for normal RDS, you have to restore to a brand new instance.

      So those are the features which I wanted to demonstrate in this demo lesson.

      I wanted to give you a practical understanding of the types of recovery options and resilience options that you have available using the normal RDS version, so MySQL.

      Now different versions of RDS such as Microsoft SQL, PostgreSQL, Oracle, and even AWS specific versions such as Aurora and Aurora Serverless, they all have their own collections of features.

      For the exam and for most production usage, you just need to be familiar with a small subset of those.

      Generally, you'll either be using Oracle, MSSQL, or one of the open source or community versions, so you'll only have to know the feature set of a small subset of the wider RDS product.

      So I do recommend experimenting with all of the different features and depending on the course that you're taking, I will be going into much more depth on those specific features elsewhere in this section.

      For now though, that is everything that I wanted to talk about, so all that remains is for us to tidy up the infrastructure that we've used in this demo lesson.

      So go to databases.

      I want you to select the A4L WordPress -Restore instance because we're going to delete this fully.

      We're not going to be using this anymore in this section of the course, so select it, click on the Actions drop down, and then select Delete.

      Don't create a final snapshot.

      We don't need that.

      Don't retain automated backups and because we don't choose either of these, we need to acknowledge our understanding of this and type Delete Me into this box.

      So do that and then click on Delete.

      Now that's going to delete that instance as well as any snapshots created as part of that instance.

      So if we go to Snapshots, we only have the one manual snapshot.

      If we go to System Snapshots, we can see that we have one snapshot for this Restore database, and if you're deleting a database instance, then any system created snapshots for that database instance will also be deleted either immediately or after the retention period expires.

      So those will be automatically cleared up as part of this deletion process.
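
      For reference, the equivalent cleanup via boto3 might look like this (a sketch; the instance identifier is a placeholder, and the options mirror the console choices of no final snapshot and no retained automated backups):

      import boto3

      rds = boto3.client("rds", region_name="us-east-1")

      rds.delete_db_instance(
          DBInstanceIdentifier="a4lwordpress-restore",
          SkipFinalSnapshot=True,          # equivalent to not creating a final snapshot
          DeleteAutomatedBackups=True,     # don't retain automated backups
      )

      # System snapshots tied to the instance are cleaned up by RDS; manual
      # snapshots stay until deleted explicitly.
      rds.get_waiter("db_instance_deleted").wait(
          DBInstanceIdentifier="a4lwordpress-restore"
      )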

      We're not going to delete the manual snapshot that we created at the very start of this lesson with the catpost in because we're going to be using this elsewhere in the course.

      So leave this in place.

      Click on Databases again.

      We're going to need to wait for this Restored Database instance to finish deleting before we can continue.

      So go ahead and pause the video, wait for this to disappear from the list, and then we can continue.

      Okay, so that Restored Database instance has completed deleting.

      So now all that remains is to move back to the CloudFormation console.

      You should still have a tab open.

      Select the stack deployed as part of the one-click deployment.

      It should be called RDS Multi-AZ Snap.

      Select Delete and then confirm that deletion, and that will clear up all of the infrastructure that we've used in this demo lesson.

      It will return the account into the same state as it was at the start of this demo with one exception.

      And that one exception is the snapshot that we created of the RDS instance as part of this deployment.

      So that's everything you need to do in this demo lesson.

      I hope you've enjoyed it.

      I know it's been a fairly long one where you've been waiting a lot of the time in the demo for things to happen, but it's important for the exam and real-world usage that you get the practical experience of working with all of these different features.

      So you should leave this demo lesson with some good experience of the resilience and recovery features available as part of the normal RDS product.

      Now at this point, that's everything you need to do, so go ahead and complete this video, and when you're ready, I look forward to you joining me in the next.

    1. Welcome back and in this demo lesson we're going to continue implementing this architecture.

      So in the previous demo lesson you migrated a database from a self-managed MariaDB running on EC2 into RDS.

      In this demo lesson you're going to get the experience working with RDS's multi-availability zone mode as well as creating snapshots, restoring those snapshots and experimenting with RDS failover.

      Now in order to complete this demo lesson you're going to need some infrastructure.

      So let's move across to our AWS console.

      You need to be logged in to the general AWS account.

      So that's the management account of the organization and as always make sure that you have the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment link so go ahead and open that.

      This will take you to a quick create stack page and everything should be pre-populated and ready to go.

      So the stack name is RDS multi-AZ snap.

      All of the parameters have default values.

      Multi-AZ is currently set to false so leave that at false, check the capabilities box at the bottom and then click on create stack.

      Now this infrastructure will take about 15 minutes to apply and we need it to be in a create complete state before we continue.

      So go ahead, pause the video and resume it once CloudFormation has moved into a create complete state.

      Okay so now that this stack has moved into a create complete state we need to complete the installation of WordPress and add our test blog post because we're going to be using those throughout this demo lesson.

      Now this is something that you've done a number of times before so we can speed through this.

      So click on the services drop down, move to the EC2 console.

      We need to go to running instances and we'll need the public IP version 4 address of A4L-WordPress.

      So go ahead and copy the public IP version 4 address into your clipboard.

      Don't use this open address.

      Open that in a new tab.

      We'll be calling the site, as always, the best cats; for the username, put admin.

      For the password we'll be using the animals for life strong password, and then as always test@test.com for the email address.

      Enter all of that and click on install WordPress.

      Then you'll need to log in: admin for the username, the same password, and then click on login.

      Once we logged in go to posts click on trash under hello world to delete the existing post and then add a new post.

      Close down this dialogue. For the title of the post, enter the best cats ever, then click on the plus and select gallery.

      At this point go ahead and click the link that's attached to this lesson to download the blog images.

      Once downloaded extract that zip file and you'll get four images.

      Once you've got those images ready click on upload locate those images select them and click on open and that will add those to the post.

      Once they're fully loaded in we can go ahead and click on publish and then publish again and that will publish this post to our blog.

      And as a reminder that stores these images on the local instance file system and adds the post metadata to the database and that's now running within RDS.

      Now I want to step through a few pieces of functionality of RDS and I want you for a second to imagine that this blog post is actually a production enterprise application.

      Maybe a content management system and I want to view all of the actions that we perform in this demo lesson through the lens of this being a production application.

      So go ahead and return to the AWS console click on services and we're going to move back to RDS.

      The first thing that we're going to do is to take a snapshot of this RDS instance.

      So just close down any additional dialogues that you see go to databases.

      Then I want you to select the database that's been created by the one click deployment link that you used at the start of this demo lesson.

      Then select actions and then we're going to take a snapshot.

      Now a snapshot is a point in time copy of the database.

      When you first do a snapshot it takes a full copy of that database so it consumes all of the capacity of the data that's being used by the RDS instance.

      So this initial snapshot is a full snapshot containing all of the data within that database instance.

      Now we're going to take a snapshot, and we're going to call it a4lwordpress-with-cat-post-mysql followed by the version number, without any dots or spaces.

      Now depending on when you're watching this video doing this lesson you might have been using a different version of SQL.

      And so in the lesson description for this lesson I've included the name of the snapshot that you need to use.

      So go ahead and check that now and include that in this box.

      So that informs us what it is, what it contains and the version number that this snapshot refers to.

      So go ahead and enter that and then click on take snapshot and that's going to begin the process of creating this snapshot.
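
      If you prefer to script this step, a boto3 sketch of the same manual snapshot might look like the following (the instance identifier is a placeholder, and the snapshot name should be whatever the lesson description specifies):

      import boto3

      rds = boto3.client("rds", region_name="us-east-1")

      rds.create_db_snapshot(
          DBInstanceIdentifier="a4lwordpress",                        # placeholder
          DBSnapshotIdentifier="a4lwordpress-with-cat-post-mysql57",  # see lesson description
      )

      # Snapshot creation is asynchronous; wait until it becomes available.
      rds.get_waiter("db_snapshot_available").wait(
          DBSnapshotIdentifier="a4lwordpress-with-cat-post-mysql57"
      )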

      Now the process takes a variable amount of time.

      It depends on the speed of AWS on that particular day.

      It depends on the amount of data contained within the database and it also depends on whether this is the first snapshot or a subsequent snapshot.

      Now the way that snapshots work within AWS is the first snapshot contains a full copy of all of the data of the thing being snapshotted and any subsequent snapshot only contains the blocks of data which have changed from that last previous successful snapshot.

      So of course the first snapshot always takes the longest and everything else only takes the amount of time required to copy the changed data.

      So if we just give this a few minutes let's keep refreshing.

      Mine's still reporting at 0% complete so we need to allow this to complete before we move on.

      So go ahead and pause the video and resume it once your snapshot has completed.

      And there we go our snapshots now moved into an available status and the progress has completed.

      And in my case that took about five minutes to complete from start to finish.

      So again just to reiterate this snapshot has been taken.

      It's a copy of an RDS MySQL database of a particular version and it contains our WordPress database together with the cat post that we just added.

      And that's important to keep in mind as we move on with the demo lesson.

      Now you could go ahead and take another snapshot and this one would be much quicker to complete.

      It would only contain any data changed between the point that you take it and when you took this previous snapshot.

      I'm not going to demonstrate that in this video but you can do that.

      And for production usage you may use snapshots in addition to the normal automated backups provided by RDS.

      Snapshots that you take manually live past the life cycle of the RDS instance.

      And if you want to tidy them up you have to do that manually or by using scripts that you create.

      So snapshots that are taken manually are not managed by RDS in any way.

      And that's important to understand from a DR and the cost management perspective.
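
      Because of that, tidying up is typically done with a small script of your own. A boto3 sketch of listing (and optionally deleting) manual snapshots might look like this:

      import boto3

      rds = boto3.client("rds", region_name="us-east-1")

      # Manual snapshots outlive their instance and are never tidied up by RDS.
      paginator = rds.get_paginator("describe_db_snapshots")
      for page in paginator.paginate(SnapshotType="manual"):
          for snap in page["DBSnapshots"]:
              print(snap["DBSnapshotIdentifier"], snap["SnapshotCreateTime"])
              # Deleting is an explicit decision you make yourself, e.g.:
              # rds.delete_db_snapshot(DBSnapshotIdentifier=snap["DBSnapshotIdentifier"])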

      Now the next thing that I want to demonstrate is the multi AZ mode of RDS.

      So if we go back to the RDS console just expand this menu and go to databases.

      Currently this database is using a single RDS instance.

      So this RDS instance is not resilient to the failure of an availability zone within this region.

      Now to change that process we can provision a standby replica in another availability zone and that's known as multi AZ.

      Now it's worth noting that this is not included within the AWS free tier.

      So there will be a small charge to do this optional step to enable multi AZ mode.

      Make sure that you have the database instance selected and then click on modify.

      Now it's on this screen that we can change a lot of the options which relate to this entire RDS instance.

      We've got the option to adjust the database identifier, provide a new database admin password.

      We can change the DB instance size or type if we want.

      We can adjust the amount of storage available to the database instance, even enable storage auto scaling.

      But what we're looking for specifically is adjusting the availability and durability settings.

      Currently this is set to do not create a standby instance and we're going to modify this.

      We're going to change it to create a standby instance and this is something that's recommended for any production usage.

      This creates a standby replica in a different availability zone.

      So it picks another availability zone, specifically another subnet that's available within the database subnet group that was created by the one click deployment.

      So we're going to set that option and scroll down and then select continue.

      Now because we have a maintenance window defined on this RDS instance, we have two different options of when to apply this change.

      We can either apply the change during the next scheduled maintenance window.

      Remember, this is a definable value that you can set when you create an RDS instance or you modify its settings.

      Or we can specify that we want to apply immediately the change that we're making.

      And for this demo lesson, that's what we're going to do.

      Now it does warn you that any changes could cause a performance impact and even an outage.

      So it's really important that if you are applying changes immediately, you understand the impact of those changes.

      So make sure that you have apply immediately selected and then click on modify DB instance.
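
      For completeness, the same change can be scripted. A boto3 sketch (the instance identifier is a placeholder) would be:

      import boto3

      rds = boto3.client("rds", region_name="us-east-1")

      # Enable multi-AZ and apply the change immediately rather than waiting
      # for the next maintenance window; be aware of the performance impact.
      rds.modify_db_instance(
          DBInstanceIdentifier="a4lwordpress",   # placeholder
          MultiAZ=True,
          ApplyImmediately=True,
      )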

      Now a multi AZ deployment is essentially an automatic standby replica in a separate availability zone.

      What happens behind the scenes is that the primary database instance is synchronously replicated into this standby replica inside a different availability zone.

      Now this provides a few benefits.

      It provides data redundancy benefits.

      It means that any operations which can interrupt IO such as system backups will occur from the standby replica.

      So won't impact the primary database and that provides a real advantage for production RDS deployments.

      But the main reason beyond performance is that it helps protect any databases in the primary instance against failure of an availability zone.

      So if the availability zone of the primary instance fails, then the CNAME of the database will be changed to point at the standby replica.

      And that will minimize any disruption to your application and its users.

      Now if we just hit refresh, we can see the status is modifying and what's happening behind the scenes is AWS are taking a snapshot of the primary DB instance.

      It's restoring that snapshot into the standby replica, which is in a different availability zone.

      And then it's setting up synchronous replication between the primary and the standby replica.

      So this is a process which happens behind the scenes.

      But it does mean that we need to wait for this process to be complete.

      Until the process is complete, this is not a multi-AZ deployment.

      So go ahead and pause the video and wait for the status to change away from modifying.

      We need this to be in an available state in order to continue with the demo.

      So go ahead and pause the video and resume it once this modification has completed.

      Okay, so the status has now changed to available.

      And in my case, it took about 10 minutes to enable multi AZ mode.

      So that's the provisioning of a standby replica in another availability zone.

      Now, the likelihood of an AZ failure happening while I'm recording this demo lesson is relatively small, but we can simulate a failure to do that.

      If we have the database instance selected and then select the actions drop down and then reboot, we can use the option reboot with failover.

      If we choose this option, then part of the process is that a simulated failover occurs.

      So the C name, the database endpoint, that's moved so that it now points at the standby replica and then the old primary instance is restarted.

      So that's what we're going to do to simulate this process.

      So go ahead and select to reboot the database instance.

      Make sure that you have reboot with failover selected and then click on confirm.

      And this will begin the process of rebooting the database instance.
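
      The equivalent API call is a sketch like this (the instance identifier is a placeholder; ForceFailover is only valid on a multi-AZ instance):

      import boto3

      rds = boto3.client("rds", region_name="us-east-1")

      # Reboot with failover: promotes the standby and moves the endpoint CNAME.
      rds.reboot_db_instance(
          DBInstanceIdentifier="a4lwordpress",  # placeholder
          ForceFailover=True,
      )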

      Now, if we go back to the WordPress blog and we click on view post, you'll see that right away it's not immediately loading.

      And that's because the failover from the primary to the standby isn't immediate.

      Failover times are typically 60 to 120 seconds.

      So that's important to keep in mind if you're deploying RDS in a business critical situation.

      It doesn't offer immediate failover.

      So let's just stop this and hit reload again.

      And now we can see that the page is starting to load because the C name for the database has been moved from pointing at the primary to pointing at the standby replica, which is the new primary.

      Okay, so this is the end of part one of this lesson.

      It was getting a little bit on the long side and so I wanted to add a break.

      It's an opportunity just to take a rest or grab a coffee.

      Part two will be continuing immediately from the end of part one.

      So go ahead, complete the video and when you're ready, join me in part two.

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      Okay, so the instance is now in an available state.

      Let's just close down this informational dialogue at the top.

      And let's just minimize this menu on the left.

      Let's maximize the amount of screen space that we have for this specific purpose.

      So I just want us to go inside this database instance and explore together the information that we have available.

      So I talked in the theory lesson how every RDS instance is given an endpoint name and an endpoint port.

      So this is the information that we'll use to connect to this RDS instance.

      Networking wise, this instance has been provisioned in US-EAST-1A.

      It's in the Animals for Life VPC and it's used our A4L subnet group that we created at the start of this demo.

      And that means that it's currently utilizing all three database subnets in the Animals for Life VPC.

      But because we've only deployed a single instance, it's been placed in US-EAST-1A.

      Now this is the VPC security group that we're going to need to configure.

      So right click on this and open it in a new tab and move to that tab.

      This is the security group which controls access to this RDS instance.

      So let's expand this at the bottom.

      So currently it has my IP address being the only source allowed to connect into this RDS instance.

      So the only inbound rule on the security group protecting this RDS instance is allowing my IP address.

      So we're going to click on Edit and then click on Add Rule.

      And we're going to add a rule which allows our other instances to connect to this RDS instance.

      So first, click in the type drop-down, then type MySQL to get the same option as the line above, and then click to select it.

      Next go ahead and type instance into the source box and then select the migrate to RDS-instance security group.

      Now this is the security group that's used by any instances deployed by our one click deployment.

      And this allows those instances to connect to our RDS instance and that's what we want.

      So go ahead and select that and then click on Save Rules.

      And this means now that our WordPress instance can communicate with RDS.
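
      Scripted, the same rule might be added like this (a sketch; both security group IDs are placeholders for the RDS and instance security groups): allow MySQL (TCP 3306) from the instance security group into the RDS security group.

      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      ec2.authorize_security_group_ingress(
          GroupId="sg-0aaaaaaaaaaaaaaaa",  # the RDS security group (placeholder)
          IpPermissions=[{
              "IpProtocol": "tcp",
              "FromPort": 3306,
              "ToPort": 3306,
              "UserIdGroupPairs": [
                  {"GroupId": "sg-0bbbbbbbbbbbbbbbb"}  # instance security group (placeholder)
              ],
          }],
      )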

      So now let's move back to the RDS tab and then make sure we're inside the A4L WordPress database instance.

      So that's the connectivity and the security tab.

      We also have the monitoring tab and it's here where you can see various CloudWatch provided metrics about the database instance.

      You also have logs and events related to this instance.

      So if we go and have a look at recent events we can see all of the events such as when the database instance was created, when its first backup was created.

      And you can explore those because they might be different in your environment.

      You can click on the Configuration tab and see the current configuration of the RDS instance.

      The Maintenance and Backups tab is where you can configure the maintenance and backup processes and then of course you can tag the RDS instance.

      Now in other lessons in this section of the course and depending on what course you're taking I will be talking about many of these options, what you can modify and which actions you can perform on RDS instances.

      But for now we're just going to move on with this demo.

      So the next step is that we need to migrate our existing data into this RDS instance.

      So what we're going to do is to click on the Connectivity and Security tab and we're going to leave this open.

      We're going to need this endpoint name and port very shortly.

      You should still have a tab open to the EC2 console.

      If you don't you can reach that by going on Services and then opening EC2 in a new tab.

      But I want you to select the A4L-WordPress instance and then right click and connect to it using Instance Connect.

      So go ahead and do that.

      Once you've done that we're going to start referring to the lesson commands document.

      So make sure you've got that open.

      We're going to use this command to take a backup of the existing MariaDB database.

      So we need to replace a placeholder.

      What we need to do is delete this and replace it with the private IP address of the MariaDB EC2 instance.

      So go back to the EC2 console, select the DB-WordPress instance and copy the private IP version 4 address into your clipboard.

      And then let's move back to the WordPress instance and paste that in.

      Go ahead and press Enter and it will prompt you for the password.

      And the password is the same Animals for Life strong password that we've been using everywhere.

      Copy that into your clipboard.

      So this is the password for the A4L WordPress user on the MariaDB EC2 instance.

      So paste that in and press Enter, and then ls -la to confirm that we now have this A4LWordPress.SQL database backup file.

      And we do, so that's good.

      So as we did in the previous demo lesson, we're going to take this backup file and we're going to import it into the new destination database, which is going to be the RDS instance.

      To do that, we'll use this command, but we're going to need to replace the placeholder hostname with the CNAME of the RDS instance.

      So go ahead and delete this placeholder, then go back to the RDS console and I'll want you to copy the endpoint name into your clipboard.

      So select it, right click and then copy.

      We won't need the port number because this is the standard MySQL port and if you don't specify it, most applications will assume this default.

      So just make sure that you have the endpoint DNS name or endpoint CNAME in your clipboard.

      And then back on the WordPress EC2 instance, go ahead and paste this database name into this command and press Enter.

      And again, you'll be asked for the password and that's the same Animals for Life strong password.

      So copy that into your clipboard, paste that in and press Enter.

      And that's imported this A4LWordPress.SQL file into the RDS instance.

      So now we need to follow the same process and change WordPress so that it points at the RDS instance.

      And we do that by moving to where the WordPress configuration file is.

      So cd space forward slash var forward slash www forward slash html and press Enter.

      And then sudo, so we have admin privileges, then nano, which is a text editor, and then wp-config.php, and press Enter.

      Then we need to scroll down and we're looking for where it says DB host and currently it has a host name here.

      Now if you go back to the EC2 console and you look at the A4L-DB-WordPress instance, you'll see that its private IP version 4 DNS name is what's listed inside this configuration item.

      So it's currently pointing at this dedicated database instance.

      What we need to do is replace that and we're going to replace it with the RDS database DNS name or the CNAME of this RDS instance.

      So copy that into your clipboard and then go ahead and delete this private DNS name for the MariaDB EC2 instance and then paste in the RDS endpoint name, also known as the RDS CNAME.

      Once you've done that, control O and Enter to save and control X to exit.

      And now our WordPress instance is pointing at the RDS instance for its database.
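
      The manual nano edit above can also be expressed as a small script. This is just a sketch (the endpoint shown is a placeholder, and in practice you'd need root privileges to write the file):

      import re

      new_host = "a4lwordpress.cxxxxxxxxxxxx.us-east-1.rds.amazonaws.com"  # placeholder endpoint

      with open("/var/www/html/wp-config.php") as f:
          config = f.read()

      # Replace whatever host is currently defined for DB_HOST with the RDS endpoint.
      config = re.sub(
          r"define\(\s*'DB_HOST'\s*,\s*'[^']*'\s*\)",
          f"define('DB_HOST', '{new_host}')",
          config,
      )

      with open("/var/www/html/wp-config.php", "w") as f:
          f.write(config)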

      Now we can verify that by checking WordPress: move back to instances, select the WordPress instance, and copy the public IP version 4 address into your clipboard.

      Don't use this open address link.

      Open that in a new tab.

      Go ahead and just click on the best cats ever to verify the functionality and it does look as though it's working.

      And to verify that, if we go back to the EC2 console, select the A4L-DB-WordPress instance and right click and then stop that instance.

      Now the original database that was providing database services to WordPress is going to move into a stopped state.

      And if our WordPress blog continues functioning, we know that it's using the RDS instance.

      So let's keep refreshing and wait for this to change into a stopped state.

      There we go.

      It's stopped.

      And if we go back to our WordPress page and refresh, it still loads.

      And so we know that it's now using RDS for its database services.

      So at this point, that's everything that I wanted you to do in this demo lesson.

      You've stepped through the process of provisioning an RDS instance.

      So you've created a subnet group, provisioned the instance itself, explored the functionality of the instance, including how to provide access to it by selecting a security group.

      And then editing that security group to allow access.

      You've performed a database migration and you've explored how the RDS instance is presented in the console.

      So that's everything that you need to do within this demo lesson.

      And don't worry, we're going to be exploring much more of the advanced functionality of RDS as we move through this section of the course.

      For now, though, I want us to clear up the infrastructure that we've created as part of this demo lesson.

      Now, because we've provisioned RDS manually outside of CloudFormation, unfortunately, there is a little bit more manual work involved in the cleanup.

      So I want you to go to the RDS console, move to databases, select this database, click on actions, and then select delete.

      Now it will prompt you to create a final snapshot and we're not going to do that.

      We're not going to retain automated backups and so you'll need to acknowledge that upon instance deletion, automated backups including any system snapshots and pointing time recoveries will no longer be available.

      And don't worry, I'll be talking about backups and recovery in another lesson in this section of the course.

      For now, just acknowledge that and then type delete me into this box and confirm the deletion.
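      For reference, the same deletion can be scripted if you ever need to; a minimal sketch (assuming the instance identifier used earlier, and mirroring the "no final snapshot, no retained backups" choices made in the console) looks like this:

        aws rds delete-db-instance \
          --db-instance-identifier a4lwordpress \
          --skip-final-snapshot \
          --delete-automated-backups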

      Now this deletion is going to take a few minutes.

      It's not an immediate process.

      It will start in a deleting state and we need to wait for this process to be completed before we continue the cleanup.

      So go ahead and pause this video and wait for this instance to fully delete before continuing.

      Now that the instance has been deleted, it vanishes from this list.

      Next, we need to delete the subnet group that we created earlier.

      So click on subnet groups, select the subnet group and then delete it.

      You'll need to confirm that deletion.

      Once done, it too should vanish from that list.

      Next, go to the tab you've got open to the VPC console, scroll down and select security groups.

      Now look through this list and locate the security group that you created as part of provisioning the RDS instance.

      It should be called A4LVPC-RDS-SG.

      Select that, click on actions and then delete security group and you'll need to confirm that process as well.

      Once that's deleted, the final step is to go to the cloud formation console and then you'll need to delete the cloud formation stack that was created using the one-click deployment at the start of the demo.

      It should be called migrate to RDS.

      Select it, click on delete and confirm that deletion.

      And once deleted, the account will be returned into the same state as it was at the start of the demo lesson.

      So all of the infrastructure that we've used will be removed from the account and the account will be in the same state as at the start of the demo.

      Now I hope you've enjoyed this demo. I know we're repeating the same WordPress installation and then the creation of the blog post over and over again.

      But I want you to get used to the different parts of this process.

      You need to know why not to use a database on EC2.

      You need to know why not to perform a lot of these processes manually.

      From this point onward in the course, we're going to be using RDS to evolve our WordPress design into something that is truly elastic.

      And so all of these processes, the things I'm having you repeat are really useful to aid in your understanding of all of these different components.

      So from this point onward, we're going to be automating the creation of RDS and focusing on the specific pieces of functionality that you need to understand.

      But at this point, that's everything that you need to do in this demo.

      So go ahead, complete the video and when you're ready, I look forward to you joining me in the next.

    1. Welcome back and in this demo lesson you're going to get some experience of how to provision an RDS instance and how to migrate a database from an existing self-managed MariaDB database instance through to RDS.

      So over the next few demo lessons in this section of the course, you're going to be evolving your database architecture.

      We're going to start with a single database instance, then we're going to add multi-AZ capability as well as talking about backups and restores.

      But in this demo lesson specifically, we're going to focus on provisioning an RDS instance and migrating data into it.

      Now in order to get started with this demo lesson, as always make sure that you're logged into the general AWS account, so the management account of the organization and you need to have the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment link that you'll need to use to provision this demo lesson's infrastructure.

      So go ahead and click on that link now.

      That's going to move you to a quick create stack screen.

      The stack name should be pre-populated with migrate to RDS.

      Scrolling down all of the parameter values will be pre-populated.

      All you need to do is to click on the capabilities checkbox and then create stack.

      There's also a lesson commands document linked to this lesson and I'd suggest you go ahead and open that in a new tab because you'll be referencing it as you move through this demo lesson.

      Now you'll notice that this will look similar to the previous demo lesson's lesson commands document, but it has one small difference.

      The initial command, which takes the backup of the source database, is different: because that source database is stored on a separate MariaDB database running on a separate EC2 instance, instead of taking the backup from the local instance, in this case it connects to that separate EC2 instance.

      Otherwise, most of these commands are similar to the ones you used in the previous demo lesson.

      Now you're going to need to wait for this stack to move into a create complete state before you continue the demo.

      So go ahead and pause the video, wait for your stack to change to create complete and then you're good to continue.

      Okay, so that cloud formation stack has now moved into a create complete state and it's created a familiar set of infrastructure.

      Let's go ahead and click on the services drop down and then move to the EC2 console and just take a look.

      So if we click on instances, you'll see that we have the same two instances as you saw in the previous demo lesson.

      So we have A4L-WordPress, which is running the Apache web server and the WordPress application.

      And then we have A4L-DB-WordPress and this is running the separate MariaDB database instance.

      So what we need to do in order to perform this migration is first create the WordPress blog itself and the sample blog post.

      And this is the same thing that we did in the previous demo.

      So we should be able to go through this pretty quickly.

      So go ahead and select the A4L-WordPress instance and copy its public IP version 4 address into your clipboard and then open that in a new tab.

      And again, make sure not to use the open address because this uses HTTPS.

      So copy the public IP version 4 address and then open that in a new tab.

      Again, we're going to call the site the best cats.

      We're going to use admin for the username.

      And then for the password, let's go back to the CloudFormation tab.

      Make sure you've got the migrate to RDS stack selected and then click on parameters.

      We're going to use the same database password.

      So copy that into your clipboard and replace the automatically generated one with the Animals for Life complex password.

      And then enter test@test.com into the email box and click on install WordPress.

      Once installed, click on login.

      You'll need to use the admin username and the same password.

      Click on login.

      Then we're going to go to posts.

      We're going to select the existing Hello World post.

      Select trash this time.

      Then click on add new.

      Close down this dialog for title.

      We're going to use the best cats ever.

      Click on the plus.

      Select gallery.

      At this point, go ahead and click the link that's attached to this lesson to download the blog images.

      Once downloaded, extract that zip file and you'll get four images.

      Once you've got those images ready, click on upload, locate those images, select them and click on open.

      Wait for them to load in.

      Select publish and publish again.

      And that saved the images onto the application instance and added the data for this post onto the separate MariaDB database.

      So now we have this simple working blog.

      Let's go ahead and look at how we can provision an RDS instance and how we can migrate the data into that RDS instance.

      So move back to the AWS console.

      Click on the services drop down and type RDS into the search box and open that in a new tab.

      Now, as I've mentioned in the theory parts of this section, RDS is a managed database server as a service product from AWS.

      It allows you to create database instances and those instances can contain databases that your applications can make use of.

      Now to provision an RDS instance, the first thing that we need to do is to create a subnet group.

      Now a subnet group is how we inform RDS which subnets within a VPC we want to use for our database instance.

      So first we need to create a subnet group.

      So select subnet groups on the menu on the left and then create a DB subnet group.

      Now we're going to use a4lsngroup, so Animals for Life subnet group, for both the name and the description.

      And then select the VPC drop down and we're going to select the A4L-VPC1 VPC.

      So this is the animals for life VPC which has been created by the one click deployment that you used at the start of this demo.

      Now once we've selected a name and a description and a VPC for this subnet group, then what we need to do is select the subnets that this database will be going into.

      So we're going to select the database subnets in US East 1A, US East 1B and US East 1C.

      So click on the availability zone drop down and pick those three availability zones.

      So 1A, 1B and 1C.

      Once we've selected the availability zones that this subnet group is going to use, next we pick the subnets.

      So click on the drop down.

      Now we want to pick the database subnets within the animals for life VPC and all we can see here are the IP address ranges.

      So to help us with this click on the services drop down, type VPC and then open that in another new tab.

      Once that loads, go ahead and click on subnets, sort the subnets by name and then locate sn-db-A, sn-db-B and sn-db-C.

      And just move your cursor across to the right hand side and note what the IP address ranges are for those different database subnets.

      So 16, 80 and 144.

      Go back to the RDS console, click on the subnets drop down and we need to pick each of those three subnets.

      So 16, 80 and 144.

      So these represent the database subnets in availability zone 1A, 1B and 1C.

      And then once we've configured all of that information, we can go ahead and click on create to create this subnet group.

      So this subnet group is something that we use when we're provisioning an RDS instance.

      And as I mentioned moments ago, it's how RDS determines which subnets to place database instances into.

      Now when we're only using a single database instance, then that decision is fairly easy.

      But RDS deployments can scale up to use multiple replicas in multiple different availability zones.

      You can have multi-AZ instances, read replicas.

      Aurora has a cluster architecture which we'll talk about later in this section.

      And so subnet groups are essential to inform RDS which subnets to place things into.
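      For reference, creating the same subnet group from the CLI looks roughly like this (a sketch; the subnet IDs are placeholders for your three database subnets):

        aws rds create-db-subnet-group \
          --db-subnet-group-name a4lsngroup \
          --db-subnet-group-description "Animals for Life subnet group" \
          --subnet-ids subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333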

      So now that we've configured that subnet group, let's go ahead and provision our RDS database instance.

      So to do that, click on databases and then we're going to create a database.

      So click on create database.

      Now when you're creating a database, you have the option of using standard create where you have visibility of all of the different options and then easy create which applies some best practice configurations.

      Now I want you to get the maximum experience possible, so we're going to use standard create.

      Now when you're creating an RDS database instance, you have the ability to pick from many different engines.

      So some of these are commercial like Oracle or Microsoft SQL Server.

      And with some of these, you have the option of either paying for a license included with RDS or you can bring your own license.

      For other database engines, there isn't a commercial price to pay for their usage and so they're much cheaper to use.

      But you should select the engine type which is compatible with your application.

      Now we're going to be talking about Amazon Aurora in dedicated lessons later in this section of the course.

      Amazon Aurora is an AWS designed database product which has compatibility with MySQL and PostgreSQL.

      For this demo lesson, we're going to use MySQL.

      So go ahead and select MySQL and it's going to be using MySQL Community Edition.

      So now let's just scroll down and step through some of the other options that we get to select when provisioning an RDS instance.

      Now for all of these database engines, you have the ability to pick different versions of that engine.

      And this is fairly critical because there are different major and minor versions that you can select from.

      And different versions of these have different limitations.

      So for example, we're going to be talking about snapshots later in this section.

      And if you want to take a snapshot of an RDS database and then import that into an Aurora cluster, you need to pick a compatible version.

      And then Aurora Serverless which we'll be talking about later on in this section has even more restrictions.

      Now to keep things simple, I want you to ignore what version I pick in this video and instead look in this lesson's description and pick the version that I indicate in the lesson description because I'll keep this updated if AWS make any changes.
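      If you want to check for yourself which MySQL engine versions RDS currently offers, the CLI can list them (a quick sketch; the query formatting is optional):

        aws rds describe-db-engine-versions \
          --engine mysql \
          --query 'DBEngineVersions[].EngineVersion' \
          --output text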

      Now you can choose to use a template.

      These templates give you access to only the options which are relevant for the type of deployment that you're trying to use.

      So in production, you would pick the production template.

      If you have any smaller or less critical dev or test workloads, then you could pick the dev/test template.

      If you want to ensure that you can only select free tier options, then you should pick the free tier template.

      And that's what we're going to do in this demo because we want this demo to fall under the free tier.

      So click on the free tier template.

      I'll be talking about availability and durability later in this section.

      Because we've selected free tier only, we don't have the ability to create a multi-AZ RDS deployment.

      And now we need to provide some configuration information about the database instance specifically.

      So the first thing that we need to do is to provide a database instance identifier.

      So this is the way that you can identify one particular instance from any other instances in the AWS account in the current region.

      So this needs to be unique.

      So we're going to use a4lwordpress for this database instance.

      Then we need to pick a username which will be given admin privileges on this database instance.

      And we're going to replace admin with a4lwordpress.

      So we're going to use this for both the database identifier and the admin user of this database.

      Now for the password for this admin user, we're going to move back to the cloud formation console and we're going to use this same animals for life complex password.

      So copy that into your clipboard and paste it in for the password and the confirm password box.

      And this just keeps things consistent between the self-managed database and the RDS database.

      Scroll down further still and it's here where you can select the database instance class to use.

      Now because we've selected free tier only, we're limited as to what database size and type we can pick.

      If we'd have selected production or dev test from the templates above, we would have access to a much wider range of database instance classes, both standard, memory optimized and burstable.

      But because we've selected the free tier template, we're limited as to what we can select.

      Now this might change depending on when you're watching this demonstration, but at the point I'm recording this video, it's db.t3.micro.

      So don't be concerned if you see something different in this box.

      Just make sure that you select the type of instance which falls under the free tier.

      Then continue scrolling down and we need to pick the size of storage and the type of storage to use for this RDS instance.

      Now whether you need to select this is dependent on what engine type you pick.

      If you select Aurora, which we'll be talking about later on in this section, then you don't need to pre-allocate storage.

      If you're using the MySQL version of RDS, then you do need to set a type of storage and a size of storage.

      Now we're going to use the minimum, which is 20 GiB, because our requirements for this database are relatively small.

      And if we wanted to, if this was production, we could set storage autoscaling.

      And this allows RDS to automatically increase the storage when a particular threshold is met.

      But again, because this is a demo and it's only using a very small blog, we don't need storage autoscaling.

      So go ahead and uncheck that option.

      Now we need to select a VPC for this RDS instance to go into.

      So click in the drop down and select the Animals for Life VPC.

      So that's A4L-VPC1.

      And then we need to pick a subnet group.

      Now this is the thing that we've just created.

      We only have one in this account, so there's nothing else to select.

      But this is how we can advise RDS on which subnets to use inside the VPC.

      Scroll down further still and we can specify whether we want this database to be publicly accessible.

      So this is whether we want instances and devices outside the VPC to be able to connect to this database.

      This obviously comes with some security trade-offs.

      And because we don't need that in this demonstration, because the only thing that we want to connect to this RDS instance is our WordPress instance, which is in the same VPC, then we can select Not to Use Public Access.

      So make sure the No option is selected.

      Now the way that you control access to RDS is you allocate a VPC security group to that instance.

      So we could either choose an existing security group or we could create a new one.

      So it's this security group which surrounds the network interfaces of the database and controls what can connect to that database.

      So we want to create a new VPC security group.

      So select that option.

      We're going to call the security group A4LVPC-RDS-SG.

      And we need to remember to update this so that our WordPress instance can communicate with our RDS instance.

      And we'll do that in the next step.

      If we wanted to pick a specific availability zone for this instance to go into, then we could select one here or we can leave it up to RDS to pick the most suitable.

      So we can select No Preference.

      Continue scrolling down.

      We won't change the Database Authentication option because we want to allow password authentication.

      Continue scrolling down and we're going to expand Additional Configuration.

      By default, an RDS instance is created with no database on that instance.

      In this case, because we're migrating an existing WordPress database into RDS, we're going to go ahead and create an initial database.

      And to keep things easy and consistent, we're going to use the same name, so a4lwordpress.

      Now you can enable automatic backups for RDS instances.

      And I'll be talking about these in a separate theory lesson.

      If you do select automatic backups, then you can also pick a backup retention period as well as a backup window.

      So we've got Advanced Monitoring, various log exports.

      We don't need to use any of those.

      You can also set the Maintenance window for an RDS instance.

      So that's when maintenance will be performed. You can also enable Deletion Protection if you want.

      You'd typically enable that for a production database, but we don't need to do that here.

      What we're going to do is scroll all the way down to the bottom and then click on Create Database.
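      As a reference point, the choices we've just made in the console map onto a single CLI call along these lines (a sketch only; the security group ID and engine version are placeholders, and in practice you wouldn't pass the password on the command line like this):

        aws rds create-db-instance \
          --db-instance-identifier a4lwordpress \
          --engine mysql \
          --engine-version 8.0.xx \
          --db-instance-class db.t3.micro \
          --allocated-storage 20 \
          --master-username a4lwordpress \
          --master-user-password 'REPLACE_ME' \
          --db-name a4lwordpress \
          --db-subnet-group-name a4lsngroup \
          --vpc-security-group-ids sg-0123456789abcdef0 \
          --no-publicly-accessible \
          --no-multi-az
        # 8.0.xx is a placeholder - use the engine version indicated in the lesson description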

      Now this process can take some time.

      I've seen it take anywhere from five to 45 minutes.

      And we're going to need this to be finished before we move on to the next step.

      So this seems like a great time to end this video.

      It gives you the opportunity to grab a coffee or stretch your legs.

      Wait for this database creation to finish.

      And then when you're ready, I'll look forward to you joining me in part two of this video.

    1. Welcome to this demo lesson where you're going to migrate from the monolithic architecture on the left of your screen towards a tiered architecture on the right.

      Essentially you're going to split the WordPress application architecture, you're going to move the database from being on the same server as the application to being on a different server and this will form step one of moving this architecture from being a monolith through to being a fully elastic architecture.

      Now this is the first stage of many but it is a necessary one.

      Now in order to perform this demonstration you're going to need some infrastructure.

      Before we apply the infrastructure just make sure that you're logged in to the general AWS account, so the management account of the organization and as always you need to have the Northern Virginia region selected.

      Now once you've got both of those set there's a one-click deployment link attached to this lesson so go ahead and click on that link.

      What this is going to do is deploy the Animals for Life base infrastructure, it's going to deploy the monolithic WordPress application instance and it's also going to deploy a separate MariaDB database instance that you're going to use as part of the migration.

      Now everything set, the stack name should be set to a suitable default, all you need to do is to scroll all the way down to the bottom, check this capabilities box and click on create stack.

      Now also attached to this lesson is a lesson commands document which contains all the commands you'll be using throughout this demo.

      So go ahead and open that in a new tab, you'll be referencing it constantly as you're making the adjustments to the WordPress architecture.

      Now we're going to need this CloudFormation stack to be fully complete before we can continue so go ahead and pause the video and resume once the CloudFormation stack moves into a create complete state.

      So now the stacks moved into a create complete state, we're good to continue.

      Now this has created the base Animals for Life infrastructure which includes a number of EC2 instances so let's take a look at those, let's click on services and then locate and open EC2 in a brand new tab.

      Once you're at the EC2 console if you do see any dialogues around user interface updates then just go ahead and close those down and then click on instances running.

      Once you're here you'll see two EC2 instances, one will be called A4L-WordPress and this is the monolith so this is the EC2 instance which contains the WordPress application and the built-in database.

      So this is the WordPress installation that we're going to migrate from and then this instance A4L-DB-WordPress this contains a standalone MariaDB installation so we're going to migrate the database for WordPress from this instance onto the DB instance and this will create a tiered application architecture rather than the monolith which we currently have.

      So step number one is to perform the WordPress installation. To do that, I want you to go ahead and copy the public IP version 4 address of the WordPress EC2 instance into your clipboard and then open it in a new tab.

      Now be careful not to use the open address link that will use HTTPS which we're not currently using so copy the IP address into your clipboard and open that in a new tab.

      Now when you do that you'll see a familiar WordPress installation dialog. We're going to create a simple blog: for site title go ahead and call it the best cats, for username pick admin, and then for the password, instead of using the randomly selected one, go ahead and use the same complex password that we've used for the CloudFormation template, so this is animals for life but with number substitution.

      So if you go back to your CloudFormation tab and go to the parameters tab this is the same password that we use for the DB password and the DB root password.

      Now of course in production this is incredibly bad practice; we're just doing it in this demo to keep things simple and avoid any mistakes.

      So back to the WordPress installation screen: site title the best cats, username admin, this for the password, and then just go ahead and type a fake email. I don't want to use my real email for this, so I'm going to type test@test.com, and you can do the same. Then go ahead and click on install WordPress. This installs the WordPress application, and it's using the MariaDB server that's on the same EC2 instance, so part of the same monolith.

      So we're going to log in. We'll need to type admin and then use the Animals for Life strong password and click on login. Once we've logged in, we're going to create a simple blog post, so click on posts, select the existing Hello World post, select trash this time, then click on add new. We can close down this introduction dialogue, and for the title go ahead and type the best cats ever and then some exclamation points. Next, click on this plus sign and we're going to add a gallery.

      Now at this point you're going to need some images to upload to this blog post. I've attached an images link to this lesson, so if you go ahead and click that link it will download a zip file, and if you extract that zip file it's going to contain four image files, all four of my cats. Once you've downloaded and extracted that file, go ahead and click on upload, locate those images, there should be four, select them all and click on open. That will add these images to this blog post, and once you've added them all you can go ahead and click on publish and then publish again, and this will publish this blog post.

      So it will add data to the database that's running on the monolithic application instance, as well as store these images on the local instance file system. Now I'm making a point of mentioning that these images are stored on the file system because, as you'll see later in the course, this is one of the things that we need to migrate when we're moving to a fully elastic architecture. We can't have images stored on the instances themselves; we need to move that to a shared file system. For now, though, we're focusing on the database.

      So at this point we have the working blog. The images for this blog are stored on the local file system of A4L-WordPress, and the data for that blog post is stored on the MariaDB database that's also running on this EC2 instance. So the next step of this demo lesson is that you're going to migrate the data from A4L-WordPress onto A4L-DB-WordPress, which is an isolated MariaDB instance dedicated to the database.

      So to do this migration, select A4L-WordPress, right click, and we're going to connect to this instance. We'll be using EC2 Instance Connect, so just make sure that the username is set to ec2-user and then click on connect. Now this is where you're going to be using the commands that are stored within the lesson commands document, so you need to make sure that you have this ready to reference, because it's far easier to copy and paste these commands and then adjust any placeholders rather than type them out manually, which is prone to errors.

      The first step is to get the data from the database that's running on this monolithic application instance and store it in a file on disk. So that's the first thing we need to do: a backup of the database into a .sql file. Now to do that we use this command, a utility called mysqldump. It uses -u to specify the user that we're going to use to connect to the database, then we use -p to specify that we want to provide a password, and we could either provide the password on the command line or have it prompt us. If we supply the password with no space next to this -p, then it will accept it as input on this command; if we don't specify anything, so there's a space here, then it's going to ask us for the password. The next thing we specify is the database name that we want to dump, in this case a4lwordpress, which is the database for the Animals for Life WordPress instance.

      Now if we just ran this command on its own it would output the dump, so all of the data in the database, to standard output, which in this case is our screen. We don't want it to do that; we want it to store the results in a file called a4lwordpress.sql, and so we use this symbol, which means that it's going to take the output of this component of the command and redirect it into this file.

      So let's go ahead and run this command, and it's going to prompt us for the password for this database. To get that, go back to CloudFormation, make sure parameters are selected, and it's this password that we need, which is the DB password. So copy that into your clipboard, go back to the instance, paste that in, press Enter, and that will output all the data in the database to this file. Now you won't see any indication of success or failure, but if you do an ls -la and press Enter, one of the files that you'll see is a4lwordpress.sql. So now we have a copy of the WordPress database containing our blog post.

      The next thing that we need to do is to take this file, this backup of the database, and inject it into the new database that we want to use, so the dedicated MariaDB EC2 instance, and to do that we're going to use this command. This command has two components: the first component connects to the MariaDB database instance, and the second component takes the backup that we've just made and feeds it into this command. So this backup contains all the necessary definitions to create a new database and inject the data required, and this component of the command just allows us to connect to this new dedicated MariaDB instance.

      Now there are some placeholders that we need to change. The database name that we're going to use is the same, so a4lwordpress. We're still going to want to be prompted for a password, so -p is what we use. This time, though, we're going to connect using a user called a4lwordpress, so we're not using the root user; we're going to connect to this separate MariaDB database instance using the a4lwordpress user. The other thing that we need to change is that we need to connect to a non-local host. When we used the mysqldump command we didn't specify a host to connect to, and this defaulted to localhost, so the current machine. In the case of this command we're operating with a separate server, this dedicated EC2 instance which is running the MariaDB database server, so A4L-DB-WordPress, and we need to connect to it.

      What we'll need to connect is the private IP version 4 address of this separate database instance. So select it, look for private IP version 4 addresses, and then click on the icon next to this to copy the private IP version 4 address of this separate database server into your clipboard. Then return to the application instance, and we need to replace the placeholder here with that value. So make sure that you're one space after the end of this placeholder and just delete it, leave a space between -h and where the cursor is, and then paste in that IP address.

      So this is going to connect to this separate EC2 instance using its private IP. It's going to use the a4lwordpress user, it will prompt us for a password, it will perform the operation on the a4lwordpress database, and it's going to use the contents of this backup file to perform those tasks. So go ahead and press Enter and you'll be prompted for a password. Again it's the same password; this has all been set up as part of the CloudFormation one-click deployment. This lesson is about the migration process, not setting up a database server, so I've automated this component of the infrastructure. So copy the DB password into your clipboard, go back to the instance, paste it in and press Enter.

      So now we've uploaded our WordPress application database into this separate MariaDB database server. The next step is to configure WordPress to point at this new database server. To do that, cd space forward slash var forward slash www forward slash html and press Enter, and then we're going to run sudo, space, nano, which is a text editor, space, wp-config.php, which is the WordPress configuration file, and press Enter. Now what we're looking for, if we scroll down, is the line which says define, then a space, and then DB host. This is the database host that WordPress attempts to connect to, and currently it's set to localhost, which means it will use the database on the same EC2 instance as the application. We're going to delete this localhost, so delete until we have two single quotes, and then make sure that you still have the private IP version 4 address of this separate database instance in your clipboard; if you don't, just go ahead and copy it again from the EC2 console, and then paste that in place of localhost. So now you should see DB underscore host pointing at this private IP address. The private IP address that you should use here will be different; you need to use the private IP address of your A4L-DB-WordPress EC2 instance. Now that you've updated this configuration file, press Control O and Enter to save, and then Control X to exit out of editing this file.

      This now means that the WordPress instance is going to be communicating with the separate MariaDB database instance. Let's verify that. Go back to the tab that we have open to our WordPress application and just go ahead and do a refresh. If everything's working as expected, we should see that the blog reloads successfully, and this means that this blog is now pointing at this separate MariaDB database instance.

      To be doubly sure of this, though, let's go back to the WordPress instance and shut down the MariaDB database server, and we do that using this command: sudo, space, service, space, mariadb, space, and then stop. So type or copy and paste that command in and press Enter, and that's going to stop the MariaDB database service which is running on A4L-WordPress. Now the only MariaDB database that we have running is on the A4L-DB-WordPress EC2 instance. We can go back to the WordPress tab and hit refresh, and assuming it loads in, as it does in my case, this confirms that WordPress is communicating with this dedicated MariaDB EC2 instance.

      Now the reason why I wanted to step you through all these tasks in this demo lesson is that I'm a firm believer that in order to understand best practice architecture you need to understand bad architecture, and as I mentioned in the theory lesson, there is almost no justification for running your own self-managed database server on an EC2 instance. In almost all situations it's preferable to use the RDS service, but I need you to understand exactly how the architecture works when you're self-managing a database, and how to migrate from a monolithic all-in-one architecture through to having a separate self-managed database. In the demo lesson that's coming up next in the course, you're going to migrate from this through to an RDS instance, so that's step two, but at this point you've done everything that I wanted you to do in this demo lesson.
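      Before we tidy up, here's the whole migration in one place as a minimal sketch; the usernames and the 10.16.x.x private IP are placeholders, and the exact commands to use are in the lesson commands document:

        # 1. Dump the local WordPress database to a file (prompts for the DB password)
        mysqldump -u root -p a4lwordpress > a4lwordpress.sql

        # 2. Restore that dump into the dedicated MariaDB instance over the network
        mysql -h 10.16.x.x -u a4lwordpress -p a4lwordpress < a4lwordpress.sql

        # 3. Point WordPress at the new database host
        cd /var/www/html
        sudo nano wp-config.php    # change DB_HOST from 'localhost' to '10.16.x.x'

        # 4. Prove WordPress no longer needs the local database
        sudo service mariadb stop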
      You've implemented the architecture that's on screen now, on the right. All we need to do is to tidy up all of the infrastructure that we've used within this lesson, and to do that it's nice and easy: just go back to the CloudFormation console, make sure that you have the monolith to EC2 DB stack selected, click on the delete button and then confirm that deletion. That stack deleting will clean up all of the infrastructure that we've used throughout this demo lesson, and it will return the account to the same state as it was at the start of the lesson.

      At this point you've completed all of the tasks that I want you to do, so I hope you've enjoyed this demo lesson. Go ahead and complete this video, and when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      Now there's one final thing I want to talk about before we finish with this demo lesson, and that's private hosted zones.

      So move back to the Route 53 console.

      I'm going to go to Hosted Zones, and I'm going to create a private hosted zone.

      So click "Create Hosted Zone" because it's a private hosted zone, it doesn't even need to be one that I actually own.

      So I'm going to call my hosted zone "IlikeDogsReally.com".

      It's going to be a private hosted zone.

      And for now, I'm going to associate it with the default VPC in US-East-1.

      So I'm going to pick the region, US-East, and then select Northern Virginia, and then click in the VPC ID box, and we should see two VPCs listed.

      One is the Animals for Life VPC, it's tagged A4L-VPC1, but I'm not going to pick this one, I'm going to pick the one without any text after it, which is the default VPC.

      So once that's set, I'm going to create the hosted zone.

      Then inside the hosted zone, I'm going to create a record.

      The record's going to use the simple routing policy.

      Click on "Next".

      I'm going to define a simple record.

      I'm going to call it "www".

      The record type is going to be "A - routes traffic to an IPv4 address and some AWS resources".

      I'm going to click in this endpoint box and select IP address or another value, depending on record type.

      And then into this box, I'm just going to put a test IP address of 1.1.1.1.

      And then down at the bottom, I'm going to click "1M" to change this TTL to 60 seconds.

      And I'm going to click "Define simple record".

      And then finally, "Create records".

      So now we have a record called "www.ilikedogsreally.com".

      So copy that into the clipboard.

      Move back to the EC2 console.

      Click on "Dashboard".

      Click on "Instances running".

      Right click, "Connect".

      We're going to use EC2 "Instance connect".

      And then just click on "Connect".

      Now once connected, I'm going to try pinging the record which I just created.

      So "Ping Space" and then paste in "www.ilikedogsrealy.com" and press "Enter".

      What you should see is "Name or service not found".

      The reason for this is the private hosted zone which we created is currently associated with the default VPC.

      And this instance is not in the default VPC.

      To enable this instance to resolve records inside this private hosted zone, we need to associate it with the "Animals for Life" VPC.

      So go back to the Route 53 console.

      Expand "Hoster Zone Details" and then "Edit hosted zone".

      Scroll down and we're going to add another VPC.

      In the region drop down, "US-East-1" and then in the "Choose VPC" box select "A4L-VPC-1".

      Scroll down and save changes.

      Now this might take a few seconds to take effect; if we go back to the EC2 instance and try to run this ping again, we still get "Name or service not found".

      So what I want you to do is go ahead and pause this video, wait for 4 or 5 minutes and then resume and try this command again.

      Now in my case it took about 5 minutes, but after a while I can now ping www.ilikedogsreally.com because I've now associated this private hosted zone with the VPC that this instance is running from.
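      For reference, creating a private hosted zone and associating it with a second VPC can also be done from the CLI, roughly like this (a sketch; the caller reference, hosted zone ID and VPC IDs are placeholders):

        # create the private hosted zone, associated with the default VPC
        aws route53 create-hosted-zone \
          --name ilikedogsreally.com \
          --caller-reference my-unique-ref-001 \
          --vpc VPCRegion=us-east-1,VPCId=vpc-11111111

        # later, associate the Animals for Life VPC as well
        aws route53 associate-vpc-with-hosted-zone \
          --hosted-zone-id Z0123456789EXAMPLE \
          --vpc VPCRegion=us-east-1,VPCId=vpc-22222222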

      Now that's everything that I wanted to cover in this demo lesson, so all that remains is for us to clean up all of the infrastructure which we've created in this demo lesson.

      So if we go back to the Route 53 console and select "Health Checks", first we're going to delete the health check.

      So select "A4L Health" and click on "Delete Health Check" and confirm.

      Click on "Hostered Zones".

      Go inside the private hosted zone that you created.

      Select the www.ilikedogsreally.com record and then click on "Delete Record".

      Confirm that deletion.

      Go back to "Hostered Zones".

      Select the entire private hosted zone and click on "Delete".

      Type "Delete" and then click to confirm.

      And that will delete the entire private hosted zone.

      Then go inside the public hosted zone that you have.

      Select the two www records that you created earlier in this lesson.

      Click on "Delete Records".

      Click "Delete" to confirm.

      Then go to the S3 console.

      Click on the bucket that you created earlier in this lesson.

      Click "Empty".

      Copy and paste or type "Permanently Delete" and click on "Empty".

      Once that bucket is emptied click on "Exit".

      With it still selected click on "Delete".

      Copy and paste or type the full name into the box and click on "Delete Bucket".

      Then go to the EC2 console.

      Click the hamburger menu.

      Scroll down.

      Click "Elastic IPs".

      Select the elastic IP that you associated with the EC2 instance.

      Click on the actions drop down.

      Disassociate and then click to disassociate.

      With it still selected click on "Actions".

      Release elastic IP addresses and click on "Release".

      At that point all of the manually created infrastructure has been removed.

      Go back to the cloud formation console.

      Go to "Stacks".

      Select the stack that you created at the start of this lesson using the one click deployment.

      It should be called DNS and failover demo.

      Select it.

      Click on "Delete".

      Then click on "Delete Stack" to confirm that deletion.

      Once that's deleted the account will be back in the same state as it was at the start of the lesson.

      At this point that's everything I wanted to cover in this demo.

      I hope it's been enjoyable and it's given you some good practical experience of how to use failover routing and private hosted zones.

      That will be useful both for the exam and real world usage.

      At this point that's everything so go ahead and complete this video.

      When you're ready I'll look forward to you joining me in the next.

    1. Welcome to this demo lesson where you're going to get experience configuring fail-over routing as well as private hosted zones.

      Now with this demo lesson you have the choice of either following along in your own environment or watching me perform the steps.

      If you do wish to follow along in your own environment you will need a domain name that's registered within Route 53.

      Remember that was an optional step at the start of this course so if you did register a domain of your own then you can do this demo lesson.

      In my case I registered animalsforlife1337.org.

      If you registered a domain it will be different and so wherever you see me use animals for life.org you need to replace it with your registered domain.

      If you didn't register one then you'll have to watch me perform all of these steps because you can't do this lesson without your own registered domain.

      In order to get started, you need to make sure that you're logged in as the IAM admin user of the general AWS account, which is the management account of the organization, and you'll need to have the Northern Virginia region selected.

      Now we're going to need to create some infrastructure in order to perform this demo lesson so attached to this lesson is a one-click deployment link and you should go ahead and click that link now.

      That's going to take you to a quick create stack screen.

      Everything should be pre-populated: the stack name is DNS and failover demo. All you'll need to do is scroll down to the bottom, check this capabilities box and then click on create stack.

      That's going to take a few minutes, and it's going to create infrastructure that we're going to need to continue with the demo lesson. So go ahead and pause the video, wait for your stack to move into a create complete state, and then we're good to continue.

      Okay, so the stack's now in a create complete state and it's created a number of resources, the most important one being a public EC2 instance. So we just need to test this first: click in the search box, type EC2, and then right click to open that in a new tab.

      Once you're there, click on instances running and you should see a4l-web. Select it, and under public IP version 4 click on this symbol to copy the IP address into your clipboard; make sure you don't click open address because that's going to try to use HTTPS, which we don't want. So copy this IP address into your clipboard and open that in a new tab, and you should see the animals for life super minimal homepage. If you see that, it means everything's working as intended, so go ahead and close down that tab.

      Now we also need to give this instance an elastic IP address so that it has a static public IP version 4 address. To give it an elastic IP, on the menu on the left scroll down to the bottom, and under network and security select elastic IPs. Then we need to allocate an elastic IP: make sure us-east-1 is in this box, scroll down and click on allocate. Once the elastic IP address is allocated to this account, select it, click on actions and then associate elastic IP. Once we're at this screen, make sure instance is selected, click in this search box and then select a4l-web. Once selected, click in the private IP address box and select the private IP address of this instance, and then check the box to say allow this elastic IP address to be re-associated. Once all that's complete, click on associate, and that now means that our EC2 instance has been allocated a static IP version 4 address.

      Now we're configuring failover DNS, and so the EC2 instance is going to be our primary record. We're going to assume that this is the Animals for Life main website, and we want to configure an S3 bucket which runs as the backup in case this EC2 instance fails. So the next thing we need to do is to create the S3 bucket. Click in the search box, type S3, open that in a new tab and go to the S3 console, and at this point we're going to create an S3 bucket and configure it as a static website.

      Now the naming of the S3 bucket is important. Earlier in the course you should have registered a domain name; in my case I registered animalsforlife1337.org, so I'm going to create a bucket with the name www.animalsforlife1337.org. You need to create one which is called www. and then the domain name that you registered. So I'm going to click on create bucket, the bucket name is www.animalsforlife1337.org, and it's going to be in the US East Northern Virginia region, which is US-East-1. Then we're going to scroll down, and we're going to need to uncheck block all public access because this bucket is going to be used to host a static website. I'll need to acknowledge that I'm okay with that, so I'll do that, then scroll all the way down to the bottom and click on create bucket.

      Then I'm going to go inside the bucket, click on upload and then add files. Now attached to this lesson is an assets file; I want you to go ahead and download that file, then extract it, and once you've extracted it, it should create a folder called R53_zones_and_failover. Go inside that folder and there'll be two more folders, one which is 01_A4L website and another which is 02_A4L failover. We're interested in the A4L failover one, so go into that folder, select both of these files, index.html and minimal.jpeg, click on open and then upload those files. So we'll scroll down and click on upload, and once that's completed click on close.

      Then we're going to enable static website hosting. Click on properties, and to enable this, it's all the way down towards the bottom: click on edit next to static website hosting and enable it. Make sure that host a static website is selected, and then for the index document and the error document we're going to type index.html. Once both of those are entered, scroll down to the bottom and save changes.

      Now we've one final thing to do on this bucket: we need to add a bucket policy so that this bucket is public. So we need to click on permissions, scroll down, and then under bucket policy click on edit, and this bucket currently does not have a bucket policy.
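      The policy we're about to add is supplied in the lesson assets, but as a reference, a typical public-read policy for a static website bucket looks roughly like this (EXAMPLEBUCKET is a placeholder that gets replaced with your bucket's actual ARN in the next step):

        {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "PublicReadGetObject",
              "Effect": "Allow",
              "Principal": "*",
              "Action": "s3:GetObject",
              "Resource": "arn:aws:s3:::EXAMPLEBUCKET/*"
            }
          ]
        }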
      Now, also inside the assets folder that you extracted earlier in this lesson, there's a file called bucket_policy.json. This is the file, so you'll need to copy the contents of that file into your clipboard and paste it into this policy box. Then click on the icon next to the bucket ARN to copy that into your clipboard, and we need to replace the placeholder with what you've just copied. So I want you to select from just to the right of the first speech mark all the way through to before the forward slash, so you should have arn:aws:s3::: and then examplebucket, and then go ahead and paste the text from your clipboard, which will overwrite that with the actual bucket ARN, so it should look like this. Once you've got that, scroll down and save the changes.

      So now we have the failover website configured, the static website running from the S3 bucket. Next we need to move to the Route 53 console, where we're going to create a health check and configure the failover record. So click in the search box, type Route 53, right click and open that in a new tab, then click on health checks. We're going to create a health check: for the health check name type a4l health, and it's going to be an endpoint health check. Scroll down; we're going to specify the endpoint by IP address, the protocol's going to be HTTP, and we need the IP address of the EC2 instance. So if we go back to the EC2 console, the EC2 instance is now using the elastic IP, so scroll down, click on elastic IPs and copy the elastic IP into your clipboard, then go back to the Route 53 console and paste that in. The health check is going to be configured to health check the index.html document, so in path we need to click and type index.html.

      Then we're going to expand advanced configuration. By default a health check is checking every 30 seconds, so this is a standard health check; we need to change this to fast because we want our health check to react as fast as possible if our primary website fails. So select fast, scroll down to the bottom and click on next. We don't want to create an alarm, because we don't want to take any action if this health check fails; we're just going to use it as part of our failover routing. So go ahead and make sure no is selected and then click create health check.

      Now the health check is going to start off with an unknown status because it hasn't gathered enough information about the health of the primary website, and it's going to take a few minutes to move from this status to either healthy or unhealthy. What we can do, though, is select this health check and click on the health checkers tab to start to see the results of the globally distributed set of health check endpoints. We can see that we're already getting success HTTP status code 200, which is telling us our primary website is already passing these individual checks, and after a couple of minutes, if we hit refresh, we should see that the status changes from unknown to healthy.
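      Creating an equivalent health check from the CLI looks roughly like this (a sketch; the IP address is a placeholder for your elastic IP, and RequestInterval=10 corresponds to the "fast" option chosen above):

        aws route53 create-health-check \
          --caller-reference a4l-health-001 \
          --health-check-config IPAddress=203.0.113.10,Port=80,Type=HTTP,ResourcePath=/index.html,RequestInterval=10,FailureThreshold=3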
      So next we need to create the failover record. Click on hosted zones, locate the hosted zone for the domain that you registered at the start of the course, click it, and then click on create record. Now you can switch between two different modes, either the quick create record mode or the wizard mode; we're going to keep this demo simple, so click on switch to wizard. We're going to choose a failover record, so select failover and click next. We're going to call the record www, and we're going to set a TTL of one minute, so click 1m and that will change the TTL seconds to 60.

      Scroll down and we're going to define some failover records, so click define failover record. First we need to create the primary record. Click in this first drop down and pick IP address or another value depending on record type, and then we need the elastic IP address, so go back to the EC2 console, copy the elastic IP into your clipboard and paste that into this box. Then for failover record type, this is the primary record, so click on primary. We need to associate it with a health check, so click in that drop down and choose a4l health. Once we do that, it means that this primary record will only be returned if this health check is healthy; otherwise the secondary record will be returned, which we're going to define in a second. Under record ID just go ahead and type EC2; this needs to be unique within this set of records with the same name, so we're going to call one EC2 and the other S3, and this one's EC2. So define that failover record.

      Then we're going to define a new failover record, so click that box again. This time, in this drop down we need to scroll down and select alias to an S3 website endpoint. Select that, choose the region, and it needs to be US-East-1. Once selected, you should be able to click in this box and see the S3 bucket that you just created, so click on this to select that S3 bucket, and we're going to set this as the secondary record, so click on secondary. We won't be associating this with a health check and we won't be evaluating the target health; this record will only ever be used if the primary fails its health check, and so we want this record to take effect whenever the health check associated with the primary fails. We're going to test that by shutting down the EC2 instance, so this record should then take over. Finally, we need to enter S3 in the record ID and click on define failover record, and once we've done both of those we can go ahead and click on create records.

      So now that we have both of those records in place, the primary pointing at EC2 and the secondary at S3, if we copy this full DNS name into our clipboard and open it in a new tab, it should direct towards the animals for life.org super minimal homepage; remember, this is the website running on EC2. Now what we need to do is simulate a failure, so go back to the EC2 console, scroll to the top, click on EC2 dashboard, then instances running, right click on this instance, select stop instance and confirm that by clicking stop. Now that we've stopped this instance it should begin failing the health check, so let's go back to the Route 53 console, click on health checks, select the a4l health health check, click on the health checkers tab and then click on refresh, and over the coming seconds we should start to see some failure responses in this status column. There we go, we're getting connection timed out, and over the next minute or so we should see the overall status of the health check move from healthy to unhealthy. Let's click on refresh; it might take a minute or so for that to take effect, so let's just give it a minute. And now we can see that it's moved into an unhealthy state.

      This means that our failover record will detect this and start returning the secondary record rather than the primary. Now DNS does have a cache; remember we set the TTL value to 60 seconds, so one minute. But what we should find after that cache expires, if we go back to the tab which we have open to the www.animalsforlife.org website and hit refresh, is that it changes to the animals for life.org super minimal failover page, and this is the website that's running on S3. So the failover record has used a health check, detected the failure of the EC2 instance and redirected us towards the backup S3 site.

      Now we can go ahead and reverse that process. If we go back to the EC2 console, we can right click on this instance and start the instance. That will take a few minutes to move from the stopped state through the pending state and then finally to running, and once it's in a running state, if we go back to the Route 53 console, select this health check and then refresh on the health checkers, initially we'll see a number of different messages. If we keep hitting refresh over the next few minutes we should see this change to an okay message; there we can see the first HTTP status code 200, and if we keep refreshing we'll see more of those, again more 200 statuses, which means okay. Now that all of these are coming back okay, let's click refresh on the health check itself. It's still showing us unhealthy; let's give it a few more seconds. Now it's reporting as healthy again. If we go back to the tab that we have open to the website and click on refresh, it should change back to the original EC2 based website, and it does. That means our failover record has worked in both directions: it's failed over to S3 and failed back to EC2.

      Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side, so I wanted to add a break; it's an opportunity just to take a rest or grab a coffee. Part two will be continuing immediately from the end of part one, so go ahead and complete the video and, when you're ready, join me in part two.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public review):

      Summary:

      This paper describes the covalent interactions of small molecule inhibitors of carbonic anhydrase IX, utilizing a pre-cursor molecule capable of undergoing beta-elimination to form the vinyl sulfone and covalent warhead.

      Strengths:

      The use of a novel covalent pre-cursor molecule that undergoes beta-elimination to form the vinyl sulfone in situ. Sufficient structure-activity relationships across a number of leaving groups, as well as binding moieties that impact binding and dissociation constants.

      Overall, the paper is clearly written and provides sufficient data to support the hypothesis and observations. The findings and outcomes are significant for covalent drug discovery applications and could have long-term impacts on related covalent targeting approaches.

      Weaknesses:

      No major weaknesses were noted by this reviewer.

      Reviewer #2 (Public review):

      Summary:

      The authors utilized a "ligand-first" targeted covalent inhibition approach to design potent inhibitors of carbonic anhydrase IX (CAIX) based on a known non-covalent primary sulfonamide scaffold. The novelty of their approach lies in their use of a protected pre(pro?)-vinylsulfone as a precursor to the common vinylsulfone covalent warhead to target a nonstandard His residue in the active site of CAIX. In addition to a biochemical assessment of their inhibitors, they showed that their compounds compete with a known probe on the surface of HeLa cells.

      Strengths:

      The authors use a protected warhead for what would typically be considered an "especially hot" or even "undevelopable" vinylsulfone electrophile. This would be the first report of doing so making it a novel targeted covalent inhibition approach specifically with vinylsulfones.

      The authors used a number of orthogonal biochemical and biophysical methods including intact MS, 2D NMR, x-ray crystallography, and an enzymatic stopped-flow setup to confirm the covalency of their compounds and even demonstrate that this novel pre-vinylsulfone is activated in the presence of CAIX. In addition, they included a number of compelling analogs of their inhibitors as negative controls that address hypotheses specific to the mechanism of activation and inhibition.

      The authors employed an assay that allows them to assess target engagement of their compounds with the target on the surface of cells and a fluorescent probe which is generally a critical tool to be used in tandem with phenotypic cellular assays.

      Weaknesses:

      While the authors show that the pre-vinyl moiety is shown biochemically to be transformed into the vinylsulfone, they do not show what the fate of this -SO2CH2CH2OCOR group is in a cellular context. Does the pre-vinylsulfone in fact need to be in the active site of CAIX on the surface of the cell to be activated or is the vinylsulfone revealed prior to target engagement?

      I appreciate the authors acknowledging the limitations of using an assay such as thermal shift to derive an apparent binding affinity, however, it is not entirely convincing and leaves a gap in our understanding of what is happening biochemically with these inhibitors, especially given the two-step inhibitory mechanism. It is very difficult to properly understand the activity of these inhibitors without a more comprehensive evaluation of kinact and Ki parameters. This can then bring into question how selective these compounds actually are for CAIX over other carbonic anhydrases.

      The authors did not provide any cellular data beyond target engagement with a previously characterized competitive fluorescent probe. It would be critical to know the cytotoxicity profile of these compounds or even how they affect the biology of interest regarding CAIX activity if the intention is to use these compounds in the future as chemical probes to assess CAIX activity in the context of tumor metastasis.

      Reviewer #3 (Public review):

      Summary:

      Targeted covalent inhibition of therapeutically relevant proteins is an attractive approach in drug development. This manuscript now reports a series of covalent inhibitors for human carbonic anhydrase (CA) isozymes (CAI, CAII, and CAIX, CAXIII) for irreversible binding to a critical histidine amino acid in the active site pocket. To support their findings, they included co-crystal structures of CAI, CAII, and CAIX in the presence of three such inhibitors. Mass spectrometry and enzymatic recovery assays validate these findings, and the results and cellular activity data are convincing.

      Strengths:

      The authors designed a series of covalent inhibitors and carefully selected non-covalent counterparts to make their findings about the selectivity of covalent inhibitors for CA isozymes quite convincing. The supportive X-ray crystallography and MS data are significant strengths. Their approach of targeted binding of the covalent inhibitors to histidine in CA isozyme may have broad utility for developing covalent inhibitors.

      Weaknesses:

      This reviewer did not find any significant weaknesses. However, I suggest several points in the recommendation for the authors' section for authors to consider.

      Recommendations for the authors:

      Reviewing Editor Comments:

      The reviewers have made excellent suggestions. We believe a revised version addressing those points can improve the assessment and quality of your work.

      Reviewer #1 (Recommendations for the authors):

      (1) The beta-elimination process is referred to as a "rearrangement" in both the text and the Figure 2 legend. Based on the proposed mechanism the authors provided, it is a simple beta-elimination and conjugate addition mechanism, and is not a rearrangement mechanism. This change should be reflected in the text and Figure 2 legend.

      We have made the requested change from rearrangement to elimination reaction.

      (2) From a structure-based design perspective, it is not obvious why only large cyclo-alkyl groups were used to target the lipophilic pocket, with the exception of the phenyl carbamates. Perhaps this is background literature on CAIX that describes this? It seems like this is a flexible functional moiety that could be used to impact drug properties. Why were other lipophilic and especially more aromatic or heteroaromatic moieties not studied?

      The structure-affinity relationship of the lipophilic ring versus other moieties has been studied and reported previously in manuscripts: Dudutiene 2014, Zubriene 2017, Linkuviene 2018, chapter 16 by Zubriene (https://doi.org/10.1007/978-3-030-12780-0_16). The lipophilic ring served better than a flexible tail or an aromatic ring.

      (3) The color-coded "correlation map" in Figure 8 is difficult to follow. Perhaps a standard SAR table with selectivity and affinity values would be easier to read and follow.

      We are trying to promote “correlation maps” because in our opinion they are easier to follow than tables.

      (4) Although there is a statement for this in line 254 of the SI, the compound numbering in the SI, vs. the numbering used in the manuscript is confusing. The standard format for these is to consecutively number all compounds and have identical compound numbers in both the SI and manuscript. The synthetic intermediates included in the SI can be identified by IUPAC names.

      An additional numbering system had to be made because the synthesis was described in the supplementary materials. We would prefer to leave the numbering as in the current manuscript. There are quite a few intermediate compounds that we assigned intermediate numbers such as 20x in order to make it simpler to distinguish intermediate synthesis compounds from compounds that were studied for binding affinity.

      (5) Ranges of isolated yields for the synthetic steps in SI schemes SI, S2, and S3 need to be included.

      We have remade the SI schemes S1, S2, and S3 to include the yields of each compound.

      (6) Presumably, the AcOH/H2O2 reaction forms the sulfones and not sulfoxides when heat is used. In the SI, the structures of 9x and 10x are shown to be sulfoxides and not sulfones. Initially, this appears to be a simple structural mistake; however, it is concerning, since the HRMS data reported (for compound 9x) is for the sulfoxide (HRMS for C8H7F4NO4S2 [(M+H)+]: calc. 321.9825, found 321.9824. 482) and not the sulfone? In the synthesis scheme S1, condition "C" is used for both the sulfoxide and sulfone synthesis (i.e. 3ax to 9x vs. 12x to 13x). It appears the sulfoxide is prepared using a room temperature procedure, vs. the sulfone requiring 75 degrees centigrade heat. These two similar conditions need to be designated as different synthetic steps in the schemes with the specific conditions noted since the products formed are different.

      We have made requested corrections/adjustments and added separate reaction conditions for sulfoxide synthesis in SI scheme S1.

      Reviewer #2 (Recommendations for the authors):

      I appreciate that it's difficult to determine parameters such as kinact or Ki of such potent inhibitors and ones that work by a two-step mechanism. I might suggest characterizing the steps separately to determine the detailed parameters. Maybe something like NMR for the activation step and SPR for the kinact and Ki of the unmasked vinylsulfone?

      We agree that such information would be helpful. However, it requires significant effort and equipment and will be performed in a separate study.

      I always advocate for at least a global proteomics analysis using a pulldown probe to get an idea of the specificity profile, especially for the so-far untried and untested pre-vinylsulfone moiety.

      We fully agree that the pull-down assay is a good idea. However, this major task will be performed in a separate study.

      This might be picky but wouldn't this be considered a pro-vinylsulfone rather than pre-vinylsulfone? Just as the term "prodrug" is used?

      We agree that both the pre-vinylsulfone and pro-vinylsulfone are suitable names. However, in pharmacology, the prodrug is common, but in organic synthesis, the precursor is commonly used. Therefore, we prefer to keep the pre-vinylsulfone.

      I would also be curious to know what species is responsible for activating the compound to the vinylsulfone. Maybe make some key point mutations of nearby basic residues?

      The His64 formed the covalent bond, thus His64 was the likely activating base. Preparing a mutation could be a good path for future studies.

      Reviewer #3 (Recommendations for the authors):

      (1) The authors presented only a close-up view of the active site with a 2Fo-Fc map mesh in three panels of Figure 4. For readers unfamiliar with the carbonic anhydrase field, adding a complete illustration of each protein-inhibitor complex (protein in cartoon mode and ligand in stick) will be helpful. Also, an image of the 180º rotation of the close-up view presented in each panel should be added. Depicting h-bonds between critical residues (Asn62, Gln 92, etc.) with dashed lines and marking the distances will be helpful for readers.

      We have prepared the requested picture for CAIX. Panels on the left show a whole-protein view of the ligands bound to each isozyme, and there are two close-up views for each structure, rotated 180 degrees.

      (2) Line 198 should be revised to refer to the correct complexes. 20, 21, and 23 should be 21, 20, 23.

      We appreciate that the reviewer noticed this error. We corrected the mistake.

      (3) Omit electron density maps around each ligand in Figure 4 should be included for compounds 20, 21, and 23, perhaps as a supplementary figure.

      Detailed electron density map information is provided in the mtz files that have been submitted to the PDB. We think the omit maps are not necessary in the supplementary materials.

      (4) The cyclooctyl group is stabilized by hydrophobic active site residues, L131, A135, L141, and L198. However, only L131 is shown in Figure 4. All residues that stabilize the ligands should be shown.

      For clarity purposes of the figure, we have omitted some of the residues that make contact with the ligand molecule. We think that the structure provided to the PDB could be analyzed in detail to see all contacts between the ligand and protein molecule.

      (5) The supplementary table S1 lacks the crystallographic data on the CAIX-23 complex.

      We have added a new version of the supplementary materials that contains the crystallographic data on the CAIX-23 complex.

      (6) A minor peak (30213 Da) with a 638 Dalton shift compared to the unmodified enzyme is for Figure 5A, not Figure 5B, as mentioned in line 235. This sentence in line 235 should be corrected.

      We corrected this mistake.

      (7) As the authors stated in the text, a minor peak (30213 Da) represents a potential second binding site. Can they revisit their electron density maps and show any residual density if it is present around a second histidine residue? The MS data in Figure S17C indicates the presence of additional sites for compound 12. Thus, additional electron density around the secondary and tertiary sites is possible.

      CAII contains His3 and His4, which are at the N-terminus of the protein and not visible in the crystal structure. The NMR data indicate that the additional modification may occur at one of these His residues.

      (8) MS data were presented for compounds 12 and 22 in Figure 5A, B, but the co-crystal structures were generated with compounds 21, 20, and 23. Why was no MS data included for compounds 20, 21, and 23? Would these compounds show the presence of a secondary binding site? Can authors include the MS data?

      In the main body of the manuscript, in Figure 5A, we only present MS data on CAXIII with compound 12. It is only an example that confirms the covalent interaction. In the supplementary materials we have MS data for compound 12 with all carbonic anhydrase isozymes and for compound 20 with almost all (except CAVI) CA isozymes. There are also MS data provided for numerous compounds (3, 9, 13, and others) and CA isozymes that serve as controls or confirmation of covalent bond formation.

      (9) The coordination between the zinc ion and NH of the ligand is mentioned in the enzyme schematic in Figure 3. Can the distances and coordination with Zinc be illustrated in ligand-bound structures in Figure 4?

      We considered and decided that picture which shows the numerous distances between ligand atoms and protein residues would be difficult to follow. The structures provided to the PDB could be analyzed for every aspect of the complex structure.

      (10) A key difference between covalent (compound 12) and its non-covalent counterpart, compound 5, is the two oxygens attached to sulfur in compound 12. Do protein side chains or water interact with these oxygens? Are these oxygen atoms exposed to solvent? Can authors show the interactions or clarify if there is no interaction?

      The two oxygens in the ligand molecule serve several purposes. First, they withdraw electrons and lower the pKa of the sulfonamide, thus making the interaction stronger. Second, the oxygen atoms may make contacts (hydrogen bonds) with the protein molecule and may also be important for covalent bond formation. Exact energy contributions cannot be determined directly from the structure. Thus, we decided not to delve into this area yet.

      (11) Fix the font size of the text in lines 355-356.

      The font has been corrected.

    1. Reviewer #2 (Public review):

      The manuscript "Spatial frequency adaptation modulates population receptive field sizes" is a heroic attempt to untangle a number of visual phenomena related to spatial frequency using a combination of psychophysical experiments and functional MRI. While the paper clearly offers an interesting and clever set of measurements supporting the authors' hypothesis, my enthusiasm for its findings is somewhat dampened by the small number of subjects, high noise, and lack of transparency in the report. Despite several of the methods being somewhat heuristically and/or difficult to understand, the authors do not appear to have released the data or source code nor to have committed to doing so, and the particular figures in the paper and supplements give a view of the data that I am not confident is a complete one. If either data or source code for the analyses and figures were provided, this concern could be largely mitigated, but the explanation of the methods is not sufficient for me to be anywhere near confident that an expert could reproduce these results, even starting from the authors' data files.

      Major Concerns:

      I feel that the authors did a nice job with the writing overall and that their explanation of the topic of spatial frequency (SF) preferences and pRFs in the Introduction was quite nice. One relatively small critique is that there is not enough explanation as to how SF adaptation would lead to changes in pRF size theoretically. In a population RF, my assumption is that neurons with both small and large RFs are approximately uniformly distributed around the center of the population. (This distribution is obviously not uniform globally, but at least locally, within a population like a voxel, we wouldn't expect the small RFs to be on average nearer the voxel's center than the voxel's edges.) Why then would adaptation to a low SF (which the authors hypothesize results in higher relative responses from the neurons with smaller RFs) lead to a smaller pRF? The pRF size will not be a function of the mean of the neural RF sizes in the population (at least not the neural RF sizes alone). A signal driven by smaller RFs is not the same as a signal driven by RFs closer to the center of the population, which would more clearly result in a reduction of pRF size. The illustration in Figure 1A implies that this is because there won't be as many small RFs close to the edge of the population, but there is clearly space in the illustration for more small RFs further from the population center that the authors did not draw. On the other hand, if the point of the illustration is that some neurons will have large RFs that fall outside of the population center, then this ignores the fact that such RFs will have low responses when the stimulus partially overlaps them. This is not at all to say that I think the authors are wrong (I don't) - just that I think the text of the manuscript presents a bit of visual intuition in place of a clear model for one of the central motivations of the paper.

      The fMRI methods are clear enough to follow, but I find it frustrating that throughout the paper, the authors report only normalized R2 values. The fMRI stimulus is a very interesting one, and it is thus interesting to know how well pRF models capture it. This is entirely invisible due to the normalization. This normalization choice likely leads to additional confusion, such as why it appears that the R2 in V1 is nearly 0 while the confidence in areas like V3A is nearly 1 (Figure S2). I deduced from the identical underlying curvature maps in Figures 4 and S2 that the subject in Figure 4 is in fact Participant 002 of Figure S2, and, assuming this deduction is correct, I'm wondering why the only high R2 in that participant's V1 (per Figure S2) seems to correspond to what looks like noise and/or signal dropout to me in Figure 4. If anything, the most surprising finding of this whole fMRI experiment is that SF adaptation seems to result in a very poor fit of the pRF model in V1 but a good fit elsewhere; this observation is the complete opposite of my expectations for a typical pRF stimulus (which, in fairness, this manuscript's stimulus is not). Given how surprising this is, it should be explained/discussed. It would be very helpful if the authors showed a map of average R2 on the fsaverage surface somewhere along with a map of average normalized R2 (or maps of each individual subject).

      On page 11, the authors assert that "Figure 4c clearly shows a difference between the two conditions, which is evident in all regions." To be honest, I did not find this to be clear or evident in any of the highlighted regions in that figure, though close inspection leads me to believe it could be true. This is a very central point, though, and an unclear figure of one subject is not enough to support it. The plots in Figure 5 are better, but there are many details missing. What thresholding was used? Could the results in V1 be due to the apparently small number of data points that survive thresholding (per Figure S2)? I would very much like to see a kernel density plot of the high-adapted (x-axis) versus low-adapted (y-axis) pRF sizes for each visual area. This seems like the most natural way to evaluate the central hypothesis, but it's notably missing.

      Regarding Figure 4, I was curious why the authors didn't provide a plot of the difference between the PRF size maps for the high-adapted and low-adapted conditions in order to highlight these apparent differences for readers. So I cut the image in half (top from bottom), aligned the top and bottom halves of the figure, and examined their subtraction. (This was easy to do because the boundary lines on the figure disappear in the difference figure when they are aligned correctly.) While this is hardly a scientific analysis (the difference in pixel colors is not the difference in the data) what I noticed was surprising: There are differences in the top and bottom PRF size maps, but they appear to correlate spatially with two things: (1) blobs in the PRF size maps that appear to be noise and (2) shifts in the eccentricity maps between conditions. In fact, I suspect that the difference in PRF size across voxels correlates very strongly with the difference in eccentricity across voxels. Could the results of this paper in fact be due not to shifts in PRF size but shifts in eccentricity? Without a better analysis of the changes in eccentricity and a more thorough discussion of how the data were thresholded and compared, this is hard to say.

      While I don't consider myself an expert on psychophysics methods, I found the sections on both psychophysical experiments easy to follow and the figures easy to understand. The one major exception to this is the last paragraph of section 4.1.2, which I am having trouble following. I do not think I could reproduce this particular analysis based on the text, and I'm having a hard time imagining what kind of data would result in a particular PSE. This needs to be clearer, ideally by providing the data and analysis code.

      Overall, I think the paper has good bones and provides interesting and possibly important data for the field to consider. However, I'm not convinced that this study will replicate in larger datasets - in part because it is a small study that appears to contain substantially noisy data but also because the methods are not clear enough. If the authors can rewrite this paper to include clearer depictions of the data, such as low- and high-adapted pRF size maps for each subject, per visual-area 2D kernel density estimates of low- versus high-adapted pRF sizes for each voxel/vertex, clear R2 and normalized-R2 maps, this could be much more convincing.

    1. primary political, ethical, and critical objective of this dissertation

      I know this is the introduction/prospectus, but can it be accessible from the keywords in the bar at the top? It's buried right now and is too damn good to be buried. I'm not sure what the keyword would be--- so many possibilities though. Maybe it's just called "keywords" since that part here is dope too.

    1. At Halo Salon in George Town, clients know they’re in the hands of a true expert. With over a decade of experience, Jackie Soriano has earned her reputation as the trusted name in luxury hair care. Whether it’s a precision cut, a rich color transformation, or advanced techniques that set the standard, her work consistently embodies quality and attention to detail. Jackie approaches each client with care, taking the time to understand their unique style and needs. Every service is thoughtfully tailored to ensure that the result feels natural and effortless. In her chair, you experience not just a service, but the consistency and personalized care that makes every visit exceptional.

      Same as above

    2. At Jackie Soriano, we don’t just provide services; we create experiences that embody luxury and sophistication. Each facet of our work, whether it’s in the salon, on your wedding day, or behind the scenes of a production, is meticulously crafted with an eye for detail and a passion for excellence.

      Not sure why but this font is looking thicker/darker than in the mockup

    1. Personally, social media has impacted me in a way that makes me feel the need to detox from it every now and then, as it's an unhealthy feeling to just be doom scrolling on it.

    2. You know, it forces kids to not just live their experience but be nostalgic for their experience while they’re living it,

      This is a very accurate comment, relating to how young people are all acting almost in hindsight - less focused on what they want to do or say now and more focused on what their future selves will be able to reflect upon. I think that's why gen z has such a fixation on nostalgia, in the way we always pull from the past when it comes to fashion, trends, jokes, etc. When you have that disconnect between your actions and yourself, it's much harder to feel human, and so we feel like we have to mimic what we've seen before.

    1. The first of the swords was by all accounts a fine sword, however, it is a blood-thirsty, evil blade, as it does not discriminate as to who or what it will cut. It may just as well be cutting down butterflies as severing heads. The second was by far the finer of the two, as it does not needlessly cut that which is innocent and undeserving.

      this is "all swords" and makes no sense in this context; all swords do not discriminate "who they cut down" ... it's pretty clear the change is moronic; and that this original legend was of a sword that was deisgned to "discriminate and specifically ...

      oh, it's the second one

    1. I tend to think when you’re at the top of the food chain it’s okay to flaunt it, because I don’t see anything complicated about drawing a moral boundary between us and other animals, and in fact find it offensive to women and people of color that all of a sudden there’s talk of extending human-rights-like legal protections to chimps, apes, and octopuses, just a generation or two after we finally broke the white-male monopoly on legal personhood.

      They view humans as superior over nature.

    2. When my father was born in 1938—among his first memories the news of Pearl Harbor and the mythic air force of the industrial propaganda films that followed—the climate system appeared, to most human observers, steady.

      Change in just a few generations. It's no longer a stable climate.

    Annotators

    1. Most communication scholars believe that discussion among members has a significant effect on the quality of group decisions. Traditional wisdom suggests that talk is the medium, channel, or conduit through which information travels between participants.13 Verbal interaction makes it possible for members to (1) distribute and pool information, (2) catch and remedy errors, and (3) influence each other. But distractions and nonproductive conversation create channel noise, causing a loss of information. Group researcher Ivan Steiner claimed that:14 Actual Group Productivity = Potential Productivity − Losses Due to Processes. It follows that communication is best when it doesn’t obstruct or distort the free flow of ideas. While not rejecting this traditional view, Hirokawa believes that communication plays a more active role in crafting quality decisions. Like social constructionists (see Chapters 5 and 11), he regards group discussion as a tool or instrument that group members use to create the social reality in which decisions are made.15 Discussion exerts its own impact on the end product of the group.

      The role of communication in group decision-making is essential, impacting both the quality and process of group decisions. Some key points are that communication is seen as a medium/channel through which information flows. This enables members to identify and correct mistakes, and influence one another’s opinions. While acknowledging traditional ideas, Hirokawa sees communication as more than a conduit; it’s instrumental in shaping reality within a group. His perspective claims that communication does not just transfer information but helps create a social reality where decisions take shape. Overall, for group communication to enhance decision quality, members should focus on meaningful discussion, minimize distractions, and engage actively to co-construct ideas, rather than merely exchange information.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      With socioeconomic development, more and more people are obese which is an important reason for sub-fertility and infertility. Maternal obesity reduces oocyte quality which may be a reason for the high risk of metabolic diseases for offspring in adulthood. Yet the underlying mechanisms are not well elucidated. Here the authors examined the effects of maternal obesity on oocyte methylation. Hyper-methylation in oocytes was reported by the authors, and the altered methylation in oocytes may be partially transmitted to F2. The authors further explored the association between the metabolome of serum and the altered methylation in oocytes. The authors identified decreased melatonin. Melatonin is involved in regulating the hyper-methylation of high-fat diet (HFD) oocytes, via increasing the expression of DNMTs which is mediated by the cAMP/PKA/CREB pathway.

      Strengths:

      This study is interesting and should have significant implications for the understanding of the transgenerational inheritance of GDM in humans.

      Thank you for your positive comments on our manuscript.

      Weaknesses:

      The link between altered DNA methylation and offspring metabolic disorders is not well elucidated; how the altered DNA methylation in oocytes escapes reprogramming in transgenerational inheritance is also unclear.

      Thanks. These are very good questions. There is still a long way to go to completely elucidate the relationship between methylation and offspring metabolic disorders, and the mechanisms by which acquired methylation escapes reprogramming during development. We would like to explore these questions in the future.

      Reviewer #2 (Public Review):

      This manuscript offers significant insights into the impact of maternal obesity on oocyte methylation and its transgenerational effects. The study employs comprehensive methodologies, including transgenerational breeding experiments, whole genome bisulfite sequencing, and metabolomics analysis, to explore how high-fat diet (HFD)-induced obesity alters genomic methylation in oocytes and how these changes are inherited by subsequent generations. The findings suggest that maternal obesity induces hyper-methylation in oocytes, which is partly transmitted to F1 and F2 oocytes and livers, potentially contributing to metabolic disorders in offspring. Notably, the study identifies melatonin as a key regulator of this hyper-methylation process, mediated through the cAMP/PKA/CREB pathway.

      Strengths:

      The study employs comprehensive methodologies, including transgenerational breeding experiments, whole genome bisulfite sequencing, and metabolomics analysis, and provides convincing data.

      Thank you for your positive comments on our manuscript.

      Weaknesses:

      The description in the results section is somewhat verbose. This section (lines 126~227) utilized transgenerational breeding experiments and methylation analysis to demonstrate that maternal obesity-induced alterations in oocyte methylation (including hyper-DMRs and hypo-DMRs) can be partially transmitted to F1 and F2 oocytes and livers. The authors should consider condensing and revising this section for clarity and brevity.

      Thanks for your suggestions. We have re-written these parts in the revised manuscript.

      There is a contradiction with Reference 3, but the discrepancy is not discussed. In this study, the authors observed an increase in global methylation in oocytes from HFD mice, whereas Reference 3 indicates Stella insufficiency in oocytes from HFD mice. This Stella insufficiency should lead to decreased methylation (Reference 33). There should be a discussion of how this discrepancy can be reconciled with the authors' findings.

      Thanks for your suggestions. As reported in Reference 33, STELLA prevents hypermethylation in oocytes by sequestering UHRF1 (which recruits DNMT1 into the nucleus) outside the nucleus. Han et al. reported that obesity induced by a high-fat diet reduces the STELLA level in oocytes. These findings indicate that STELLA insufficiency might induce hypermethylation in oocytes, although significant hypermethylation in obese oocytes was not reported by Han et al. using immunofluorescence. This contradiction may be caused by the limited sample size (n=14) used by Han et al. We have added a brief discussion in the revised manuscript.

      Reviewer #3 (Public Review):

      Summary:

      Maternal obesity is a health problem for both pregnant women and their offspring. Previous works including work from this group have shown significant DNA methylation changes for offspring of obese pregnancies in mice. In this manuscript, Chao et al digested the potential mechanisms behind the DNA methylation changes. The major observations of the work include transgenerational DNA methylation changes in offspring of maternal obesity, and metabolites such as methionine and melatonin correlated with the above epigenetic changes. Exogenous melatonin treatment could reverse the effects of obesity. The authors further hypothesized that the linkage may be mediated by the cAMP/PKA/CREB pathway to regulate the expression of DNMTs.

      Strengths:

      The transgenerational change of DNA methylation following HFD is of great interest for future research to follow. The metabolic treatment that could change the DNA methylation in oocytes is also interesting and has potential relevance to future clinical practice.

      Thank you for your positive comments on our manuscript.

      Weaknesses:

      The HFD oocytes have more 5mC signal based on staining and sequencing (Fig 1A-1F). However, the authors also identified almost equal numbers of hyper- and hypo-DMRs, which raises questions regarding where these hypo-DMRs were located and how to interpret their behaviors and functions. These questions are also critical to address in the following mechanistic dissections as the metabolic treatments may also induce bi-directional changes of DNA methylation. The authors should carefully assess these conflicts to make the conclusions solid.

      Thanks for the helpful comments and suggestions. As presented in Fig. 1F, there is an increase in methylation level in promoter and exon regions and a decrease in intron, utr3 and repeat regions. Following the suggestions, we further analyzed the distribution of DMRs and found that, compared with hyper-DMRs, hypo-DMRs were mainly distributed in utr3, intron, repeat, and tes regions (Fig. S3). These results suggest that the distribution of DMRs in the genome is not random.

      The transgenerational epigenetic modifications are controversial. Even for F0 offspring under maternal obesity, there were different observations compared to this work (Hou, YJ., et al. Sci Rep, 2016). The authors should discuss the inconsistencies with previous works.

      Thanks for the suggestions. There are contradictions in the literature on the whole-genome DNA methylation of oocytes in obese mice. Hou YJ et al. in 2016 reported that obesity reduces the whole-genome DNA methylation of NSN GV oocytes using immunofluorescence. In 2018, Han LS et al. reported that the whole-genome 5mC of oocytes is not significantly influenced by obesity using immunofluorescence, but they found that the Stella level in oocytes is reduced by obesity. Stella localizes to the cytoplasm and nuclei of oocytes and sequesters Uhrf1 from the nuclei. Stella knockout in oocytes results in an approximately twofold increase of global methylation in MII oocytes via recruiting more DNMT1 into the nuclei. These findings suggest that the global methylation of oocytes in obese mice should be increased, yet similar methylation in oocytes between obese and non-obese mice was reported by Han LS et al. Thus, the contradiction may arise from the different sample sizes used in our manuscript and previous studies, and from the fact that Hou YJ and colleagues examined the methylation of NSN GV oocytes only. As shown in Stella+/- oocytes, the global methylation of oocytes is normal, which suggests that Stella insufficiency may not be the main reason for the increased methylation of oocytes in obese mice. We have added a brief discussion in the revised manuscript.

      In addition to the above inconsistencies, the DNA methylation analysis in this work was not carefully evaluated. Several previous works were evaluating the DNA methylation in mice oocytes, which showed global methylation levels of around 50% (Shirane K, et al. PLoS Genet, 2013; Wang L., et al, Cell, 2014). In Figure 1E, the overall methylation level is about 23% in control, which is significantly different from previous works. The authors should provide more details regarding the WGBS procedure, including but not limited to sequencing coverage, bisulfite conversion rate, etc.

      Thanks for the good questions. Smallwood et al. reported that the CG methylation of MII oocytes is about 33.1% using single-cell genome-wide bisulfite sequencing (Smallwood et al. Nature Methods, 2014). Shirane K et al. reported that the average methylation level of GV oocytes is 37.9%. Kobayashi H et al. reported that the CG methylation in GV oocytes is about 40% (Kobayashi H et al. PLoS Genet. 2012). CG methylation in fully grown oocytes is about 38.7% (Maenohara S et al. PLoS Genet. 2017). The variation of methylation in oocytes is associated with sequencing methods, sequencing depth, and mapping rates. In the present study, whole-genome bisulfite sequencing (WGBS) for small samples and the methylation analysis were performed by NovoGene. The read counts are 31,613,641 to 37,359,643, the unique mapping rate is ≥32.88%, the conversion rate is >99.44%, and the sequencing depth is 2.45 to 2.75. The relevant information is presented in Table S1. The sequencing depth might be a reason for the inconsistency. However, we further confirmed our sequencing results using bisulfite sequencing (BS), and the BS and WGBS results are similar. These findings suggest that our results are reliable.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      (1) Since the results show that melatonin may play a role in hyper-methylation, the authors need to give some basic information in the Introduction section.

      Thanks. We added more information in the section of Introduction.

      (2) There are many differential metabolites identified. Besides melatonin, other differential metabolites are involved in the altered methylation in oocytes

      This is a good question. We first filtered the differential metabolites that may be involved in methylation, and then further filtered these metabolites according to the relevant DNA methylation pathways and published papers. After that, we confirmed the concentrations of the relevant metabolites in the serum using ELISA. Certainly, we cannot completely exclude all the metabolites that might be involved in regulating DNA methylation.

      (3) The altered methylation would be found in the F1 tissues. Did the authors examine the other parts besides the liver?

      Thank you. In the present study, we did not examine DNA methylation in tissues other than the liver. We agree that altered methylation would likely also be observed in other tissues.

      (4) Did the authors try or guess how many generations the maternal obesity-induced genomic methylation alterations can be transmitted?

      Thanks. This is a good question. Takahashi Y and colleagues reported that acquired DNA methylation at CpG islands can be transmitted across multiple generations using DNA methylation-edited mice (Takahashi Y et al. 2023, Cell). Similar inheritance has also been reported by other studies using different models.

      (5) The F2 is indirectly affected by maternal obesity, so the evidence is not enough to prove the transgenerational inheritance of the altered methylation.

      Thanks. We found that the altered DNA methylation in F2 tissue and oocytes is similar to that in F1 oocytes. This suggests that the altered DNA methylation in F2 oocytes should be at least partly transmitted to F3. A previous paper (Takahashi Y et al. 2023, Cell) confirms that acquired DNA methylation at CpG islands can be transmitted across several generations through the paternal and maternal germ lines. Certainly, it would be better if this were examined in F3 tissues.

      Reviewer #2 (Recommendations For The Authors):

      (1) Figure Font Size: The font sizes in the figures are quite inconsistent. Please try to uniform the font size of similar types of text.

      Thanks for your suggestions. We re-edited the relevant figures in the revised manuscript.

      (2) Figure Clarity: Ensure that all critical information in the figures is clearly visible, such as in Figure 3C.

      Thank you. We revised this figure.

      (3) Figure 1B, C: The position of the asterisks ("**") is not centered in the corresponding columns, and the font size is too small. Please correct this and address similar issues in other figures.

      Thank you for your suggestions. We re-edited these in the revised figures.

      (4) Line 126: The current expression is confusing. It may be revised to: "Both the oocyte quality and the uterine environment can contribute to adult diseases, which may be mediated by epigenetic modifications."

      Thanks. We revised this sentence in the revised manuscript.

      (5) Missing Panel in Figure 3: Figure 3 is missing panel 3N.

      Thank you so much. We corrected it in the revised manuscript.

      (6) Figure Panel Order: Please adjust the order of the panels in the figures to follow a logical reading sequence.

      Thank you. We changed the orders in the revised manuscript.

      (7) Line 493: Correct "inthe" to "in the".

      Thank you. We revised it.

      (8) Lines 102-106: Polish the wording and expression, an example as follows: "We analyzed the differentially methylated regions (DMRs) in oocytes from both HFD and CD groups and identified 4,340 DMRs. These DMRs were defined by the criteria: number of CG sites {greater than or equal to} 4 and absolute methylation difference {greater than or equal to} 0.2. Among these, 2,013 were hyper-DMRs (46.38%) and 2,327 were hypo-DMRs (53.62%) (Fig. 1G). These DMRs were distributed across all chromosomes (Fig. 1H). "

      Thank you! We re-wrote these parts in the revised manuscript.

      Reviewer #3 (Recommendations For The Authors):

      The sample numbers should be annotated in the figure legend for all the bar plots using Image J. The lines in Figures 2B and 2C were without error bars. How many mice were used for these plots?

      Thanks for your suggestions. We added the sample size in the revised manuscript. We made a mistake when we prepared the pictures for figure 2B and figure 2C, which resulted in missing the error bars. We have corrected these pictures. Thanks again!

      The authors should revise the panel arrangement of the figures (Figure 2, Figure 5, etc) to make them more clear and readable.

      Thank you! We have revised these in the revised manuscript.

      The writing should be improved since there were multiple typos and unclear expressions. AI tools like Grammarly or ChatGPT may help.

      Thank you! We have re-edited the language in the revised manuscript using AI tools.

      Please recheck the immunofluorescence images for clear interpretability. For example, in Figure 5F (H89 treated), the GV is all the way at the edge of the oocyte, and the oocyte in the DIC image appears like it is partially lysed. The DIC images and the DAPI images are not clear enough.

      Thanks for your suggestions. We have re-edited these pictures in the revised manuscript.

      Another concern is that the Methods describes the immunofluorescence preparation for 5mC and 5hmC staining as a simple fixation in 4% paraformaldehyde followed by permeabilization with .5% TritonX-100, but there is no antigen exposure step described, a step that is normally required for visualizing these DNA modifications (e.g., 4N HCl).

      Thanks. We are sorry that we did not describe the methods clearly. We have added more information about the methods in the revised manuscript.

      The metabolomic analysis revealed a highly significant increase in dibutylphthalate, genistein, and daidzein in the control mice. The presence of these exogenous metabolites suggests that the diets differed in many aspects, not just fat content, so it would be very difficult to interpret the results as related to a high-fat diet alone. Both daidzein and genistein are phytoestrogens and dibutylphthalate is a plasticizer, suggesting differences in the diet and/or in the materials used to collect the samples for analysis from the mice. The Methods define the high-fat diet adequately, as the formulation can be found online using the catalog number. However, the control diet is just listed as "normal diet", so one has no idea what is in it

      Thank you for your good questions. The daidzein and genistein may come from the diets, and the dibutyl phthalate may come from the materials used to collect samples. If so, these should be similar between groups. We have therefore added the formulation of the normal diet to the revised manuscript. The raw materials of the normal diet include corn, bean pulp, fish meal, flour, yeast powder, plant oil, salt, vitamins, and mineral elements. Following the suggestions, we re-checked the data for these metabolites and found that their abundance was low. In addition, the results for these metabolites were at a low confidence level because their ions were only mapped to ChemSpider (HMDB, KEGG, LIPID MAPS). To further confirm these results, we examined these metabolites in serum using ELISA, and the results revealed that the concentrations of genistein and dibutyl phthalate were similar between groups. These results suggest that these metabolites may not be involved in the altered methylation of oocytes induced by obesity.

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      So let's go back to the instance now.

      Just press enter a few times to make sure it hasn't timed out.

      That's good.

      Now there's a small bug fix that we need to do before we move on.

      The CloudWatch agent expects a piece of system software to be installed called CollectD.

      And on Amazon Linux that is not installed.

      So we need to do two things.

      The first is to create a directory that the agent expects to exist.

      So run that command.

      And the second is to create a database file that the agent also expects to exist.

      And we can do that by running this command.
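
      The exact commands are in the lesson commands document; a common form of this CollectD workaround on Amazon Linux (stated here as an assumption, since the transcript doesn't spell the commands out) looks like this:

        # Create the directory the CloudWatch agent expects for CollectD...
        sudo mkdir -p /usr/share/collectd

        # ...and an empty types database file so the agent can start without CollectD installed.
        sudo touch /usr/share/collectd/types.db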

      Now at this point we're ready to move on to the final step.

      So we've installed the agent and we've run the configuration wizard to generate the agent configuration.

      And that configuration is now safely stored inside the parameter store.

      The final step is to start up the CloudWatch agent and provide it with the configuration that's stored inside the parameter store.

      And by doing that the agent can access the configuration.

      It can download it.

      It can configure itself as per that configuration.

      And then because we've got an attached instance role that has the permissions required, it can also inject all of the logging data for the web server and the system into CloudWatch logs.

      So the final step is to run this command.

      So this essentially runs the amazon-cloudwatch-agent-ctl command.

      And it specifies a command line option to fetch the configuration.

      And it uses -c and specifies ssm: followed by the parameter store parameter name.

      Essentially what this command does is to start up the agent, pull the config from the parameter store, make sure the agent is running and then it will start capturing that logging data and start injecting it into CloudWatch logs.
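
      For reference, that command usually takes the following shape; the parameter name after ssm: is whatever name you gave the wizard's output in the parameter store (CWAgentConfig below is just a placeholder):

        sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
            -a fetch-config -m ec2 -c ssm:CWAgentConfig -s

      Here -a fetch-config pulls the configuration, -m ec2 tells the agent it is running on EC2, and -s starts the agent once the config has been applied.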

      So at this point if it's functioning correctly, what you should be able to do is go back to the AWS console, go to services, type CloudWatch and then select CloudWatch to move to the CloudWatch console.

      Then if we go to log groups, now you might see a lot of log groups here.

      That's fine.

      Every time you apply the animals for life VPC templates, it's actually using a Lambda function which we haven't covered yet to apply an IP version six workaround, which I'll explain later in the course when we cover Lambda.

      What you should find though is if you just scroll down all the way to the bottom, you should see either one, two or three of the log groups that we've created.

      In this example on screen now, you can see that I have /var/log/httpd/error_log.

      Now these logs will start to appear when they start getting new entries and those entries are sent into CloudWatch.

      So right now you can see that I only have the error log.

      Now if you don't see access underscore log, what you can do is go back to the EC2 console, select the WordPress instance that you've created using the one-click deployment and then copy the public IP version four address into your clipboard.

      Don't use this link, just copy the IP address and then open that in a new tab.

      Now by doing that, it will generate some activity within the Apache web server and that will put some log items into the access log and that will mean that that logging information will then be injected into CloudWatch logs using the CloudWatch agent.
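
      If you'd rather generate that activity from a terminal, a few requests against the instance's public IPv4 address do the same job; the address below is only a placeholder for your own instance's IP:

        # Each request adds a line to /var/log/httpd/access_log, which the agent ships to CloudWatch Logs.
        curl -s http://203.0.113.10/ > /dev/null

        # Requesting a page that doesn't exist also produces error_log entries.
        curl -s http://203.0.113.10/missing-page > /dev/null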

      So if we move back to CloudWatch logs and then refresh, scroll down to the bottom.

      Now we can see the access underscore log file.

      Open the log stream for the EC2 instance.

      This log file details any accesses to the web server on the EC2 instance.

      You won't have a lot of entries in this.

      Likewise, if you go back to log groups and look for the error log, that will detail any errors, any accesses which weren't successfully served.

      So if you try to access a web page which doesn't exist, if there's a server error or any module errors, these will show inside this log group.

      Now also, because we're using the CloudWatch agent, we also have access to some metrics inside the EC2 instance that we otherwise would not have had.

      If we click on metrics and just drag this up slightly so we can see it, you'll see the AWS namespaces.

      So these are namespaces with metrics inside that you would have had access to before, but there'll also be the CWAgent namespace and inside here, just maybe select the image ID, instance ID, instance type name.

      Inside there, you'll see all of the metrics that you now have access to because you have the CloudWatch agent installed on this EC2 instance.

      So these are detailed operating-system-level metrics such as disk I/O read and write, and you would not have had access to these before installing the agent.

      If we select another one, image ID, instance ID, instance type CPU, we'll be able to see the CPU cores that are on this instance together with the IO weight and the user values.

      Again, these are things that you would not have had access to at this level of detail without the CloudWatch agent being installed.
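
      You can also confirm which metrics the agent is publishing from a terminal; a quick sketch using the AWS CLI and the CWAgent namespace shown in the console:

        aws cloudwatch list-metrics --namespace CWAgent --region us-east-1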

      Now I do encourage you to explore all of the different metrics that you now have access to as well as to how the log groups and log streams look with this agent installed.

      But this is the end of what I had planned for this demo lesson.

      So as always, we want to clear up all of the infrastructure that we've created within this demo lesson.

      So to do that, I want you to move back to the EC2 console, right click on this instance, go down to security, select modify IAM role, and then remove the CloudWatch role from this instance.

      You'll need to confirm that by following the instructions.

      So to detach that role, then click on services and move back to IAM, click on roles.

      And I want you to remove the CloudWatch role that you created earlier in this demo.

      So select it and then click delete role.

      You'll need to confirm that deletion.

      What we're not going to do is delete the parameter value that we've created.

      So if we go to services and then move back to systems manager, go to parameter store, because this is a standard parameter.

      This won't incur any charges.

      And we're going to be using this later on in future lessons of this course and other courses.

      So this is a standard configuration for the CloudWatch agent, which we'll be using elsewhere in the course.

      So we're going to leave this in place.

      The last piece of cleanup that you'll need to do is to go back to the CloudFormation console.

      You should have the single CW agent stack in place that you created at the start of the demo using the one click deployment.

      Go ahead and select the stack, click on delete, and then confirm that deletion.

      And once that's completed, all of the infrastructure you've used in this demo will be removed and the account will be back in the same state as it was at the start of this demo.

      Now that's everything that I wanted you to do in this demo.

      I just wanted to give you a brief overview of how to manually install the CloudWatch agent within an EC2 instance.

      Now there are other ways to perform this installation.

      You can use systems manager or bake it into AMIs or you can bootstrap it in using the process that you've seen earlier in the course.

      We're going to be using the CloudWatch agent during future demos of the course to get access to this rich metric and logging information.

      So most of the demos which follow in the course will include the CloudWatch agent configuration.

      At this point though, that is everything I wanted you to do in this demo.

      Go ahead, complete this video, and when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back and welcome to this demo where together we'll be installing the CloudWatch agent to capture and inject logging data for three different log files into CloudWatch logs as well as giving us access to some metrics inside the OS that we wouldn't have otherwise had visibility of.

      So it's going to be a really good demonstration to show you the power of CloudWatch and CloudWatch logs when combined with the CloudWatch agent.

      Now in order to do this demo you're going to need to deploy some infrastructure.

      To do so just make sure that you're logged in to the general AWS account, so the management account of the organization and as always make sure you've got the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment URL which will deploy the infrastructure that you'll be using during this demo.

      So go ahead and click on that link.

      This will take you to a quick create stack screen.

      The stack name should be pre-populated with CW agent.

      You just need to scroll all the way down to the bottom, acknowledge the capabilities and click on create stack.

      Also attached to this lesson is a lesson commands document which will contain all of the commands you'll be using during this demo lesson.

      So go ahead and open that in a new tab.

      Now you're going to need to let this cloud formation stack move into a create complete state before you continue the demo.

      So go ahead and pause the video, wait for the status to change to create complete and then you're good to continue.

      Okay, so now that this stack is in a create complete state, we're good to continue the demo.

      Now during this demo lesson you're going to be installing the cloud watch agent on an EC2 instance and this EC2 instance has been provisioned by this one-click deployment.

      So the first thing that we need to do is to move across to the EC2 console and connect to this instance.

      Once you're at the EC2 console click on instances running.

      You should see one single EC2 instance called A4L WordPress.

      Just go ahead and select this, right-click on it, select connect.

      We're going to connect into this instance using EC2 instance connect so make sure that's selected.

      Make sure also that the username is set to EC2-user and then connect into the instance.

      Now if everything's working as it should be you should see the animals for life custom login banner when you log into the instance.

      In my case I do see that and that means everything's working as expected.

      So this demonstration is going to have a number of steps.

      First we need to download the CloudWatch agent, then install it, then generate the configuration file that this install of the agent (as well as any future installs) can use, and then get the CloudWatch agent to read this config and start capturing and injecting those logs into CloudWatch Logs.

      So step one is to download and install the agent. The command to do this is inside the lesson commands document which is attached to this lesson, and it will install the agent but, crucially, it won't start it.
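      The exact command is in the lesson commands document; as a rough sketch only, on Amazon Linux 2 the install usually looks something like this (package name assumed from the standard Amazon Linux repositories):

      ```bash
      # Install the CloudWatch agent package from the Amazon Linux repositories.
      # This installs the agent but does not start or configure it.
      sudo yum install -y amazon-cloudwatch-agent
      ```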

      What we need to do before we can start the agent is to generate the config file that we'll use to configure this and any future agents. But because we also want to store that config file inside the Parameter Store, and because we also want to give this instance permissions to interact with CloudWatch Logs, before we continue we need to attach an IAM role (an EC2 instance role) to this instance, so that's the next step.

      So we need to move back to the EC2 console, click on services, and then open the IAM console, because we'll be creating an IAM role to attach to this instance. You'll need to go to roles and click create role. It'll be an AWS service role using EC2, so select EC2 and then click on next.

      We'll need to attach two managed policies to this role, and I've included the names of those managed policies in the lesson commands document. The first is CloudWatchAgentServerPolicy, so type that into the policy filter and check that box. The second is AmazonSSMFullAccess, so type that into the box, select that policy, and then scroll down and click on next. We'll call this role CloudWatchRole, so enter that and click on create role.

      Once we've done that, we can attach the role to our EC2 instance. Move back to the EC2 console, go to instances, right-click on the instance, go to security and then modify IAM role, click on the dropdown, select CloudWatchRole, which is the role that you've just created, and then click update IAM role.

      Now that we've given that instance the permissions it needs to perform the next set of steps, go ahead and connect to it again. The tab that you previously had open may have timed out, so if it doesn't respond, close it down and reopen it; if it does respond, that's fine, keep the existing tab.

      Once we're back in the terminal for the instance, we need to start the CloudWatch agent configuration wizard, and the command to do that is also in the lesson commands document attached to this lesson. So go ahead and paste that in and press enter, and that will start the configuration wizard for the CloudWatch agent.

      Now for most of these values we can accept the defaults, but we need to be careful because there are a number of them that we can't. Press enter to accept the default for the operating system (it should automatically detect Linux), press enter again (it should automatically detect that it's running on an EC2 instance), press enter to use the root user (again, that should be the default), press enter for statsd, press enter for the statsd port, press enter for the interval, press enter for the aggregation interval, press enter to monitor metrics from collectd (again, the default), press enter to monitor host metrics such as CPU and memory, press enter to monitor CPU metrics per core, press enter for the additional dimensions, press enter to aggregate EC2 dimensions, and press enter for the default resolution of 60 seconds. For the default metric config, the default will be basic; go ahead and enter 3 for advanced instead, because this captures additional operating system metrics that we might actually want. Then press enter to indicate that we're satisfied with the above config.

      Next we'll move to the log configuration part of this wizard. Press enter for the default of no, we don't have an existing CloudWatch Logs agent config to import, and press enter, the default, for yes, we do want to monitor log files. You'll be asked for the log file path to monitor, and again these are in the lesson commands document. The first log path is /var/log/secure, so enter that and press enter. You'll be asked for the log group name; the default is just the log name itself, so secure, but we're going to enter the full path, because I always prefer using the full path for the log group names for any system logs. So enter /var/log/secure again. You'll then be asked for the log stream name.
      Remembering the theory part of this lesson, I talked about how a log stream is named after the instance which is injecting those logs, and the default choice is to do exactly that, to use the instance ID, so press enter. It's here where you can specify a log group retention in days; we're just going to accept the default retention value.

      The default will be yes, we do want to specify additional log files, so press enter. The log file path for this one will be /var/log/httpd/access_log, so enter that. The log group name again defaults to the name of the actual log; we want the full path, so enter the full path again. The log stream name default is again the instance ID, which is fine, so just press enter, and go ahead and accept the default for the log group retention in days.

      Press enter again; we've got one more log file to add. This time the log file path is /var/log/httpd/error_log. Again, the log group name defaults to the name of the actual log and we want the full path, so enter the same thing again. The default choice for the log stream name is again the instance ID, which is fine, so press enter, and accept the default for the log group retention in days. Now we've finished adding log files and we don't want to monitor any additional files, so press 2, and that completes the logging section of this wizard.

      It's asking us to confirm that we're happy with this configuration file, and it's telling us that the configuration file is stored at /opt/aws/amazon-cloudwatch-agent/bin/config.json on the local file system. But we can also elect to store this JSON configuration inside the Parameter Store, and I thought that since we've previously talked about the theory of the Parameter Store and done a little bit of interaction with it, it would be useful for you to see exactly how it can be used in a more production-like setting.

      The default is to store the configuration in the Parameter Store, so we're going to allow that; press enter. It'll ask us for the parameter name to use, and the default is AmazonCloudWatch-linux, which is fine, so press enter. It'll ask us for the region to use, because the Parameter Store, like many other services, is a regional service, and the default region is the one the instance is in; it automatically detects that we're in us-east-1, which is Northern Virginia, so accept that default choice. It'll then ask us for the credentials it can use to send that configuration into the Parameter Store. These credentials will be obtained from the role that we attached to this instance in the previous step, so accept the default choice and it will use those credentials to store the configuration inside the Parameter Store.

      If we move back to the console and switch across to the Parameter Store (just type SSM to move to Systems Manager, which is the parent product of the Parameter Store, and go to the Parameter Store item on the menu on the left), we'll be able to see this single parameter, AmazonCloudWatch-linux. If we open that up and scroll down, we can see that the value is a JSON document with the full configuration of the CloudWatch agent. So we can now use this parameter to configure the agent on this EC2 instance as well as any other EC2 instances we want to deploy.
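      As a hedged sketch of how that stored configuration can then be consumed, assuming the agent is already installed and the parameter kept its default name of AmazonCloudWatch-linux, an instance with a suitable role attached could load the config and start the agent with something like this:

      ```bash
      # Fetch the agent configuration from the SSM parameter AmazonCloudWatch-linux
      # on an EC2 instance (-m ec2) and start the agent with it (-s).
      sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
        -a fetch-config \
        -m ec2 \
        -c ssm:AmazonCloudWatch-linux \
        -s
      ```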
      So if you create the CloudWatch configuration once and store it in the Parameter Store, then when you create EC2 instances at scale, as you'll see how to do later in the course when we talk about Auto Scaling groups, you can use the Parameter Store to deploy this type of configuration at scale in a secure way.

      Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side, so I wanted to add a break; it's an opportunity to take a rest or grab a coffee. Part two will continue immediately from the end of part one, so go ahead, complete the video, and when you're ready, join me in part two.

    1. The gendered nature of the classroom compromise can be subtle and is often ignored. Male students frequently control classroom conversation. They ask and answer more questions. They receive more praise for the intellectual quality of their ideas. They get criticized more publicly and harshly when they break a rule. They get help when they are confused. They are the heart and center of interaction. Watch how boys dominate the discussion about presidents in this upper elementary class. The fifth-grade class is almost out of control. "Just a minute," the teacher admonishes. "There are too many of us here to all shout out at once. I want you to raise your hands, and then I'll call on you. If you shout out, I'll pick somebody else." Order is restored. Then Stephen, enthusiastic to make his point, calls out.

      Reading this highlights the subtle yet pervasive ways in which gender dynamics play out in the classroom. The fact that male students often dominate discussions, ask and answer more questions, and receive more praise for their ideas speaks volumes about the underlying gendered expectations that shape classroom interactions. It’s easy to overlook these patterns, but they’re clearly there, reinforcing a system where boys are often seen as the primary intellectual contributors.

    1. as you unfold your unfolding changes of Planet the whole of the planet is at different levels of initiation all of those are part of a single planetary intelligence first principles run through every stage of development the people in power don't understand development so they don't build deliberately developmental processes into culture Western civilizations suppressed access to the astral plane the dream plane and as you mature in your systems thinking you're actually beginning to think in such a way way that you deconstruct the solidity of your experience because the mind understands that everything is actually into connected the AI has been coded from the level of mind that produces the problems that we were in in the first place what we're talking about with AI is the Awakening of mineral intelligence it's like the Awakening of cosmic Kundalini the fourth turning is the integration of these contemplative Technologies a platform that allows the intelligence of the earth into the technology so that it can synchronically unfold Evolution based on how things spontaneously unfold this is how you change the planet we've seen what an effect the kind of Internet Revolution and social media and this kind of technology has had and you know this is nothing compared to you know what's about to unfold so that's the and I think we're all kind of aware of that we're on the edge of of our seats of like what's going to happen What deity are we going to invoke with AI like who are we welcoming right are we welcoming who the hell is is is coming um so that's the fourth Industrial Revolution and you can think of that really as outer technology very very sophisticated outter technology now you know when when I'm referring when we refer to the fourth turning of the Dharma we're we're kind of we we're really using a a Buddhist model but it can be but it's a but it's a universal model so the first turning of the Dharma in the Buddhist tradition let's kind of universalize it it really refers to um practice related to understanding your own individual suffering so this has got to do with our friends who are listening you begin to realize that you are struggling that you are suffering and you begin to realize that there are causes of this suffering related to usually like childhood conditioning multigenerational trauma and you begin the process of cleaning up and you begin the process of mindfulness practice and developing psychological uh reflection and You Begin your journey but you but your mainly motivated because you're limping and not having much fun right and then as you mature and you do your work which can take some time or even lifetimes as you clear that Karma that negative momentum you begin to realize that there's a bigger there's a bigger game and the second turning of the Dharma relates to the fact that in any path any spiritual path at a certain point on that Journey the aspirant the soul will begin to recognize that they are part of something larger and so it isn't just about alleviating their own personal suffering it's also about alleviating Universal suffering so this is where the the bodh satra or the Christ or those kinds of archetypes about being concerned about the whole and realizing that nobody gets out of here alone so that second turning is often called the Great turning the Great Wheel but you see that in all the traditions in every tradition and even if you're not in a tradition if you're motivated for the general well-being of everybody on this planet you can say that 
you're in that second turning and there's of all kinds of practices related to opening up the heart mind's experience of interconnectivity of a unified field of compassion and of a kind of a resonant field where everything is interconnected right that's the second turning and so there's all sorts of Technologies contemplative practices that help facilitate that and then the third turning is well if if you have kind of built a vessel of mindfulness and sematic awareness that you're present to your own reactivity and you've opened up a motivation to want to be of service for the whole then the third turning was related to the um well two things first thing is particularly potent practices that speed up that process so you can think of any sort of technical practice because the first and second turnings aren't so technical they're they're more psychological things like breath work psychedelics sexual yogas all the things that are actually techniques that often people get to in the beginning but actually they're actually Advanced Techniques in the sense that they're done to speed up the process once the individual and the collective are kind of aligned so the third turning referred to those kinds of special techniques Kundalini Yoga hat yoga right the Paradox in our culture is we've we've got all of this mixed up because we don't really have a curriculum and we've kind of imported stuff from Asia but it's a bit here and it's a bit there and maybe a little bit from Native Americans or a little bit of whatever an Aboriginal I mean it's a bit of a schmoger port so that's kind of where we and that's so that's kind of where we are in particularly in terms of Asian practices and they kind of fall into those three those three camps and you can also look in terms of Western practices as well you know practices related to individual trauma practices related to developing compassion and a sense of interconnectivity and then practices related to um speeding up that process by uncovering the Shadow or blasting through blockages you know that kind of stuff so what do we mean by fourth turning uh well we need this a number of things here first thing is um we can say the fourth turning is where all of the psycho Technologies of the East and the psycho Technologies of the West meet right so East and West a meeting right for the first really for the first time I mean the process has been happening over the last you know hundred years but the understanding of of how we go about integrating these in a very deep way that is that is more recent so that's the F that's the first understanding the fourth turning the second understanding of the fourth turning is is then related to the nature of time because time is the fourth is the fourth dimension so to speak so by that what we mean is is well what time is it like and by by time I don't mean time less time I mean relative time like what how are we measuring how this is unfolding because whatever is unfolding unfolds in time so if we don't know what time it is we we so essentially understanding time is related to understanding our relationship with the planet because it's the planet that is unfolding through time and we're so disconnected to our mother to this Earth being that we have no idea what the season is I mean yes we do hear like oh we're we to changing of a cycle well what does that mean when we change cycles and when did that last happen and so that whole question around the our process is not separate actually from a planetary process so that planetary 
perspective that's that is that is is a new thing when it comes to the Traditions to recognize that the journey that we're making that Humanity that you and I are part of a Humanity that is actually has a function within the larger field of this Earth being and that all of the heavens and the hells and these Altered States of dimensions and out of body experiences and all of this stuff is happening inside of the ecosystem of this planetary being and this planetary being is is something is happening so part of that fourth turning is like recognizing that a lot synchronicity synchronization is very important CU When We synchronize you're synchronizing with that time whatever that time is so the so the other thing about what the fourth turning is about is about understanding the path that unfolds from the soul so the Traditions will talk about there being well depends on the tradition but you can cut the cake you know seven slices or 12 slices or five slices but if you cut the cake in seven slices the fourth slice the middle slice of the planes of our planet is the soul right now just like the seven chakras the mid chakra is the heart so there's a relationship between coming into the heart and maturing into Soul Consciousness and as a doctor of psychology Soul Consciousness is actually something quite precise it's not a vague kind of term it means something actual in terms of our development in terms of our Evolution and so the soul functions through pure synchronicity so and synchronicity is related to time so the fourth turning right is related to beginning to function from a different level of development from a different energetic center from a different way of learning how to do things and because the soul is also a collective being right so you know you have to have done your own individual work so to speak before you do that because otherwise you're going to have conflicts with the with the collective because you know if you're not yet individuated you're going to have issues with a collective because you have to be paradoxically an individual in order to actually fully function within a collective without being swallowed

      The same as Bohmain Theory for the Indellective.

    2. once you realize that the world isn't what you think it is it's very easy to grab onto something else and grab onto some kind of weird conspiracy well that's the thing you've been describing thus far as well sorry to in just say but like the openness requires structure

      for - quote conspiracy theories - lizard people - first stage of initiation - if reality isn't as it appears, it's easy to latch onto something else - John Churchill

    3. it isn't just about alleviating their own personal suffering it's also about alleviating Universal suffering so this is where the the bodh satra or the Christ or those kinds of archetypes about being concerned about the whole

      for - example - individual's evolutionary learning journey - new self revisiting old self and gaining new insight - universal compassion of Buddhism and the individual / collective gestalt - adjacency - the universal compassion of the bodhisattva - Deep humanity idea of the individual / collective gestalt - the Deep Humanity Common Human Denominators (CHD) as pointing to the self / other fundamental identity - Freud, Winnicott, Kline's idea of the self formed by relationship with the other, in particular the mOTHER (Deep Humanity), the Most significant OTHER

      adjacency - between - the universal compassion of the bodhisattva - Deep humanity idea of the individual / collective gestalt - the Deep Humanity Common Human Denominators (CHD) as pointing to the self / other fundamental identity - Freud, Winnicott, Kline's idea of the self formed by relationship with the other, in particular the mOTHER (Deep Humanity), the Most significant OTHER - adjacency relationship - When I heard John Churchill explain the second turning, - the Mahayana approach, - I was already familiar with it from my many decades of Buddhist teaching but with - those teachings in the rear view mirror of my life and - developing an open source, non-denominational spirituality (Deep Humanity) - Hearing these old teachings again, mixed with the new ideas of the individual / collective gestalt - This becomes an example of Indyweb idea of recording our individual evolutionary learning journey and - the present self meeting the old self - When this happens, new adjacencies can often surface - In this case, due to my own situatedness in life, the universal compassion of the bodhisattva can be articulated from a Deep Humanity perspective: - The Freudian, Klinian, Winnicott and Becker perspective of the individual as being constructed out of the early childhood social interactions with the mOTHER, - a Deep Humanity re-interpretation of "mother" to "mOTHER" to mean "the Most significant OTHER" of the newly born neonate. - A deep realization that OUR OWN SELF IDENTITY WAS CONSTRUCTED out of a SOCIAL RELATIONSHIP with mOTHER demonstrates our intertwingled individual/collective and self/other - The Deep Humanity "Common Human Denominators" (CHD) are a way to deeply APPRECIATE those qualities human beings have in common with each other - Later on, Churchill talks about how the sacred is lost in western modernity - A first step in that direction is treating other humans as sacred, then after that, to treat ALL life as sacred - Using tools like the CHD help us to find fundamental similarities while divisive differences might be polarizing and driving us apart - A universal compassion is only possible if we vividly see how we are constructed of the other - Another way to say this is that we see others not from an individual level, but from a species level

    1. Welcome back and in this demo lesson, I'm just wanting to give you some practical experience with interacting with the parameter store inside AWS.

      So to do that, make sure you're logged into the IAM admin user of the management account of the organization and you'll need to have the Northern Virginia region selected.

      Now there's also a lesson commands document linked to this lesson, which contains all of the commands that you'll need for this lesson's demonstration.

      So before we start interacting with the parameter store from the command line, we need to create some parameters.

      And the way that we do that is first move to systems manager.

      So the parameter store is actually a sub product of systems manager.

      So move over to the systems manager console.

      And once you're there, you'll need to select the parameter store from the menu on the left.

      So it should be about halfway down on the left and it's under application management.

      So go ahead and select parameter store.

      Now once you're in parameter store, the first thing that you'll need to do to remove this default welcome screen logically is to create a parameter.

      So go ahead and click on create parameter.

      Now when you create a parameter, you're able to pick between standard or advanced.

      Standard is the default and that meets most of the needs that most people have for the product.

      And you can create up to 10,000 parameters using the standard tier.

      With the advanced tier, you can create more than 10,000 parameters.

      The parameter value can be longer at eight kilobytes versus the four kilobytes of standard.

      And you do gain access to some additional features.

      But in most cases, most parameters are fine using the default, which is the standard tier.

      With the standard tier, there's no additional charge to use this up to the limit of 10,000 parameters.

      The only point at which parameter store costs any extra is if you use the faster throughput options or make use of this advanced tier.

      And we won't be doing that at any point throughout the course.

      We'll only be using standard.

      And so there won't be any extra parameter store related charges on your bill.

      Now I mentioned that a parameter is essentially a parameter name and a parameter value.

      And it's here where you set both of those.

      There's an optional description that you can use and you can set the type of the parameter.

      The options being string, string list, which is a comma separated list of individual strings and then secure string, which utilizes encryption.

      So we're gonna go ahead at this point and create some parameters that we're then going to interact with from the command line.

      So the first one we'll create is one that's called forward slash my-cat-app forward slash DB string.

      So this is the name of a parameter and it will also establish a hierarchy.

      So anytime we use forward slashers, we're establishing a hierarchy inside the parameter store.

      So imagine this being a directory structure.

      Imagine this being the root of the structure.

      Imagine my-cat-app being the top level folder and inside there, imagine that we've got a file called DB string.

      So we're going to store this, we're going to store this hierarchy and we need to set its value.

      So we'll keep this for now as a string and this is going to be the database connection string for my-cat-app.

      So we'll just enter the value that's in the lesson commands document.

      So DB dot all the cats dot com colon 3306.

      And 3306 of course is the my SQL standard port number.

      At this point, we could enter an optional description.

      So let's go ahead and do that.

      Connection string for cat application.

      So just type in a description here.

      It doesn't matter really what you type and then scroll down and hit create parameter.

      So that's created our first parameter, my-cat-app forward slash DB string.
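      For reference, the same parameter could be created from the CLI rather than the console. This is just an illustrative sketch using the name and value dictated above; the exact spelling in your lesson commands document may differ.

      ```bash
      # Create /my-cat-app/dbstring as a plain String parameter in the standard tier.
      aws ssm put-parameter \
        --name "/my-cat-app/dbstring" \
        --type String \
        --value "db.all-the-cats.com:3306" \
        --description "Connection string for cat application"
      ```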

      Now we're going to go ahead and do the same thing but for DB user.

      So click on create parameter and then just click in this name box and notice how it presents you with this hierarchy.

      So now we've got two levels of this hierarchical structure.

      We've got the my-cat-app at the top and then we've got the actual parameter that we created at the bottom here.

      So this has already established this structure.

      So let's go ahead and create a new parameter.

      This time it's going to be forward slash my-cat-app forward slash DB user.

      We'll not bother with the description for this one.

      We'll keep it at the default of standard and it will also be a string.

      And then for the value, it'll be boss cat.

      So enter all that and click on create parameter.

      Next let's create a parameter again.

      If we click in this name this time, we've got this hierarchy that's ever expanding.

      So we've got the top level at the top and then below it two additional parameters, DB string and DB user.

      And we're going to create a third one at this level.

      So this time it's going to be called forward slash my-cat-app forward slash DB password.

      This time though, instead of type string, it's going to be a secure string so that it encrypts this parameter.

      And it's going to use KMS to encrypt the parameter.

      And because it's using KMS, we'll need to select the key to use to perform the cryptographic operations.

      We can either select a key from the current account, so the account that we're in, or we can select another AWS account.

      And in either case, we'll need to pick the key ID to use and by default, it uses the product default key for SSM.

      So that's using alias forward slash AWS forward slash SSM.

      And you always have the option of clicking on this dropdown and changing it if you want to benefit from the extra functionality that you get by using a customer managed KMS key.

      This is an AWS managed one.

      So you won't be able to configure rotation and you won't be able to set these advanced key policies.

      But in most cases, you can use this default key.

      So at this point, we'll leave it as the default and we'll enter our super secret password, amazing secret password, 1337, and then click create parameter.
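      Again purely as an illustrative CLI equivalent, and assuming the default AWS managed SSM key, a SecureString parameter can be created like this (name and value as dictated above; exact spellings may differ):

      ```bash
      # Create an encrypted parameter; SSM uses the AWS managed key alias/aws/ssm
      # to encrypt the value before storing it.
      aws ssm put-parameter \
        --name "/my-cat-app/dbpassword" \
        --type SecureString \
        --key-id "alias/aws/ssm" \
        --value "amazingsecretpassword1337"
      ```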

      We're not finished yet though, click on create parameter again.

      And I like to be inclusive, so not everything in my course is going to be about cats.

      We're going to create another parameter, my-dog-app forward slash DB string.

      We'll keep standard, we'll keep the type as string and then the value for connecting to the my-dog application.

      So the DB string is going to be DB if we really must have dogs.com colon 3306.

      So type that in and then click on create parameter.

      And then lastly, we're going to create one more parameter.

      This time the name is going to be forward slash rate my lizard.

      So rate hyphen my hyphen lizard forward slash DB string.

      The tier is going to be standard again.

      The type is going to be string.

      And for the value, it will be DB.

      This is pretty random.com colon 3306.

      So type that in and then click on create parameter.

      So now we've created a total of five parameters.

      We've created the DB string, the DB user, and the DB password for the cat application.

      And then the DB string for the dog application as well as the rate my lizard application.

      So a total of five parameters and one of them is using encryption.

      So that's the DB password for the my cat application.

      So now let's switch over to the command line and interact with these parameters.

      And to keep things simple, we're going to use the cloud shell.

      So this is a relatively new feature made available by AWS.

      And this means that we don't have to interact with AWS using our local machine.

      We can do it directly from the AWS console.

      So click on the cloud shell icon on the menu on the top.

      This will take a few moments to provision because this is creating a dedicated environment for you to interact with AWS using the command line interface.

      So you'll need to wait for this process to complete.

      So go ahead and pause the video and wait until this logs you into the cloud shell environment at which point you can resume the video and we're good to continue.

      It'll say preparing your terminal and then you'll see a familiar looking shell much like you would if you were connected to a Linux instance.

      And now you'll be able to interact with AWS using the command line interface, using the credentials that you're currently logged in with.

      Now to interact with parameter store using the command line, we start by using AWS and then a space, SSM, and then a space, and then the command that we're going to use is get-parameters.

      Now by default, what we need to provide the get-parameters command with is the path to a parameter.

      So in this case, if we wanted to retrieve the database connection string for the rate my lizard application, then we could provide it with this name.

      So /rate-my-lizard/dbstring.

      And this directly maps back through to the parameter that we've just created inside the parameter store.

      So this parameter.

      So if you go ahead and type that and press enter, it's going to return a JSON object.

      Inside that JSON object is going to be a list of parameters and then for each parameter, so everything inside these inner curly braces, we're going to see the name of the parameter that we wanted to retrieve, the type of the parameter, the value of the parameter.

      In this case, db.this_is_pretty_random.com:3306, the version number of the parameter (because we can have different version numbers), the last modified date, the data type, and then the unique ARN of this specific parameter.

      And so this is an effective way that you can store and retrieve configuration information from AWS.
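      Putting that together, the command and the rough shape of its output look like this (parameter names as created above; the ARN and dates will obviously differ in your account):

      ```bash
      # Retrieve a single parameter by its full name.
      aws ssm get-parameters --names "/rate-my-lizard/dbstring"
      # Returns a JSON object of the form:
      # {
      #   "Parameters": [
      #     {
      #       "Name": "/rate-my-lizard/dbstring",
      #       "Type": "String",
      #       "Value": "db.this_is_pretty_random.com:3306",
      #       "Version": 1,
      #       ...
      #     }
      #   ],
      #   "InvalidParameters": []
      # }
      ```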

      Now we can also use the same structure of command to retrieve all of those other parameters that we stored within the parameter store.

      So if we wanted to get the db string for the my-dog-app, then we could use this command.

      And again, it would return the same data structure.

      So a JSON object containing a list of parameters and each of those parameters would contain all of this information.

      I'll clear the screen to keep this easy to see.

      We could do the same for the my-cat-app, retrieving its database connection string.

      And again, it would return the same JSON object with the parameters list.

      And then for each parameter, this familiar data structure.

      Now what you can also do, and I'm going to clear the screen before I run this, is instead of providing a specific path to a parameter.

      So if you remember, we had almost a hierarchy that we created with these different names.

      So we have the my-cat-app hierarchy and then inside there db-password, db-string, db-user.

      We have my-dog-app and inside there db-string and then rate-my-lizard and also db-string.

      So rather than having to retrieve each of these individual parameters by specifying the exact name, we can actually use get parameters by path.

      So let's demonstrate exactly how that works.

      So with this command, we're doing a get-parameters-by-path and we're specifying a path to a group of a number of parameters.

      So in this case, my-cat-app is actually going to be the first part of the path of db-password, db-string and db-user.

      So by creating a hierarchical structure inside the parameter store, we can retrieve multiple parameters at once.

      So this time we're returning a JSON structure.

      Inside this JSON structure, we have a list of parameters, and we're retrieving three different parameters: db-password, db-string and db-user.

      Now note how db-password is actually of type secure string and, by default, if we don't specify anything, we get back the encrypted version of this parameter.

      So the ciphertext version of this parameter.

      This ensures that we can interact with parameters without actually decrypting them and this offers several security advantages.
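      As a sketch, the by-path retrieval for everything under /my-cat-app looks like this; note that without any extra options the SecureString value comes back as ciphertext:

      ```bash
      # Retrieve every parameter that sits under the /my-cat-app hierarchy.
      # The SecureString (db password) is returned encrypted by default.
      aws ssm get-parameters-by-path --path "/my-cat-app/"
      ```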

      Now I've cleared the screen to make this next part easy to see because it's very important.

      Because we're using KMS to encrypt parameters, the permissions to access KMS keys to perform this decryption are separate from the permissions to access the parameter store.

      So if this user, I am admin in this case, has the necessary permissions to interact with KMS to use the keys to decrypt these parameters, then we can also ask the parameter store to perform that decryption whilst we retrieve the parameters.

      The important thing to understand is that the permissions to interact with the parameter store are separate from the permissions to interact with KMS.

      So to perform a decryption whilst we're retrieving the parameters, we would use this command.

      So it's the same command as before, aws, SSM, get-parameters-by-path, and then we're specifying the my-cat-app part of the hierarchy.

      So remember, this represents these three parameters.

      Now, if we ran just this part on its own, which was the command we previously ran, this would retrieve the parameters without performing decryption.

      But by adding this last part, this is the part that performs the decryption on any parameter types which are encrypted.

      And if you recall, one of the parameters that we created was this DB password, which is encrypted.

      So if we run this command, this time it's going to retrieve the /my-cat-app/dbpassword parameter, but it's going to decrypt it as part of that retrieval operation and return the plain text version of this parameter.
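      The decrypting variant simply adds one option, assuming the caller has permissions on both SSM and the KMS key:

      ```bash
      # Same retrieval, but SecureString values are decrypted via KMS
      # and returned as plain text.
      aws ssm get-parameters-by-path --path "/my-cat-app/" --with-decryption
      ```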

      And just to reiterate, that requires both the permissions to interact with the parameter store, as well as the permissions to interact with the KMS key that we chose when creating this parameter.

      Now, we're logged in as the IAM admin user, which has admin permissions, and so we do have permissions on both of those, on SSM and on KMS, so we can perform this decryption operation.

      Now, you're going to be using the parameter store extensively for the rest of the course and my other courses.

      It's a great way of providing configuration information to applications, both AWS and bespoke applications within the AWS platform.

      It's a much better way to inject configuration information when you're automatically building applications or you need applications to retrieve their own configuration information.

      It's much better to retrieve it from the parameter store than to pass it in using other methods.

      So we're going to use it in various different lessons as we move throughout the course.

      In this demo lesson, I just wanted to give you a brief experience of working with the product and the different types of parameters.

      But at this point, let's go ahead and clear up all of the things that we've created inside this demo lesson.

      So close down this tab, open to Cloud Shell.

      Back at the parameter store console, just go ahead and check the box at the top to select all of these existing parameters.

      If you do have any other parameters, apart from the ones that you've created within this demo lesson, then do make sure that you uncheck them.

      You should be using an account dedicated for this training, so you shouldn't have any others at this point.

      But if you do, make sure you uncheck them.

      You should only be deleting the ones for rate-my-lizard, my-dog-app, and my-cat-app.

      So make sure that all of those are selected and then click on delete to delete those parameters, and you'll need to confirm that deletion process.

      And at this point, that's everything that I wanted you to do in this lesson.

      You've cleared up the account back to the point it was at the start of this demo lesson.

      So go ahead, complete this video, and when you're ready, I'll look forward to you joining me in the next.

    1. Welcome to this demo lesson where you're going to get the experience of working with EC2 and EC2 instance roles.

      Now as you learned in the previous theory lesson, an instance role is essentially a specific type of IAM role designed so that it can be assumed by an EC2 instance.

      When an instance assumes a role which happens automatically when the two of them are linked, that instance and any applications running on that instance can gain access to the temporary security credentials that that role provides.

      And in this demo lesson you're going to get the experience of working through that process.

      Now to get started you're going to need some infrastructure.

      Make sure that you're logged in to the general AWS account, so that's the management account of the organization and as always you'll need to be within the Northern Virginia region.

      Assuming you are, there's a one click deployment link which is attached to this lesson so go ahead and click that link.

      That will take you to a quick create stack page.

      The stack name will be pre-populated with IAM role demo and all you need to do is to scroll down to the bottom, check this capabilities box and then click on create stack.

      This one click deployment will create the Animals for Life VPC, an EC2 instance and an S3 bucket.

      Now in order to continue with this demo we're going to need this stack to be in a create complete state.

      So go ahead and pause the video and then when the stack moves into a create complete status then we're good to continue.

      Okay so this stacks now in a create complete state and we're good to continue.

      So to do so go ahead and click on the services drop down and then type EC2, locate it, right click and then open that in a new tab.

      Once you're at the EC2 console click on instances running and you should be able to see that we only have the one single EC2 instance.

      Now we're going to connect to this to perform all the tasks as part of this demo.

      So right click on this instance, select connect, we're going to use EC2 instance connect.

      Just verify that the username does say EC2-user and then click on connect.

      Now the AMI that we use to launch this instance is just the standard Amazon Linux 2 AMI.

      And so if we type AWS and press enter it comes with the standard install of the AWS CLI version 2.

      Now it's important to understand that right now this instance has no attached instance role and it's not been configured in any way.

      It's the native Amazon Linux 2 AMI that's been used to launch this instance.

      And so if we attempt to interact with AWS using the command line utilities, for example by running aws s3 ls, the CLI tools will tell us that there are no credentials configured on this instance and we'll be prompted to provide long term credentials using AWS Configure.

      Now this is the method that you've used to provide credentials to your own installed copy of the CLI tools running on your local machine.

      So you've used AWS Configure and set up two named configuration profiles.

      And the way that you provide these with authentication information is using access keys.

      Now this instance has no access keys configured on it and so it has no method of interacting with AWS.

      We could use AWS Configure and provide these credentials but that's not best practice for an EC2 instance.

      What we're going to do instead is use an instance role.

      So to do that you're going to need to move back to the AWS console.

      And once you're there click on services and in the search box type IAM.

      We're going to move to the IAM console so right click and open that in a new tab.

      As I mentioned earlier an instance role is just a specific type of IAM role.

      So we're going to go ahead and create an IAM role which our instance can assume.

      So click on roles and then we're going to go ahead and click on create role.

      Now the create role process presents us with a few common scenarios.

      We can create a role that's used by an AWS service, another AWS account, a web identity or a role designed for SAML 2.0 Federation.

      In our case we want a role which can be assumed by an AWS service specifically EC2.

      So we'll select the type of trusted entity to be an AWS service then we'll click on EC2 and then we'll click on next.

      Now for the permissions in this search box just go ahead and type S3 and we're looking for the Amazon S3 read only access.

      So there's a managed policy that we're going to associate with this role.

      So check the box next to Amazon S3 read only access and then we'll click on next.

      And then under role name we're going to call this role A4L instance role.

      So it's easy to distinguish from any other roles we have in the account.

      So go ahead and enter that and click on create role.

      Now, as I mentioned in the theory lesson about instance roles, when we do this from the user interface it actually creates a role and an instance profile of the same name, and it's the instance profile that we're going to be attaching to the EC2 instance.

      Now from a UI perspective both of these are the same thing.

      You're not exposed to the role and the instance profile as separate entities but they do exist.

      So now we're going to move back to the EC2 console and remember currently this instance has no attached instance role and we're unable to interact with AWS using this EC2 instance.

      To attach an instance role using the console UI right click, go down to security and then modify IAM role.

      Select that and we'll need to choose a new IAM role.

      You have the option of creating one directly from this screen but we've already created the one that we want to apply.

      So click in the drop down and select the role that you've just created.

      In this case A4L instance role.

      So select that and then click on save.

      Now if we select the instance and then click on security you'll be able to confirm that it does have an IAM role attached to this instance.

      So this is the instance role that this EC2 instance can now utilize.

      So now we're going to interact with this instance again from the operating system.

      Now if it's been a few minutes since you've last used instance connect you might find when you go back it appears to have frozen up.

      If that's the case that's no problem just close down these tabs that you've got connected to that instance.

      Right click on the instance again, select connect, make sure the username is EC2-user and then click on connect.

      And this will reconnect you to that instance.

      Now if you recall, last time we were connected we attempted to run aws s3 ls and the command line tools informed us that we had no credentials configured.

      Let's attempt that process again.

      AWS space S3 space LS and press enter.

      And now because we have the instance role associated with this EC2 instance the command line tools can use the temporary credentials that that role generates.

      Now the way that this works, and I'm going to demonstrate it using the curl utility, is that these credentials are actually provided to the command line tools via the instance metadata.

      So this is actually the metadata path that the command line tools use in order to get the security credentials.

      So the temporary credentials that a role provides when it's assumed.

      So if I use this command and press enter you'll see that it's actually using this role name.

      So you'll see a list of any roles which are associated with this instance.

      If we use the curl command again but this time on the end of security credentials we specify the name of the role that's attached to this instance and press enter.

      Now we can see the credentials that command line tools are using.

      So we have the access key ID the secret access key and the token and all of these have been generated by this EC2 instance assuming this role because these are temporary credentials.

      They also have an expiry date.

      So in my case here we can see that these credentials expire on the 7th of May 2022 at 05:52:47 UTC.
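      For reference, the two curl commands look roughly like this. The role name is assumed to be A4LInstanceRole here, and on instances that enforce IMDSv2 you would first need to request a session token and pass it as a header:

      ```bash
      # List the roles attached to this instance via the instance metadata service.
      curl http://169.254.169.254/latest/meta-data/iam/security-credentials/

      # Retrieve the temporary credentials generated for that role
      # (AccessKeyId, SecretAccessKey, Token and Expiration).
      curl http://169.254.169.254/latest/meta-data/iam/security-credentials/A4LInstanceRole
      ```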

      And that really is all I wanted to show you in this demo lesson about instance roles.

      Essentially you just need to create an instance role and then attach it to an instance.

      And once you do, that instance is capable of assuming that role and gaining access to temporary credentials, and then any applications installed on that instance, including the command line utilities, are capable of interacting with AWS using those credentials.

      Now the process of renewing these credentials is automatic.

      So as long as the application that's running on the instance periodically checks the metadata service, it will always have access to up to date and valid credentials.

      Once this expiry date closes in, and certainly once it's in the past, the EC2 service renews these credentials, and a new valid set of credentials will automatically be presented via the metadata service to any applications running on this EC2 instance.

      Now just one more thing that I do want to show you before we finish up with this demo lesson.

      And I have made sure that I've attached this link to the lesson.

      This link shows the configuration settings and precedence that the command line utilities use in order to interact with AWS.

      So whenever you use the command line interface, each of these is checked in order.

      First, it looks at command line options.

      Then it looks at environment variables to check whether any credentials are stored within environment variables.

      Then it checks the command line interface credentials file.

      So this is stored within the dot AWS folder within your home folder and then a file called credentials.

      Next, it checks the CLI configuration file.

      Next, it checks container credentials.

      And then finally, it checks instance profile credentials.

      And these are what we've just demonstrated.

      Now, this does mean that if you manually configure any long term credentials for the CLI tools as part of using AWS Configure, then they will be used as a priority over an instance profile.

      But you can use an instance profile and attach this to many different instances as a best practice way of providing them with access into AWS products and services.

      So that's really critical to understand.

      But at this point, that is everything that I wanted to cover in this demo lesson.

      And all that remains is for us to tidy up the infrastructure that we've used as part of this demo.

      So to tidy up this infrastructure, I want you to go back to the IAM console.

      I want you to click on roles and I want you to delete the A4L instance role that you've just created.

      So select it and then click on delete role.

      Once you've deleted that role, go back to the EC2 console, click on instances, right click on public EC2, go to security, modify IAM role.

      Now, even though you've deleted the IAM role, note how it's still listed.

      That's because this is an instance profile.

      This is showing the instance profile that gets created with the role, not the role itself.

      So what we're going to do, and I just wanted to do this to demonstrate how this works, we're just going to select no IAM role and then click on save.

      We'll need to confirm that.

      So to do that, we need to type detach into this box and then confirm it by clicking detach.

      That removes the instance role entirely from the instance.

      And then we can finish up the tidy process by moving back to the cloud formation console.

      Selecting the IAM role demo stack and then clicking on delete and confirming that deletion.

      And that will put the account back in the same state as it was at the start of this demo lesson.

      So this has been a very brief demo.

      I just wanted to give you a little bit of experience of working with instance roles.

      So that's EC2 instances combined with IAM roles in order to give an instance and any applications running on that instance, the ability to interact with AWS products and services.

      And this is something that you're going to be using fairly often throughout the course, specifically when you're configuring any AWS services to interact with any other services on your behalf.

      That's a common use case for using IAM roles and we'll be using instance roles extensively to allow our EC2 instances to interact with other AWS products and services.

      But at this point, that is everything that I wanted to cover in this demo lesson.

      So go ahead, complete the video and when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back and I've mentioned a few times now within the course that I am roles are the best practice way that AWS services can be granted permissions to other AWS services on your behalf.

      Allowing a service to assume a role grants the service the permissions that that role has.

      EC2 instance roles are roles that an instance can assume and anything running in that instance has the permissions that that role grants and there is some detail involved which matters so let's take a look at how this feature of EC2 works architecturally.

      Instance role architecture isn't really all that complicated it starts off with an I am role and that role has a permissions policy attached to it so whoever assumes the role gets temporary credentials generated and those temporary credentials give the permissions that that permissions policy would grant.

      Now an EC2 instance role allows the EC2 service to assume that role which means there's an EC2 instance itself can assume it and gain access to those credentials but we need some way of delivering those credentials into the EC2 instance so that applications running inside that instance can use the permissions that the role provides so there's an intermediate piece of architecture the instance profile and this is a wrapper around an I am role and the instance profile is the thing that allows the permissions to get inside the instance when you create an instance role in the console an instance profile is created with the same name but if you use the command line or cloud formation you need to create these two things separately when using the UI and you think you're attaching an instance role direct to an instance you're not you're attaching an instance profile of the same name it's the instance profile that's attached to an EC2 instance.

      We know by now that when IAM roles are assumed, you're provided with temporary security credentials which expire, and these credentials grant permissions based on the role's permissions policy. Well, inside an EC2 instance these credentials are delivered via the instance metadata.

      An application running inside the instance can access these credentials and use them to access AWS resources such as S3.

      One of the great things about this architecture is that the credentials available inside the metadata are always valid. EC2 and the secure token service liaise with each other to ensure that the credentials are always renewed before they expire, so as long as your application inside the EC2 instance keeps checking the metadata, it will never be in a position where it has expired credentials.

      So to summarize, when you use EC2 instance roles the credentials are delivered via the instance metadata. Specifically, inside the metadata there's an IAM tree, within that a security credentials part, and within that the role name, and if you access this you'll get access to the temporary security credentials. They're always rotated and always valid as long as the instance role remains attached to the instance, so anything running in the instance will always have access to valid credentials. Applications running in the instance of course need to be careful about caching these credentials; they should check the metadata before the credentials expire, or do so periodically.

      You should always use roles where possible; I'm going to keep stressing that throughout the course because it's important for the exam. Roles are always preferable to storing long-term credentials, such as access keys, in an EC2 instance. In fact, it's never a good idea to store long-term credentials such as access keys anywhere that isn't securely managed, for example on your local machine.

      In fact, the AWS tooling, such as the CLI tools, will use instance role credentials automatically, so as long as the instance role is attached to the EC2 instance, any command-line tools running inside that instance can automatically make use of those credentials.
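
      The same applies to the AWS SDKs: their default credential provider chain falls back to instance profile credentials when nothing else is configured. A small illustrative sketch using boto3 (the bucket listing is just an arbitrary call, and it assumes the attached role grants s3:ListAllMyBuckets):

      ```python
      import boto3

      # No access keys anywhere in the code or environment: when this runs on an EC2
      # instance with an instance profile attached, boto3's default credential chain
      # picks up the temporary credentials from the instance metadata automatically.
      s3 = boto3.client("s3")

      for bucket in s3.list_buckets()["Buckets"]:
          print(bucket["Name"])
      ```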

      So at this point, that's everything I wanted to cover. Thanks for watching. Go ahead and complete this video, and when you're ready, join me in the next lesson.

    1. Welcome back and in this brief demonstration you'll have the opportunity to create an EC2 instance with WordPress bootstrapped in ready and waiting to be configured.

      But this time you'll be using an enhanced CloudFormation template which uses CFN init and creation policies rather than the simple user data that you used in the previous demonstration.

      To get started, just make sure you are logged in to the general AWS account as the IAM admin user, and as always, make sure you've got the Northern Virginia region selected.

      Now attached to this lesson are two one-click deployment links.

      Go ahead and use the first one which is the VPC link.

      Everything should be pre-populated.

      All you'll need to do is scroll down to the bottom, check the acknowledgement box and click on create stack.

      Once it's moved into a create complete status you can resume and we'll carry on with the demo.

      I'll assume that that's now in a create complete status and now we're going to apply another CloudFormation template.

      This is the template that we'll be using.

      It's just an enhancement of the one that you used in the previous lesson.

      This time, instead of using a set of procedural instructions, so a script that is passed into the user data, this uses the CFN init system and creation policies.

      So let's have a look at exactly what that means.

      If I scroll down and locate the EC2 instance logical resource, then here we've got this creation policy.

      This means that CloudFormation is going to create a hold point.

      It's not going to allow this resource to move into a create complete status until it receives a signal.

      And it's going to wait 15 minutes for this signal.

      So a timeout of 15 minutes.

      Now scrolling down and looking at the user data, the only thing we do in a procedural way is use the CFN init command to begin the desired-state configuration.

      That will either succeed or not.

      And based on that we use the CFN signal command to pass that success or failure state back to the CloudFormation stack.

      And that's what uses this creation policy.

      So the creation policy will wait for a signal and it's this command which provides that signal, either a success signal or a failure signal.
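
      Under the hood, cfn-signal is a helper that calls CloudFormation's SignalResource API. If you wanted to send the same success or failure signal yourself, a minimal boto3 sketch would look like the following (the stack and logical resource names here are placeholders, not the ones from this demo's template):

      ```python
      import boto3

      cfn = boto3.client("cloudformation")

      # Tell CloudFormation that the resource guarded by the creation policy has
      # finished bootstrapping. Status is either SUCCESS or FAILURE; UniqueId just
      # needs to be unique per signal (the instance ID is a common choice).
      cfn.signal_resource(
          StackName="my-example-stack",        # placeholder stack name
          LogicalResourceId="EC2Instance",     # placeholder logical resource ID
          UniqueId="i-0123456789abcdef0",      # e.g. the instance ID
          Status="SUCCESS",
      )
      ```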

      Now what we're interested in specifically for this demo lesson is this CFN init command.

      So this is the thing that pulls the desired state configuration from the metadata of this logical resource.

      I'll talk all about that in a second.

      But it pulls that down by being given the stack ID and it uses this substitution command.

      So instead of this variable name, the stack ID variable name, being passed into the instance, what's actually passed is the actual stack ID.

      And then likewise, instead of this variable name, AWS::Region, what's passed is the actual region that this template is being applied into.

      So that's what the substitution function does.

      It replaces any variable or parameter names with the values of those variables or parameters.

      So the CFN init process is then able to consult the CloudFormation stack and retrieve the configuration information.

      That's all stored in the metadata section of this logical resource.

      Now I just want to draw your attention to this --configsets wordpress_install option.

      This tells us what set of instructions we want CFN init to run.

      So if I just expand the metadata section here, we've got one or more config sets defined.

      In this case, we've only got the one, which is wordpress_install.

      And this config set runs five individual items, one after the other.

      And these are called config keys.

      So install CFN, software install, configure instance, install wordpress and configure wordpress.

      Now these reference the config keys defined below.

      So you'll see the same names: install CFN, software install, configure instance, install wordpress and configure wordpress.

      You'll recognize a lot of the commands used because they're the same commands that install and configure wordpress.

      So in the software install config key, we're using the DNF package manager to install various software packages that we need for this installation, such as WGet, MariaDB, the Apache web server and various other utilities.

      Then another part is services and we're specifying that we want these services to be enabled and to be running.

      So this means that the service will be set to start up on instance boot and it will make sure that it's running right now.

      The next config key is configure instance.

      The files component of this can create files with a certain content.

      So we're creating a file called /etc/update-motd.d/40-cow.

      This is the part that we had to do manually before and this is the thing that adds the cow say banner.

      Then we're running some more procedural commands to set the database root password and to update this banner.

      Then we've got install wordpress, which uses a sources option to expand whatever is specified here into this directory.

      So this automatically handles the download and the unzipping and untarring of this archive into this folder, and it can even do that with authentication if needed.

      We're creating another file this time to perform the configuration of wordpress and another file this time to create the database for wordpress.

      Then finally we've got the configure wordpress which fixes up the permissions and creates these databases.

      So this is doing the same thing as the procedural example in the previous demo.

      Instead of running all of these commands one by one, this is just using desired state.

      Now there is one more thing that I wanted to point out right at the top.

      This is the part that configures CFN init to keep watching the logical resource configuration inside the CloudFormation stack.

      And if it notices that the metadata for the EC2 instance inside the stack changes, then it will run CFN init again.

      Remember how in the theory lesson I mentioned that this process could cope with stack updates.

      So it doesn't only run once like user data does.

      Well, this is how it does that.

      This configures this automatic update that keeps an eye on the CloudFormation stack and reruns CFN init whenever any changes occur.

      This is well beyond what you need for the associate exam.

      I just want you to be aware of what this is and how it works.

      Essentially we're setting up a process called cfn-hup and making it watch the CloudFormation stack for any configuration changes.

      And then we're setting it up so that the cfn-hup process is enabled and running, so that it can watch the resource configuration constantly.

      So that's it for this template.

      What we'll do now is apply it.

      So go ahead and click on the second one click deployment link attached to this lesson.

      It should be called A4L EC2 CFN-init.

      So click that link.

      All you'll need to do is scroll down to the bottom and then click on create stack.

      Now this time remember we're using a creation policy.

      So CloudFormation is not going to move this logical resource into a create complete status simply when EC2 signals that the launch process has completed.

      Instead it's going to wait until the instance itself signals the successful completion of the CFN init process.

      So because we're using this creation policy, it's going to hold until the instance operating system, using CFN-signal, provides a signal to CloudFormation to say yep, everything's okay, and at that point the logical resource will move into create complete.

      So that's going to take a couple of minutes.

      The EC2 instance will need to actually launch itself and pass its status checks, and then the CFN init process will run and perform all of the configuration required. Then, assuming the status code of that is okay, CFN-signal will take that status code and respond to the CloudFormation stack with a successful completion. The process will then move on: CloudFormation will mark the particular resource complete, and the stack is complete.

      Now that will take a few minutes, so just keep hitting refresh and you should see the status update after two to three minutes. Go ahead and pause the video and resume it once your stack moves into the create complete status.
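
      As an aside, if you'd rather wait programmatically than refresh the console, a boto3 waiter polls the stack state for you and returns once it reaches CREATE_COMPLETE (or raises if creation fails). The stack name below is a placeholder for whatever your one-click deployment created; this isn't part of the demo steps.

      ```python
      import boto3

      cfn = boto3.client("cloudformation", region_name="us-east-1")

      # Polls DescribeStacks until the stack reaches CREATE_COMPLETE,
      # raising a WaiterError if creation fails or rolls back.
      waiter = cfn.get_waiter("stack_create_complete")
      waiter.wait(StackName="EC2CFNINIT")  # placeholder stack name

      print("Stack is CREATE_COMPLETE")
      ```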

      And there we go. At this point the stack has moved into the create complete status, and I just want to draw your attention to this line.

      You won't have seen this before.

      This is the line where our EC2 instance has run the CFN init process successfully, and then the CFN signal command has taken that success signal and delivered it to the CloudFormation stack.

      So this is the signal that CloudFormation was waiting for before moving this resource into a create complete status, and that's what's needed before the stack itself could move into a create complete status.

      So now we explicitly know that the configuration of this instance has actually been completed.

      So we're not relying on EC2 telling us that the instance status is now running with two out of two checks.

      Instead, the operating system itself, through the CFN init process completing successfully and the CFN signal process, has explicitly indicated to CloudFormation that the whole process has been completed.

      So if we move across to the EC2 console we should be able to connect to the instance exactly as we've done before.

      Look for the running instance and select it.

      Copy the public IPv4 address and open that in a new tab.

      All being well you should see the familiar WordPress installation screen.

      Now right-click on that instance and select Connect.

      Go to Instance Connect and hit Connect. That will connect you into the instance, and you should be greeted by the cow-themed login banner.

      This time, if we use curl to show us the contents of the user data, it's only a small number of lines, because the only things that run are the CFN init process and the CFN signal process.

      Notice, though, how all of these variable names have been replaced with their values, so the stack ID and the region.

      So this is how it knows to communicate with the right stack in the right region inside CloudFormation.

      If we do a cd /var/log and then do a listing, we've still got these original two files: cloud-init.log and cloud-init-output.log.

      So these are primarily associated with the user data output.

      But now we've also got these new log files, such as cfn-init-cmd.log, and that is an output of the CFN init process.

      So if we cat that, so sudo cat and then the name of that log file, this will show us the output of the CFN init process itself.

      So we can see each of the individual config keys running and what individual operations are being performed inside each of those keys.

      So it's a more complex but a more powerful process.

      And at this point that's everything I wanted to cover.

      It was just to give you practical exposure to an alternative to raw user data, and that was cfn-init.

      It's a much more powerful system, especially when combined with CloudFormation creation policies, which allow us to pause the progress of a CloudFormation stack, waiting for the resource itself to explicitly say: yes, I've finished all of my bootstrapping, you're good to carry on. And that's done using the cfn-signal command.

      Now at this point, let's just clean up the account. Move back to CloudFormation.

      Once you're there, go ahead and delete the EC2 CFN init stack and wait for that process to complete. Once you've done that, go ahead and delete the A4L VPC stack, and that will return the AWS account to the state it was in at the start of this demo.

      At that point, thanks for doing this demo; I hope it was useful.

      You can go ahead and complete this video now and when you're ready you can join me in the next.

    1. Reviewer #1 (Public review):

      Summary:

      Madigan et al. assembled an interesting study investigating the role of the MuSK-BMP signaling pathway in maintaining adult mouse muscle stem cell (MuSC) quiescence and muscle function before and after trauma. Using a full body and MuSC-specific genetic knockout system, they demonstrate that MuSK is expressed on MuSCs and that eliminating the BMP binding domain from the MuSK gene (i.e., MuSK-IgG KO) in mice at homeostasis leads to reduced PAX7+ cells, increased myonuclear number, and increased myofiber size, which may be due to a deficit in maintaining quiescence. Additionally, after BaCl2 injury, MuSK-IgG KO mice display accelerated repair after 7 days post-injury (dpi) in males only. Finally, RNA profiling using nCounter technology showed that MuSK-IgG KO MuSCs express genes that may be associated with the activated state.

      Strengths:

      Overall, the biology regulating MuSC quiescence is still relatively unexplored, and thus, this work provides a new mechanism controlling this process. The experiments discussed in the paper are technically sound with great complementary mouse models (full body versus tissue-specific mouse KO) used to validate their hypothesis. Additionally, the paper is well written with all the necessary information in the legends, methods, and figures being reported.

      Weaknesses:

      While the data largely supports the author's conclusions, I do have a few points to consider when reading this paper.

      (1) For Figure 1, while I appreciate the author's confirming MuSK RNA and protein in MuSCs, I do think they should (a) quantify the RNA using qPCR and (b) determine the percentage of MuSCs expressing MuSK protein in their single fiber system in multiple biological replicates. This information will help us understand if MuSK is expressed in 1/10 or 10/10 PAX7-expressing MuSCs. Also, it will help place their phenotypes into the right context, especially when considering how much of the PAX7-pool is expressing MuSK from the beginning.

      (2) Throughout the paper the argument is made that MuSK-IgG KO (full body and MuSC-specific KOs) are more activated and/or break quiescence more readily, but there is no attempt to test directly. Therefore, the authors should consider measuring the activation dynamics (i.e., break from quiescence) of MuSCs directly (EdU assays or live-cell imaging) in culture and/or in muscle in vivo (EdU assays) using their various genetic mouse models.

      (3) For Figure 2, given that mice are considered adults by 3 months, it is really surprising how just two months later they are starting to see a phenotype (i.e., reduced PAX7-cells, increased number of myonuclei, and increased myofiber size)-which correlates with getting older. Given that aged MuSCs have activation defects (i.e., stuck somewhere in the quiescence cycle), a pending question is whether their phenotype gets stronger in aged mice, like 18-24 months. If yes, the argument that this pathway should be used in a therapeutic sense would be strengthened.

      (4) For Figure 4, the same question as in point (2), the increase in fiber sizes by 7dpi in MuSK-IgG KO males is minimal (going from ~23 to 27 by eye) and no difference at a later time point when compared to WT mice. However, if older mice are used (18-24 months old) - which are known to have repair deficits-will the regenerative phenotype in MuSK-IgG KO mice be more substantial and longer lasting?

      (5) For Figure 6, this gene set is not glaringly obvious as being markers of MuSC activation (i.e., no MyoD), so it's hard for the readers to know if this gene set is truly an activation signature. Also, the Shcherbina et al. data presented as a column with * being up or down (i.e. differentially expressed) is not helpful, since you don't know whether those mRNAs in that dataset are going up with the activation process. Addressing this point as well as my point (1) will further strengthen the author's conclusions about the MuSK-IgG KO MuSCs not being able to maintain quiescence as effectively.

    2. Author response:

      Reviewer #1 (Public review):

      Summary:

      Madigan et al. assembled an interesting study investigating the role of the MuSK-BMP signaling pathway in maintaining adult mouse muscle stem cell (MuSC) quiescence and muscle function before and after trauma. Using a full body and MuSC-specific genetic knockout system, they demonstrate that MuSK is expressed on MuSCs and that eliminating the BMP binding domain from the MuSK gene (i.e., MuSK-IgG KO) in mice at homeostasis leads to reduced PAX7+ cells, increased myonuclear number, and increased myofiber size, which may be due to a deficit in maintaining quiescence. Additionally, after BaCl2 injury, MuSK-IgG KO mice display accelerated repair after 7 days post-injury (dpi) in males only. Finally, RNA profiling using nCounter technology showed that MuSK-IgG KO MuSCs express genes that may be associated with the activated state.

      Strengths:

      Overall, the biology regulating MuSC quiescence is still relatively unexplored, and thus, this work provides a new mechanism controlling this process. The experiments discussed in the paper are technically sound with great complementary mouse models (full body versus tissue-specific mouse KO) used to validate their hypothesis. Additionally, the paper is well written with all the necessary information in the legends, methods, and figures being reported.

      Weaknesses:

      While the data largely supports the author's conclusions, I do have a few points to consider when reading this paper.

      (1) For Figure 1, while I appreciate the author's confirming MuSK RNA and protein in MuSCs, I do think they should (a) quantify the RNA using qPCR and (b) determine the percentage of MuSCs expressing MuSK protein in their single fiber system in multiple biological replicates. This information will help us understand if MuSK is expressed in 1/10 or 10/10 PAX7-expressing MuSCs. Also, it will help place their phenotypes into the right context, especially when considering how much of the PAX7-pool is expressing MuSK from the beginning.

      The quantification is a reasonable point; however, we don’t believe that this information is necessary for supporting the interpretation of the findings.

      We agree that determining the proportion of SCs that express MuSK is useful information and we will address this question in the Revision.

      (2) Throughout the paper the argument is made that MuSK-IgG KO (full body and MuSC-specific KOs) are more activated and/or break quiescence more readily, but there is no attempt to test directly. Therefore, the authors should consider measuring the activation dynamics (i.e., break from quiescence) of MuSCs directly (EdU assays or live-cell imaging) in culture and/or in muscle in vivo (EdU assays) using their various genetic mouse models

      We agree that this point is of interest and we plan to address it in future studies.

      (3) For Figure 2, given that mice are considered adults by 3 months, it is really surprising how just two months later they are starting to see a phenotype (i.e., reduced PAX7-cells, increased number of myonuclei, and increased myofiber size)-which correlates with getting older. Given that aged MuSCs have activation defects (i.e., stuck somewhere in the quiescence cycle), a pending question is whether their phenotype gets stronger in aged mice, like 18-24 months. If yes, the argument that this pathway should be used in a therapeutic sense would be strengthened.

      We agree that the potential role of the MuSK-BMP pathway in aged SCs is of import and could shed new light on SC dynamics in this context. However, we note that the activation observed between 3-5 months results in improved muscle quality (increased myofiber size and grip strength), which is the opposite of what is observed with aging. We agree that activating the MuSK-BMP pathway in aged animals has the potential to activate SCs, promote muscle growth and counter sarcopenia. Pharmacological and genetic approaches to test that question are underway, but given the time frame they are beyond the scope of the current manuscript.

      (4) For Figure 4, the same question as in point (2), the increase in fiber sizes by 7dpi in MuSK-IgG KO males is minimal (going from ~23 to 27 by eye) and no difference at a later time point when compared to WT mice. However, if older mice are used (18-24 months old) - which are known to have repair deficits-will the regenerative phenotype in MuSK-IgG KO mice be more substantial and longer lasting?

      Again, an interesting point that will be addressed in future studies. 

      (5) For Figure 6, this gene set is not glaringly obvious as being markers of MuSC activation (i.e., no MyoD), so it's hard for the readers to know if this gene set is truly an activation signature. Also, the Shcherbina et al. data presented as a column with * being up or down (i.e. differentially expressed) is not helpful, since you don't know whether those mRNAs in that dataset are going up with the activation process. Addressing this point as well as my point (1) will further strengthen the author's conclusions about the MuSK-IgG KO MuSCs not being able to maintain quiescence as effectively.

      We agree that this Figure should include more information and be formatted in a way that more readily conveys the point. We will provide these changes in the Revision.

      Reviewer #2 (Public review):

      Summary:

      The work by Madigan et al. provides evidence that the signaling of BMPs via the Ig3 domain of MuSK plays a role during muscle postnatal development and regeneration, ultimately resulting in enhanced contractile force generation in the absence of the MuSK Ig3 domain. They demonstrate that MuSK is expressed in satellite cells initially post-isolation of muscle single fibers both in WT and whole-body deletion of the BMP binding domain of MuSK (ΔIg3-MuSK). In developing mice, ΔIg3-MuSK results in increased muscle fiber size, a reduction in Pax7+ cells, and increased muscle contractile force in 5-month-old, but not 3-month-old, mice. These data are complemented by a model in which the kinetics of regeneration appear to be accelerated at early time points. Of note, the authors demonstrate muscle tibialis anterior (TA) weights and fiber feret are increased during development in a Pax7CreERT2;MuSK-Ig3loxp/loxp model in which satellite cells specifically lack the MuSK BMP binding domain. Finally, using Nanostring transcriptional profiling, the authors identified a short list of genes that differ between the WT and ΔIg3-MuSK SCs. These data provide the field with new evidence of signaling pathways that regulate satellite cell activation/quiescence in the context of skeletal muscle development and regeneration.

      On the whole, the findings in this paper are well supported, however additional validation of key satellite cell markers and data analysis need to be conducted given the current claims.

      (1) The Pax7CreERT2;MuSK-Ig3loxp/loxp model is the appropriate model to conduct studies to assess satellite cell involvement in MuSK/BMP regulation. Validation of changes to muscle force production is currently absent using this model, as is quantification of Pax7+ tdT+ cells in 5-month muscle. Given that MuSK is also expressed on mature myofibers at NMJs, these data would further inform the conclusions proposed in the paper.

      As reported in the manuscript, we observed increased myofiber size, length and TA weight in the conditional mutants at five months of age. We did not assess grip strength in those experiments. 

      We demonstrated highly efficient MuSK Ig3-domain recombination by PCR analysis of FACS-sorted SCs from these conditional mutants (Supplemental Fig. S3). However, while we checked for Pax7+ tdT+ cells in 5-month SCs, we did not quantify this finding.

      (2) All Pax7 quantification in the paper would benefit from high magnification images including staining for laminin demonstrating the cells are under the basal lamina.

      The point is reasonable; we observed that these Pax7+ cells were under the basal lamina, but we did not acquire images at higher magnification.

      (3) The nanostring dataset could be further analyzed and clarified. In Figure 6b, it is not initially apparent what genes are upregulated or downregulated in young and aged SCs and how this compares with your data. Pathway analysis geared toward genes involved in the TGFb superfamily would be informative.

      We agree that further analysis and information regarding the data in this Figure is warranted and we will include it in the Revision.

      (4) Characterizing MuSK expression on perfusion-fixed EDL fibers would be more conclusive to determine if MuSK is expressed in quiescent SCs. Additional characterization using MyoD, MyoG, and Fos staining of SCs on EDL fibers would help inform on their state of activation/quiescent.

      These are all valid points that we intend to address in future experiments.

      (5) Finally, the treatment of fibers in the presence or absence of recombinant BMP proteins would inform the claims of the paper.

      As reported in Jaime et al. (2024), we have extensively characterized the differences in BMP response in both cultured WT and ΔIg3-MuSK myofibers and myoblasts at the level of signaling (pSMAD 1/5/8 nuclear localization and phosphorylation) and gene expression (qRT-PCR).

      Reviewer #3 (Public review):

      Summary:

      Understanding the molecular regulation of muscle stem cell quiescence. The authors evaluated the role of the MuSK-BMP pathway in regulating adult SC quiescence by the deletion of the BMP-binding MuSK Ig3 domain ('ΔIg3-MuSK').

      Strengths:

      A novel mouse model to interrogate muscle stem cell molecular regulators. The authors have developed a nice mouse model to interrogate the role of MuSK signaling in muscle stem cells and myofibers and have unique tools to do this.

      Weaknesses:

      Only minor technical questions remain and there is a need for additional data to support the conclusions.

      (1) The authors claim that dIg3-MuSK satellite cells break quiescence and start fusing, based on the reduction of Pax7+ and increase of nuclei/fiber (Fig 2-3), and maybe the gene expression (Fig6). However, direct evidence is needed to support these findings such as quantifying quiescent (Pax7+Ki67-) or activated (Pax7+Ki67+) satellite cells (and maybe proliferating progenitors Pax7-Ki67+) in the dIg3-MuSK muscle.

      We believe that the data presented strongly supports the conclusion that the SCs break quiescence, activate, and fuse into myofibers in uninjured muscle.  As noted above, the mechanistic studies suggested are of interest and we will address them in future work.

      (2) It is not clear if the MuSK-BMP pathway is required to maintain satellite cell quiescence, by the end of the regeneration (29dpi), how Pax7+ numbers are comparable to the WT (Fig4d). I would expect to have less Pax7+, as in uninjured muscle. Can the authors evaluate this in more detail?

      The reviewer makes an important point. Our current interpretation of the findings is that quiescence is broken in SCs in uninjured muscle, but that ‘stemness’ is preserved, allowing for efficient muscle regeneration and restoration of the SC pool. Whether such properties reflect SC heterogeneity (as suggested in the comments of the other reviewers) and/or different states along a continuum is of particular interest and will be the focus of future studies. 

      (2) Figure 4 claims that regeneration is accelerated, but to claim this at a minimum they need to look at MYH3+ fibers, in addition to fiber size.

      We did not examine MYH3+ fibers in this study. However, we did observe an increase in Pax7+ cells at 5dpi (male and female) as well as larger myofiber size (Feret diameter) at 7dpi in the male animals. In addition, the panels in Figure 4 b,c (H&E and laminin, respectively) showing accelerated differentiation were selected to be representative of the experimental group.

      (3) The Pax7 specific dIg3-MuSK (Fig5) is very exciting. However, it will be important to quantify the Pax7+ number. Could the authors check the reduction of Pax7+ in this model since it would confirm the importance of MuSK in quiescence?

      In Figure 5c, we assessed the number of Pax7+ cells in the conditional mutant during the course of regeneration (at 3, 5, 7, 14, 22 and 29 dpi). As discussed above, these results confirmed the findings of the constitutive mutant (reduction of Pax7+ cells in uninjured 5-month-old muscle) as well as showing the increased number at 5dpi and return to WT levels at 29 dpi.

      (3) Rescue of the BMP pathway in the model would be further supportive of the authors' findings.

      This point is valid. In a parallel study examining the role of the MuSK-BMP pathway at the NMJ, we have observed that BMP+/- (hypomorphs) recapitulate key phenotypes observed in ΔIg3-MuSK NMJs (Fish et al., bioRxiv, 2023). This point will be included in the Revision.

      (4) Is the stem cell pool maintained long term in the deleted dIg3-MuSK SCs? Or would they be lost with extended treatment since they are reduced at the 5-month experiments? This is an important point and should be considered/discussed relevant to thinking about these data therapeutically.

      We agree that this is an important point for future studies. 

      (5) Without the Pax7-specific targeting, when you target dIg3-MuSK in the entire muscle, what happens to the neuromuscular nuclei?

      A manuscript describing the phenotype of the NMJ in ΔIg3-MuSK constitutive mice is in bioRxiv (Fish et al., 2024) and is in Revision at another journal. We anticipate discussing the findings in the Revised version of the current manuscript.

      (6) Why were differences seen in males and not females? Is XIST downregulation occurring in both sexes? Could the authors explain these findings in more detail?

      The male and female difference in myofiber size is of interest.  The nanostring experiments,  which showed the XIST reduction, were only performed in male mice.

    1. “Republicans came in with a higher court and said this is illegal, it’s voter fraud, and that they’re trying to steal the election.”

      I was raised in a Democratic household; my mom is very Democratic. I know Republicans and Democrats have their issues, and this just makes me remember the time my mom's coworker was upset with my mom, because she is a Democrat, after Biden won, thinking there was voter fraud.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public reviews:

      Reviewer #1:

      (1) Which allele is alr1, the one upstream of mazEF or the one in the lysine biosynthetic operon?

      Alr1 is encoded by SAUSA300_2027 and is the gene upstream of mazEF. We have now incorporated this information in the manuscript (Line# 127).

      (2) Figure 3B. Where does the C3N2 species come from in the WT and why is it absent in the mutants? It is about 25% of the total dipeptide pool.

      In Figure 3B, C3N2 species results from the combination of C3N1 (from Alr1) and C0N1 (from Dat). The reason this species is completely absent in either of the two mutants is because it requires one D-Ala from both Alr1 and Dat proteins to generate C3N2 D-Ala-D-Ala.

      (3) Figure 3D could perhaps be omitted. I understand that the authors attained statistical significance in the fitness defect, but biologically this difference is very minor. One would have to look at the isotopomer distribution in the Dat overexpressing strain to make sure that increased flux actually occurred since there are other means of affecting activity (e.g. allosteric modulators).

      Thank you for the suggestion. We agree with the reviewer that the fitness defect observed after increased dat expression is relatively minor and have moved this figure to the supplementary section as Figure 3-figure supplement 1.

      Although we attempted to amplify the fitness defect of dat expression by cloning dat on to a multicopy vector, we couldn't maintain its stable expression in S. aureus. This instability may be due to the depletion of D-Ala when dat is overexpressed. As a result, we switched to expressing dat from a single additional copy integrated into the SaPI locus, which was sufficient to cause the expected fitness defect, albeit a minor one.

      (4) In Figure 4A, why is the complete subunit UDP-NAM-AEKAA increasing in each strain upon acetate challenge if there was such a stark reduction in D-Ala-D-Ala, particularly in the ∆alr1 mutant? For that matter, why are the levels of UDP-NAM-AEKAA in the ∆alr1 mutant identical to that of WT with/out acetate?

      Thank you for raising this important point. We have addressed this in line# 299-302 and 451-455 of the revised manuscript. In short, we believe that the inhibition of Ddl by acetate significantly increases the intracellular pool of the tripeptide UDP-NAM-AEK, which then outcompetes the substrate (pentapeptide; UDP-NAM-AEKAA) of MraY. As a result, the intracellular concentration of the pentapeptide increases since it is no longer efficiently consumed by MraY. This explanation is also supported by a kinetic study conducted in Ref (1), where the competition between UDP-NAM-AEKAA and UDP-NAM-AEK as substrates for MraY is demonstrated.

      (5) Figure 4B. Is there no significant difference between ddl and murF transcripts between WT and ∆alr1 under acetate stress? This comparison was not labeled if the tests were done.

      Thank you for suggesting this comparison. The ddl and murF transcripts between WT and alr1 under acetate stress were significantly different. We have added this comparison to Figure 4B.

      (6) Although tricky, it is possible to measure intracellular acetate. It might be of interest to know where in the Ddl inhibition curve the cells actually are.

      Thank you for the suggestion. We agree this would have been an excellent addition to the manuscript. However, accurately measuring intracellular acetate would require the use of radiolabeled acetate (2), and we currently lack the expertise to do this experiment. However, since our study clearly shows that acetate-mediated growth impairment is due to Ddl inhibition, and the IC50 of acetate for Ddl is around 400 mM, we predict that the intracellular concentration must be close to or above this IC50 to observe the growth phenotypes we report.

      Reviewer #2:

      Although the authors have conclusively shown that Ddl is the target of acetic acid, it appears that the acetic acid concentration used in the experiments may not truly reflect the concentration range S. aureus would experience in its environment. Moreover, Ddl is only significantly inhibited at a very high acetate concentration (>400 mM). Thus, additional experiments showing growth phenotypes at lower organic acid concentrations may be beneficial.

      Thank you for the suggestion. In response to the reviewer, we have measured growth at various acetate concentrations and demonstrate a concentration-dependent effect (Figure 1C).

      We use 20 mM acetic acid in our study. In the gut, where S. aureus colonizes, acetate levels can reach up to 100 mM, so we believe our concentrations are physiologically relevant. When S. aureus encounters 20 mM acetate, the intracellular concentration can rise to 600 mM if the transmembrane pH gradient is 1.5 units, which is well above the ~400 mM IC50 we report for Ddl.

      Another aspect not adequately discussed is the presence of D-ala in the gut environment, which may be protective against acetate toxicity based on the model provided.

      Thank you for pointing this out. We agree that D-Ala from the gut microbiota could protect against acetate toxicity, and we’ve included this in the discussion. However, our study clearly indicates that S. aureus itself maintains high intracellular D-Ala levels through Alr1 activity which is sufficient to counter acetate anion intoxication.

      Recommendation for the authors:

      Reviewer #2:

      Major Comments:

      (1) In Line 85, authors indicate S. aureus may encounter a high concentration of ~100 mM acetic acid (extracellular?). Could the authors cite more (and recent) references indicating S. aureus encounters >100 mM acetic acid in the environment?

      To the best of our knowledge, no studies have specifically examined whether S. aureus encounters high mM concentration of acetate in the gut. Line 85 was surmised from multiple studies: recent findings that S. aureus colonizes the gut (3, 4) and that the gut environment has high acetate levels (~100 mM) (5). In response to the reviewers request, more recent references supporting high acetate concentrations in the gut (6, 7) have been added in Line# 86.

      (2) In Line 117, it is mentioned that S. aureus when grown in vitro at 20 mM acetic acid can accumulate ~600 mM acetic acid in the cytoplasm.

      a. Does the intracellular concentration go up proportionally if grown in 100 mM acetic acid? Given the IC50 of acetic acid-mediated inhibition of Ddl is ~400 mM, I wonder how physiologically relevant this finding presented here is.

      Thank you for the opportunity to explain this further. If S. aureus encounters a concentration of 100 mM acetate and its transmembrane pH gradient (pHin-pHout) is held at 1.5, the intracellular concentration of acetate could theoretically increase up to 3 M based on Ref (8). However, previous studies have shown that bacteria can lower the magnitude of transmembrane pH gradient by decreasing their intracellular pH to limit accumulation of anions within cells (9, 10).

      Although our study shows that the IC50 of Ddl inhibition by acetate is relatively high (~400 mM), we believe it’s still relevant because just 20 mM of environmental acetate at a pH of 6.0 can raise the intracellular concentration of acetate to over 600 mM, which is well above the IC50 we report for Ddl. Moreover, since S. aureus may encounter high concentrations of acetate during gut colonization, we believe our findings are physiologically relevant.
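
      As a rough check of that figure (my own back-of-the-envelope calculation, assuming acetate's pKa of about 4.76, an extracellular pH of 6.0 and an intracellular pH of 7.5, i.e. the 1.5-unit gradient mentioned above), the standard weak-acid accumulation relation gives roughly the same number:

      \[
      \frac{[\mathrm{acetate}]_{\mathrm{in}}}{[\mathrm{acetate}]_{\mathrm{out}}}
        = \frac{1 + 10^{\,\mathrm{pH}_{\mathrm{in}} - \mathrm{p}K_a}}{1 + 10^{\,\mathrm{pH}_{\mathrm{out}} - \mathrm{p}K_a}}
        = \frac{1 + 10^{2.74}}{1 + 10^{1.24}} \approx 30
      \]

      So 20 mM of extracellular acetate corresponds to roughly \(30 \times 20\ \mathrm{mM} \approx 600\ \mathrm{mM}\) inside the cell, consistent with the value quoted above and well above the ~400 mM IC50 reported for Ddl.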

      b. Could the authors show concentration-dependent growth inhibition in alr::tn by titrating a range of acetic acid concentrations (for example 0, 0.5, 1, 5, 10, 20 mM)? Measuring intracellular acetate concentration may be beneficial as well.

      Thank you for this question. We now provide data to support that acetate-mediated inhibition of the alr1 mutant is concentration-dependent (see Figure 1C).

      c. It appears that there may be excess D-ala in the gut environment (PMIDs: 30559391; 35816159), which could counter the high acetate based on the model presented here. Could the authors clarify and/or include this information in the manuscript?

      This is an excellent point, and we have now included it in the discussion (Line# 470-475). It is indeed possible that D-Ala produced by the gut microbiome may further enhance S. aureus resistance to organic acid anions, in addition to the inherent contribution of Alr1 activity.

      (3) The following is not needed; however, it would be interesting if the authors could show that S. aureus cells grown in the presence of acetate are highly sensitive to cycloserine (which targets Alr and Ddl) compared to cells grown in the absence of acetate.

      Thank you for the suggestion. We are currently studying D-cycloserine (DCS) resistance in S. aureus. Although we provide the data below for clarification, it is not included in the current manuscript as it is part of a separate study.

      As the reviewer speculated, S. aureus is more susceptible to DCS when grown in the presence of acetate (see figure below). Normally, complete growth inhibition occurs at 32 µg/ml of DCS. However, with 20 mM acetic acid present, complete inhibition is achieved at just 8 µg/ml of DCS. Furthermore, the growth inhibition is completely rescued when externally supplemented with 5 mM D-Ala. We believe that DCS works synergistically with acetate to inhibit Ddl activity, and we are conducting additional studies to explore this further.

      Minor Comments:

      (1) Many commas are missing.

      Missing commas are now incorporated.

      (2) Line 77: disassociate --> dissociate

      Corrected.

      (3) Line 103: that --> which

      Corrected.

      (4) Lines 199-203: authors could have used gfp/luciferase reporter to test their hypotheses.

      Thank you for the suggestion. Initially, we created GFP translational fusions for all the mutants mentioned in Line# 199-203. However, the fluorescence intensity was too low to test the hypothesis, as these were single-copy fusions inserted at the SaPI site of the S. aureus genome. Because of this limitation, we took advantage of the essentiality of D-Ala-D-Ala in S. aureus to report on various mutants instead of a fluorescent reporter. In hindsight, a LacZ reporter assay might have been equally effective.

      (5) Line 339: It would be beneficial to introduce that Ddl has two independent ATP and D-ala binding sites.

      We have now added that information (Line# 338-339).

      (6) Is ddl an essential gene? If so, explicitly mention that.

      Yes, ddl is an essential gene and we have now incorporated this information in Line 103.

      (7) Line 354: shows a difference in density?

      The term “difference density” is a technical crystallographic term commonly used to connote density observed for ligands in X-ray crystal structures. In this case, it simply refers to the observed density that corresponds to the two acetate ions bound within the Ddl active site.

      (8) Line 498: "Thus." Typo, change period to comma.

      We have corrected as suggested in Line 496.

      (9) Figure 1 legend says "was screen" instead of screened.

      This is now corrected.

      (10) Figure 1- Figure Supplement 1B: including data for alr2::tn dat::tn may ensure no redundancy (Lines 171-172). It is currently missing.

      Thank you for the suggestion. We now include both the alr2dat double mutant and the alr1alr2dat triple mutant in Figure 1 - Figure Supplement 1B. In addition, we also show that the alr1alr2dat mutant is rescued by the addition of D-Ala in Figure 1 - Figure Supplement 1C. The mutant information is also added to Table S5.

      (11) Figure 7: pentaglycine coming off of NAM is misleading. Remove untethered pentaglycine bridges.

      We thank you for pointing this out. We have modified the figure in the manuscript as suggested by the reviewer.

      (12) Are alr1/ddl cells (with limited 4-3 PG crosslink) less sensitive to vancomycin?

      On the contrary, the alr1 mutant is slightly more sensitive to vancomycin compared to the wild-type strain (see Figure below). We believe this happens because the alr1 mutant incorporates less D-Ala-D-Ala into the peptidoglycan, reducing the number of targets for vancomycin. As a result, vancomycin may be able to saturate the available D-Ala-D-Ala targets on the cell wall at a lower concentration in the alr1 mutant than in the wild type strain, leading to increased sensitivity. We haven’t included this data in the manuscript as it is part of a separate study.

      (13) Based on the structural studies, could the authors mutate the residues of Ddl involved in acetic acid binding, thereby making it resistant to acetic acid stress?

      The residues that the acetate anion interacts with are located within the ATP-binding and D-Ala-binding sites of Ddl. Since these residues are essential for Ddl function, we are unable to mutate them.

      (14) Microscopy to show the cell morphologies of wild-type and mutants exposed to acetic acid (and with D-ala supplementation) could be potentially interesting.

      Thank you for the suggestion. We did perform microscopy, expecting changes in cell shape or size, but the results were unremarkable and not included in the manuscript.

      References:

      (1) Hammes WP & Neuhaus FC (1974) On the specificity of phospho-N-acetylmuramyl-pentapeptide translocase. The peptide subunit of uridine diphosphate-N-actylmuramyl-pentapeptide. J Biol Chem 249(10):3140-3150.

      (2) Roe AJ, McLaggan D, Davidson I, O'Byrne C, & Booth IR (1998) Perturbation of anion balance during inhibition of growth of Escherichia coli by weak acids. J Bacteriol 180(4):767-772.

      (3) Acton DS, Plat-Sinnige MJ, van Wamel W, de Groot N, & van Belkum A (2009) Intestinal carriage of Staphylococcus aureus: how does its frequency compare with that of nasal carriage and what is its clinical impact? Eur J Clin Microbiol Infect Dis 28(2):115-127.

      (4) Piewngam P, et al. (2023) Probiotic for pathogen-specific Staphylococcus aureus decolonisation in Thailand: a phase 2, double-blind, randomised, placebo-controlled trial. Lancet Microbe 4(2):e75-e83.

      (5) Cummings JH, Pomare EW, Branch WJ, Naylor CP, & Macfarlane GT (1987) Short chain fatty acids in human large intestine, portal, hepatic and venous blood. Gut 28(10):1221-1227.

      (6) Correa-Oliveira R, Fachi JL, Vieira A, Sato FT, & Vinolo MA (2016) Regulation of immune cell function by short-chain fatty acids. Clin Transl Immunology 5(4):e73.

      (7) Hosmer J, McEwan AG, & Kappler U (2024) Bacterial acetate metabolism and its influence on human epithelia. Emerg Top Life Sci 8(1):1-13.

      (8) Carpenter CE & Broadbent JR (2009) External concentration of organic acid anions and pH: key independent variables for studying how organic acids inhibit growth of bacteria in mildly acidic foods. J Food Sci 74(1):R12-15.

      (9) Russell JB (1992) Another explanation for the toxicity of fermentation acids at low pH: anion accumulation versus uncoupling. Journal of Applied Bacteriology 73(5):363-370.

      (10) Russell JB & Diez-Gonzalez F (1998) The effects of fermentation acids on bacterial growth. Adv Microb Physiol 39:205-234.

    1. 12.1.2. Memes# In the 1976 book The Selfish Gene, evolutionary biologist Richard Dawkins1 said rather than looking at the evolution of organisms, it made even more sense to look at the evolution of the genes of those organisms (sections of DNA that perform some functions and are inherited). For example, if a bee protects its nest by stinging an attacking animal and dying, then it can’t reproduce and it might look like a failure of evolution. But if the gene that told the bee to die protecting the nest was shared by the other bees in the nest, then that one bee dying allows the gene to keep being replicated, so the gene is successful evolutionarily. Since genes contained information about how organisms would grow and live, then biological evolution could be considered to be evolving information. Dawkins then took this idea of the evolution of information and applied it to culture, coining the term “meme” (intended to sound like “gene”). A meme is a piece of culture that might reproduce in an evolutionary fashion, like a hummable tune that someone hears and starts humming to themselves, perhaps changing it, and then others overhearing next. In this view, any piece of human culture can be considered a meme that is spreading (or failing to spread) according to evolutionary forces. So we can use an evolutionary perspective to consider the spread of: Technology (languages, weapons, medicine, writing, math, computers, etc.), religions philosophies political ideas (democracy, authoritarianism, etc.) art organizations etc. We can even consider the evolutionary forces that play in the spread of true and false information (like an old saying: “A lie is halfway around the world before the truth has got its boots on.”)

      I think this idea of memes is pretty cool because it explains why some trends, like popular music or viral videos, catch on quickly and become huge, while others fade away. It's like there’s a “survival of the fittest” happening with ideas, just like with genes in nature. For example, technology or social media could be seen as memes that keep evolving and becoming more advanced because people keep building on what came before. This chapter made me realize that culture isn’t just about what we create; it’s also about how these creations spread and stick around. It's interesting to think that we’re not just passing down physical traits but also ideas and beliefs that shape the future.

    2. Natural Selection Some characteristics make it more or less likely for an organism to compete for resources, survive, and make copies of itself

      I studied this in biology class during my high school years, and what impressed me the most was the part about natural selection. I found that, just as Darwin's theory of evolution says, every living thing goes through survival of the fittest, following genetic optimization in trying to survive from one generation to the next. That's what our world is all about; it's brutal but also realistic.

    1. I suppose these orgs don’t have an issue with cave paintings, which are just another form of social media. Books? Ok. eBooks? Ok. Book reviews? Ok. A book review on a social platform? Someone’s comment on a book review? A number which represents the number of people who agree with that book review?Where exactly would you like to draw the line between kosher and caustic?

      I think it's really funny and self-incriminating to use the word "kosher" as though that didn't evoke thousands of years of thoughtful community discourse about where lines should be drawn and why.

  3. static1.squarespace.com static1.squarespace.com
    1. Indian spirits are demons

      It's really telling just how effective colonization was, if it was able to convince some people in later generations that spirits that are part of their heritage and culture are evil.

    1. Conal Elliott introduces 'Denotational Design' as his central paradigm for software and library design.

      Quote: "I call it denotational design."

      He emphasizes that the primary job of a software designer is to build precise abstractions, focusing on 'what' rather than 'how'.

      Quote: "So I want to start out by talking about what I see as the main job of a software designer, which is to build abstractions."

      He references Edsger Dijkstra's perspective on abstraction to highlight the need for precision in software design.

      Quote: "This is a quote I like very much from a man I respect very much, Edgar Dykstra, and he said the purpose of abstraction is not to be vague... it's to create a whole new semantic level in which one can be absolutely precise."

      He identifies a common issue in software development: the focus on precision about implementation ('how') rather than specification ('what').

      Quote: "So I'm going to say something that may be a little jarring, which is that the state of the... commonly practiced state of the art in software is something that is precise only about how, not about what."

      He stresses the importance of making specifications precise to avoid self-deception in software development.

      Quote: "So the reason I harp onto precision is because it's so easy to fool ourselves and precision is what keeps us away from doing that."

      He cites Bertrand Russell's observation on the inherent vagueness of concepts until made precise.

      Quote: "Everything is vague to a degree you do not realize until you've tried to make it precise."

      He discusses the inadequacy of the term 'functional programming' and introduces 'denotational programming' as a better-defined alternative, referencing Peter Landin's work.

      Quote: "Peter Landon suggested term denotated... having three properties... every expression denotes something... that something depends only on the denotations of the sub-expressions."

      He defines 'Denotational Design' as a methodology that provides precise, simple, and compelling specifications, and helps avoid abstraction leaks.

      Quote: "I call it denotational design... It gives us precise, simple, and compelling specifications... you do not have abstraction leaks."

      He outlines three goals in software projects: building precise, elegant, and reusable abstractions; creating fast, correct, and maintainable implementations; and producing simple, clear, and accurate documentation.

      Quote: "So I suggest there are three goals... I want my abstractions to be precise, elegant, and reusable... My implementation, I'd like it to be fast... correct... maintainable... and the documentation should also be simple and... accurate."

      He demonstrates Denotational Design through an example of designing a library for image synthesis and manipulation, engaging the audience in defining what an image is.

      Quote: "So an example I want to talk about is image synthesis and manipulation... What is an image?"

      He considers various definitions of an image, including arrays of pixels, functions over space, and collections of shapes, before settling on a mathematical model.

      Quote: "My answer is: it's an assignment of colors to 2D locations... there's a simple precise way to say that which is the function from location to colors."

      He applies the denotational approach to define the meanings of types and operations in his image library, emphasizing the importance of compositionality.

      Quote: "So now I'm giving a denotation... So the meaning of over top bot is... mu of top and mu of bot... Note the compositionality of mu."

      He improves the API by generalizing operations and types, introducing type parameters to increase flexibility and simplicity.

      Quote: "So let's generalize... instead of saying an image which is a single type, let's say an image of a... we'll make it be parameterized by its output."

      He introduces standard abstractions like Monoid, Functor, and Applicative, showing how his image type and operations fit into these abstractions, leveraging their laws and properties.

      Quote: "Now we can also look at a couple of other interfaces: monad and comonad."

      He explains the 'Semantic Type Class Morphism' principle, stating that the instance's meaning follows the meaning's instance, ensuring that standard abstractions' laws hold for his types.

      Quote: "This leads to this principle that I call the semantic type class morphism principle... The instance's meaning follows the meaning's instance."

      He demonstrates that by following this principle, his implementations are necessarily correct and free of abstraction leaks, as they preserve the laws of the standard abstractions.

      Quote: "These proofs always go through... There's nothing about imagery except the homomorphism property that makes these laws go through."

      He illustrates the principle with examples from his image library, such as showing that images form a Monoid and Functor due to their underlying semantics.

      Quote: "So images... Well, image has the right kind... Well, yes it is... Here's this operation we called lift one."

      He discusses how this approach allows for reusable and compositional reasoning, similar to how algebra uses abstract interfaces and laws.

      Quote: "So when I say laws hold, you should say what are you even talking about... So in order for a law to be satisfied... we have to say what equality means."

      He provides further examples of applying Denotational Design to other types, such as streams and linear transformations, showing the broad applicability of the approach.

      Quote: "Another example is... so we just follow these all through and they all work... linear transformations."

      He concludes by summarizing the benefits of Denotational Design, including precise specifications, correct implementations, and the elimination of abstraction leaks, and invites further discussion.

      Quote: "I think it's a good place to stop... I'm happy to take any questions... I'd love to hear from you."

    1. Saying and doing provocative, shocking, and offensive things can also be an effective political strategy, and getting viral attention through others’ negative reactions has been seen as a key component of Donald Trump’s political successes.

      Generally, negative speech and rhetoric gain the most attention and motivate people to comment on and interact with posts, making the algorithm believe that the posts in question are ones users like. It's a never-ending cycle of negativity that I think has greatly divided the political scene and fueled the irrational behavior of social media users.

    1. It’s just that we’re looking for it,” Woodbury said. “As soon as other states start looking and doing testing, they’re going to find the same thing that Maine has: that there’s contaminated farmland, that they need to deal with it and it’s going to cost a lot of money.”

      Wonder how many will find out their land is contaminated.

    1. Its roots, though, don’t just lie in explicitly Christian tradition. In fact, it’s possible to trace the origins of the American prosperity gospel to the tradition of New Thought, a nineteenth-century spiritual movement popular with decidedly unorthodox thinkers like Ralph Waldo Emerson and William James. Practitioners of New Thought, not all of whom identified as Christian, generally held the divinity of the individual human being and the priority of mind over matter. In other words, if you could correctly channel your mental energy, you could harness its material results. New Thought, also known as the “mind cure,” took many forms: from interest in the occult to splinter-Christian denominations like Christian Science to the development of the “talking cure” at the root of psychotherapy. The upshot of New Thought, though, was the quintessentially American idea that the individual was responsible for his or her own happiness, health, and situation in life, and that applying mental energy in the appropriate direction was sufficient to cure any ills.
    1. One of the ways social media can be beneficial to mental health is in finding community (at least if it is a healthy one, and not toxic like in the last section). For example, if you are bullied at school (and by classmates on some social media platform), you might find a different online community online that supports you. Or take the example of Professor Casey Fiesler finding a community that shared her interests (see also her article):

      I’m so glad that this exists and people can find community and help when they are struggling. I myself have had to find help from time to time and now have the honor and privilege of helping others. I guess it’s about balance: just like the world itself, there can be just as much good as there is bad out there, and you can find either one very easily.

    1. The CIA and DIA decided they should investigate and know as much about it as possible.

      "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it" "theres no reason to want it"

      Astrolabe

      An astrolabe (Greek: ἀστρολάβος astrolábos, 'star-taker'; Arabic: ٱلأَسْطُرلاب al-Asṭurlāb; Persian: ستاره‌یاب Setāreyāb) is an astronomical instrument dating to ancient times. It serves as a star chart and physical model of visible heavenly bodies. Its various functions also make it an elaborate inclinometer and an analog calculation device capable of working out several kinds of problems in astronomy. In its simplest form it is a metal disc with a pattern of wires, cutouts, and perforations that allows a user to calculate astronomical positions precisely. It is able to measure the altitude above the horizon of a celestial body, day or night; it can be used to identify stars or planets, to determine local latitude given local time (and vice versa), to survey, or to triangulate. It was used in classical antiquity, the Islamic Golden Age, the European Middle Ages and the Age of Discovery for all these purposes.

      The astrolabe, which is a precursor to the sextant,^[1]^ is effective for determining latitude on land or calm seas. Although it is less reliable on the heaving deck of a ship in rough seas, the mariner's astrolabe was developed to solve that problem.

      Applications

      16th-century woodcut of measurement of a building's height with an astrolabe

      The 10th-century astronomer ʿAbd al-Raḥmān al-Ṣūfī wrote a massive text of 386 chapters on the astrolabe, which reportedly described more than 1,000 applications for the astrolabe's various functions.^[2]^ These ranged from the astrological, the astronomical and the religious, to navigation, seasonal and daily time-keeping, and tide tables. At the time of their use, astrology was widely considered as much of a serious science as astronomy, and study of the two went hand-in-hand. The astronomical interest varied between folk astronomy (of the pre-Islamic tradition in Arabia) which was concerned with celestial and seasonal observations, and mathematical astronomy, which would inform intellectual practices and precise calculations based on astronomical observations. In regard to the astrolabe's religious function, the demands of Islamic prayer times were to be astronomically determined to ensure precise daily timings, and the qibla, the direction of Mecca towards which Muslims must pray, could also be determined by this device. In addition to this, the lunar calendar that was informed by the calculations of the astrolabe was of great significance to the religion of Islam, given that it determines the dates of important religious observances such as Ramadan.^[citation needed]^

      Etymology

      The Oxford English Dictionary gives the translation "star-taker" for the English word astrolabe and traces it through medieval Latin to the Greek word ἀστρολάβος : astrolábos,^[3]^^[4]^ from ἄστρον : astron "star" and λαμβάνειν : lambanein "to take".^[5]^

      In the medieval Islamic world the Arabic word al-Asturlāb (i.e., astrolabe) was given various etymologies. In Arabic texts, the word is translated as ākhidhu al-Nujūm (Arabic: آخِذُ ٱلنُّجُومْ, lit. 'star-taker'), a direct translation of the Greek word.^[6]^

      Al-Biruni quotes and criticises medieval scientist Hamza al-Isfahani who stated:^[6]^ "asturlab is an arabisation of this Persian phrase" (sitara yab, meaning "taker of the stars").^[7]^ In medieval Islamic sources, there is also a folk etymology of the word as "lines of lab", where "Lab" refers to a certain son of Idris (Enoch). This etymology is mentioned by a 10th-century scientist named al-Qummi but rejected by al-Khwarizmi.^[8]^

      History

      Ancient era

      An astrolabe is essentially a plane (two-dimensional) version of an armillary sphere, which had already been invented in the Hellenistic period and probably been used by Hipparchus to produce his star catalogue. Theon of Alexandria (c. 335 -- c. 405) wrote a detailed treatise on the astrolabe.^[9]^ The invention of the plane astrolabe is sometimes wrongly attributed to Theon's daughter Hypatia (born c. 350--370; died AD 415),^[10]^^[11]^^[12]^^[13]^ but it's known to have been used much earlier.^[11]^^[12]^^[13]^ The misattribution comes from a misinterpretation of a statement in a letter written by Hypatia's pupil Synesius (c. 373 -- c. 414),^[11]^^[12]^^[13]^ which mentions that Hypatia had taught him how to construct a plane astrolabe, but does not say that she invented it.^[11]^^[12]^^[13]^ Lewis argues that Ptolemy used an astrolabe to make the astronomical observations recorded in the Tetrabiblos.^[9]^ However, Emilie Savage-Smith notes "there is no convincing evidence that Ptolemy or any of his predecessors knew about the planispheric astrolabe".^[14]^ In chapter 5,1 of the Almagest, Ptolemy describes the construction of an armillary sphere, and it is usually assumed that this was the instrument he used.

      Astrolabes continued to be used in the Byzantine Empire. Christian philosopher John Philoponus wrote a treatise (c. 550) on the astrolabe in Greek, which is the earliest extant treatise on the instrument.^[a]^ Mesopotamian bishop Severus Sebokht also wrote a treatise on the astrolabe in the Syriac language during the mid-7th century.^[b]^ Sebokht refers to the astrolabe as being made of brass in the introduction of his treatise, indicating that metal astrolabes were known in the Christian East well before they were developed in the Islamic world or in the Latin West.^[15]^

      Medieval era

      Astrolabes were further developed in the medieval Islamic world, where Muslim astronomers introduced angular scales to the design,^[16]^ adding circles indicating azimuths on the horizon.^[17]^ It was widely used throughout the Muslim world, chiefly as an aid to navigation and as a way of finding the Qibla, the direction of Mecca. Eighth-century mathematician Muhammad al-Fazari is the first person credited with building the astrolabe in the Islamic world.^[18]^

      The mathematical background was established by Muslim astronomer Albatenius in his treatise Kitab az-Zij (c. AD 920), which was translated into Latin by Plato Tiburtinus (De Motu Stellarum). The earliest surviving astrolabe is dated AH 315 (AD 927--928). In the Islamic world, astrolabes were used to find the times of sunrise and the rising of fixed stars, to help schedule morning prayers (salat). In the 10th century, al-Sufi first described over 1,000 different uses of an astrolabe, in areas as diverse as astronomy, astrology, navigation, surveying, timekeeping, prayer, Salat, Qibla, etc.^[19]^^[20]^

      An Arab astrolabe from 1208

      The spherical astrolabe was a variation of both the astrolabe and the armillary sphere, invented during the Middle Ages by astronomers and inventors in the Islamic world.^[c]^ The earliest description of the spherical astrolabe dates to Al-Nayrizi (fl. 892--902). In the 12th century, Sharaf al-Dīn al-Tūsī invented the linear astrolabe, sometimes called the "staff of al-Tusi", which was "a simple wooden rod with graduated markings but without sights. It was furnished with a plumb line and a double chord for making angular measurements and bore a perforated pointer".^[21]^ The geared mechanical astrolabe was invented by Abi Bakr of Isfahan in 1235.^[22]^

      The first known metal astrolabe in Western Europe is the Destombes astrolabe made from brass in the eleventh century in Portugal.^[23]^^[24]^ Metal astrolabes avoided the warping that large wooden ones were prone to, allowing the construction of larger and therefore more accurate instruments. Metal astrolabes were heavier than wooden instruments of the same size, making it difficult to use them in navigation.^[25]^

      Spherical astrolabe

      A depiction of Hermann of Reichenau with an astrolabe in a 13th-century manuscript by Matthew Paris

      Herman Contractus of Reichenau Abbey examined the use of the astrolabe in Mensura Astrolai during the 11th century.^[26]^ Peter of Maricourt wrote a treatise on the construction and use of a universal astrolabe in the last half of the 13th century entitled Nova compositio astrolabii particularis. Universal astrolabes can be found at the History of Science Museum in Oxford.^[27]^ David A. King, historian of Islamic instrumentation, describes the universal astrolabe designed by Ibn al-Sarraj of Aleppo (aka Ahmad bin Abi Bakr; fl. 1328) as "the most sophisticated astronomical instrument from the entire Medieval and Renaissance periods".^[28]^

      English author Geoffrey Chaucer (c. 1343--1400) compiled A Treatise on the Astrolabe for his son, mainly based on a work by Messahalla or Ibn al-Saffar.^[29]^^[30]^ The same source was translated by French astronomer and astrologer Pélerin de Prusse and others. The first printed book on the astrolabe was Composition and Use of Astrolabe by Christian of Prachatice, also using Messahalla, but relatively original.

      Front of an Indian astrolabe now kept at the Royal Museum of Scotland at Edinburgh.

      In 1370, the first Indian treatise on the astrolabe was written by the Jain astronomer Mahendra Suri, titled Yantrarāja.^[31]^

      A simplified astrolabe, known as a balesilha, was used by sailors to get an accurate reading of latitude while at sea. The use of the balesilha was promoted by Prince Henry (1394--1460) while navigating for Portugal.^[32]^

      The astrolabe was almost certainly first brought north of the Pyrenees by Gerbert of Aurillac (future Pope Sylvester II), where it was integrated into the quadrivium at the school in Reims, France, sometime before the turn of the 11th century.^[33]^ In the 15th century, French instrument maker Jean Fusoris (c. 1365--1436) also started remaking and selling astrolabes in his shop in Paris, along with portable sundials and other popular scientific devices of the day.

      Astronomical Instrument Detail by Ieremias Palladas 1612

      Thirteen of his astrolabes survive to this day.^[34]^ One more special example of craftsmanship in early 15th-century Europe is the astrolabe designed by Antonius de Pacento and made by Dominicus de Lanzano, dated 1420.^[35]^

      In the 16th century, Johannes Stöffler published Elucidatio fabricae ususque astrolabii, a manual of the construction and use of the astrolabe. Four identical 16th-century astrolabes made by Georg Hartmann provide some of the earliest evidence for batch production by division of labor. In 1612, Greek painter Ieremias Palladas incorporated a sophisticated astrolabe in his painting depicting Catherine of Alexandria. The painting was entitled Catherine of Alexandria and featured a device called the System of the Universe (Σύστημα τοῦ Παντός). The device featured the planets with the names in Greek: Selene (Moon), Hermes (Mercury), Aphrodite (Venus), Helios (Sun), Ares (Mars), Zeus (Jupiter), and Chronos (Saturn). The device also featured celestial spheres following the Ptolemaic model and Earth was depicted as a blue sphere with circles of geographic coordinates. A complex line representing the axis of the Earth covered the entire instrument.^[36]^

      Medieval astrolabes

      Astrolabes and clocks

      Amerigo Vespucci observing the Southern Cross by looking over the top of an armillary sphere bizarrely held from the top as if it were an astrolabe; however, an astrolabe cannot be used by looking over its top. The page inexplicably contains the word astrolabium. By Jan Collaert II. Museum Plantin-Moretus, Antwerp, Belgium.

      Mechanical astronomical clocks were initially influenced by the astrolabe; they could be seen in many ways as clockwork astrolabes designed to produce a continual display of the current position of the sun, stars, and planets. For example, Richard of Wallingford's clock (c. 1330) consisted essentially of a star map rotating behind a fixed rete, similar to that of an astrolabe.^[37]^

      Many astronomical clocks use an astrolabe-style display, such as the famous clock at Prague, adopting a stereographic projection (see below) of the ecliptic plane. In recent times, astrolabe watches have become popular. For example, Swiss watchmaker Ludwig Oechslin designed and built an astrolabe wristwatch in conjunction with Ulysse Nardin in 1985.^[38]^ Dutch watchmaker Christiaan van der Klaauw also manufactures astrolabe watches today.^[39]^

      Construction

      An astrolabe consists of a disk, called the mater (mother), which is deep enough to hold one or more flat plates called tympans, or climates. A tympan is made for a specific latitude and is engraved with a stereographic projection of circles denoting azimuth and altitude and representing the portion of the celestial sphere above the local horizon. The rim of the mater is typically graduated into hours of time, degrees of arc, or both.^[40]^

      Above the mater and tympan, the rete, a framework bearing a projection of the ecliptic plane and several pointers indicating the positions of the brightest stars, is free to rotate. These pointers are often just simple points, but depending on the skill of the craftsman can be very elaborate and artistic. There are examples of astrolabes with artistic pointers in the shape of balls, stars, snakes, hands, dogs' heads, and leaves, among others.^[40]^ The names of the indicated stars were often engraved on the pointers in Arabic or Latin.^[41]^ Some astrolabes have a narrow rule or label which rotates over the rete, and may be marked with a scale of declinations.

      The rete, representing the sky, functions as a star chart. When it is rotated, the stars and the ecliptic move over the projection of the coordinates on the tympan. One complete rotation corresponds to the passage of a day. The astrolabe is, therefore, a predecessor of the modern planisphere.

      On the back of the mater, there is often engraved a number of scales that are useful in the astrolabe's various applications. These vary from designer to designer, but might include curves for time conversions, a calendar for converting the day of the month to the sun's position on the ecliptic, trigonometric scales, and graduation of 360 degrees around the back edge. The alidade is attached to the back face. An alidade can be seen in the lower right illustration of the Persian astrolabe above. When the astrolabe is held vertically, the alidade can be rotated and the sun or a star sighted along its length, so that its altitude in degrees can be read ("taken") from the graduated edge of the astrolabe; hence the word's Greek roots: "astron" (ἄστρον) = star + "lab-" (λαβ-) = to take. The alidade had vertical and horizontal cross-hairs which plot locations on an azimuthal ring called an almucantar (altitude-distance circle).

      An arm called a radius connects the center of the astrolabe to the optical axis, which is parallel with another arm also called a radius. The other radius carries graduations for altitude and distance measurements.

      A shadow square, developed by Muslim astrologers in the 9th century, also appears on the back of some astrolabes, whereas devices of the Ancient Greek tradition featured only altitude scales on the back.^[42]^ It was used to convert between shadow lengths and the altitude of the sun, with uses ranging from surveying to measuring inaccessible heights.^[43]^

      Devices were usually signed by their maker with an inscription appearing on the back of the astrolabe, and if there was a patron of the object, their name would appear inscribed on the front, or in some cases, the name of the reigning sultan or the teacher of the astrolabist has also been found to appear inscribed in this place.^[44]^ The date of the astrolabe's construction was often also signed, which has allowed historians to determine that these devices are the second oldest scientific instrument in the world. The inscriptions on astrolabes also allowed historians to conclude that astronomers tended to make their own astrolabes, but that many were also made to order and kept in stock to sell, suggesting there was some contemporary market for the devices.^[44]^

      Construction of astrolabes

      • The Hartmann astrolabe in Yale collection. This instrument shows its rete and rule.

      • Celestial Globe, Isfahan (?), Iran 1144. Shown at the Louvre Museum, this globe is the third oldest surviving in the world.

      • Computer-generated planispheric astrolabe

      Mathematical basis

      The construction and design of astrolabes are based on the application of the stereographic projection of the celestial sphere. The point from which the projection is usually made is the South Pole. The plane onto which the projection is made is that of the Equator.^[45]^

      Designing a tympanum through stereographic projection

      Parts of an Astrolabe tympanum

      The tympanum captures the celestial coordinate axes upon which the rete will rotate. It is the component that will enable the precise determination of a star's position at a specific time of day and year.

      Therefore, it should project:

      1. The zenith, which will vary depending on the latitude of the astrolabe user.
      2. The horizon line and almucantar or circles parallel to the horizon, which will allow for the determination of a celestial body's altitude (from the horizon to the zenith).
      3. The celestial meridian (north-south meridian, passing through the zenith) and secondary meridians (circles intersecting the north-south meridian at the zenith), which will enable the measurement of azimuth for a celestial body.
      4. The three main circles of latitude (Capricorn, Equator, and Cancer) to determine the exact moments of solstices and equinoxes throughout the year.

      The tropics and the equator define the tympanum

      Stereographic projection of Earth's tropics and equator from the South Pole.

      On the right side of the image above:

      1. The blue sphere represents the celestial sphere.
      2. The blue arrow indicates the direction of true north (the North Star).
      3. The central blue point represents Earth (the observer's location).
      4. The geographic south of the celestial sphere acts as the projection pole.
      5. The celestial equatorial plane serves as the projection plane.
      6. Three parallel circles represent the projection on the celestial sphere of Earth's main circles of latitude:

      When projecting onto the celestial equatorial plane, three concentric circles correspond to the celestial sphere's three circles of latitude (left side of the image). The largest of these, the projection on the celestial equatorial plane of the celestial Tropic of Capricorn, defines the size of the astrolabe's tympanum. The center of the tympanum (and the center of the three circles) is actually the north-south axis around which Earth rotates, and therefore, the rete of the astrolabe will rotate around this point as the hours of the day pass (due to Earth's rotational motion).
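
      Concretely, those circle sizes follow from the standard radius formula for a stereographic projection from the south celestial pole: a circle of declination δ projects to a circle of radius R·tan(45° − δ/2), where R is the radius given to the celestial equator. (The formula and the 23.44° obliquity in the sketch below are additions for illustration; they are not stated in the text above.)

      ```haskell
      -- Radius of the projected circle for declination 'delta' (in degrees),
      -- scaled so that the celestial equator projects to radius 'r'.
      tympanumRadius :: Double -> Double -> Double
      tympanumRadius r delta = r * tan (degToRad (45 - delta / 2))
        where degToRad d = d * pi / 180

      main :: IO ()
      main = mapM_ print
        [ ("Tropic of Cancer",    tympanumRadius 1 23.44)    -- ~0.66: innermost circle
        , ("Celestial equator",   tympanumRadius 1 0.0)      -- 1.00: middle circle
        , ("Tropic of Capricorn", tympanumRadius 1 (-23.44)) -- ~1.52: outer edge of the tympanum
        ]
      ```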

      The three concentric circles on the tympanum are useful for determining the exact moments of solstices and equinoxes throughout the year: if the sun's altitude at noon on the rete is known and coincides with the outer circle of the tympanum (Tropic of Capricorn), it signifies the winter solstice (the sun will be at the zenith for an observer at the Tropic of Capricorn, meaning summer in the southern hemisphere and winter in the northern hemisphere). If, on the other hand, its altitude coincides with the inner circle (Tropic of Cancer), it indicates the summer solstice. If its altitude is on the middle circle (equator), it corresponds to one of the two equinoxes.

      The horizon and the measurement of altitude

      Stereographic projection of an observer's horizon at a specific latitude

      On the right side of the image above:

      1. The blue arrow indicates the direction of true north (the North Star).
      2. The central blue point represents Earth (the observer's location).
      3. The black arrow represents the zenith direction for the observer (which would vary depending on the observer's latitude).
      4. The two black circles represent the horizon surrounding the observer, which is perpendicular to the zenith vector and defines the portion of the celestial sphere visible to the observer, and its projection on the celestial equatorial plane.
      5. The geographic south of the celestial sphere acts as the projection pole.
      6. The celestial equatorial plane serves as the projection plane.

      When the horizon is projected onto the celestial equatorial plane, it becomes a circle shifted upward relative to the center of the tympanum (which marks both the observer and the projection of the north-south axis). This implies that a portion of the celestial sphere will fall outside the outer circle of the tympanum (the projection of the celestial Tropic of Capricorn) and, therefore, won't be represented.

      Stereographic projection of the horizon and an almucantar.

      Additionally, when circles parallel to the horizon up to the zenith (almucantars) are drawn and projected onto the celestial equatorial plane, as in the image above, a family of nested circles is constructed, allowing a star's altitude to be determined when the rete is overlaid on the designed tympanum.

      The meridians and the measurement of azimuth

      Stereographic projection of the north-south meridian and a meridian 40° E on the tympanum of an astrolabe

      On the right side of the image above:

      1. The blue arrow indicates the direction of true north (the North Star).
      2. The central blue point represents Earth (the observer's location).
      3. The black arrow represents the zenith direction for the observer (which would vary depending on the observer's latitude).
      4. The two black circles represent the horizon surrounding the observer, which is perpendicular to the zenith vector and defines the portion of the celestial sphere visible to the observer, and its projection on the celestial equatorial plane.
      5. The five red dots represent the zenith, the nadir (the point on the celestial sphere opposite the zenith with respect to the observer), their projections on the celestial equatorial plane, and the center (with no physical meaning attached) of the circle obtained by projecting the secondary meridian (see below) on the celestial equatorial plane.
      6. The orange circle represents the celestial meridian (or meridian that goes, for the observer, from the north of the horizon to the south of the horizon passing through the zenith).
      7. The two red circles represent a secondary meridian with an azimuth of 40° East relative to the observer's horizon (which, like all secondary meridians, intersects the principal meridian at the zenith and nadir), and its projection on the celestial equatorial plane.
      8. The geographic south of the celestial sphere acts as the projection pole.
      9. The celestial equatorial plane serves as the projection plane.

      When the celestial meridian is projected, it results in a straight line that overlaps with the vertical axis of the tympanum, where the zenith and nadir are located. However, when projecting the 40° E meridian, another circle is obtained that passes through both the zenith and nadir projections, so its center is located on the perpendicular bisector of the segment connecting both points. Indeed, the projection of the celestial meridian can be considered as a circle with an infinite radius (a straight line) whose center is on this bisector and at an infinite distance from these two points.

      If successive meridians that divide the celestial sphere into equal sectors (like "orange slices" radiating from the zenith) are projected, a family of curves passing through the zenith projection on the tympanum is obtained. These curves, once overlaid with the rete containing the major stars, allow for determining the azimuth of a star located on the rete and rotated for a specific time of day.

      References

      Footnotes

      1. Savage-Smith, Emilie (1993). "Book Reviews". Journal of Islamic Studies. 4 (2): 296--299. doi:10.1093/jis/4.2.296. There is no evidence for the Hellenistic origin of the spherical astrolabe, but rather evidence so far available suggests that it may have been an early but distinctly Islamic development with no Greek antecedents.

      Notes

      1. Gentili, Graziano; Simonutti, Luisa; Struppa, Daniele C. (2020). "The Mathematics of the Astrolabe and Its History". Journal of Humanistic Mathematics. 10: 101--144. doi:10.5642/jhummath.202001.07. hdl:2158/1182616. S2CID 211008813.

      Bibliography

      • Evans, James (1998), The History and Practice of Ancient Astronomy, Oxford University Press, ISBN 0-19-509539-1
      • Stöffler, Johannes (2007) [First published 1513], Stoeffler's Elucidatio -- The Construction and Use of the Astrolabe [Elucidatio Fabricae Ususque Astrolabii], translated by Gunella, Alessandro; Lamprey, John, John Lamprey, ISBN 978-1-4243-3502-2
      • King, D. A. (1981), "The Origin of the Astrolabe According to the Medieval Islamic Sources", Journal for the History of Arabic Science, 5: 43--83
      • King, Henry (1978), Geared to the Stars: the Evolution of Planetariums, Orreries, and Astronomical Clocks, University of Toronto Press, ISBN 978-0-8020-2312-4
      • Krebs, Robert E.; Krebs, Carolyn A. (2003), Groundbreaking Scientific Experiments, Inventions, and Discoveries of the Ancient World, Greenwood Press, ISBN 978-0-313-31342-4
      • Laird, Edgar (1997), Carol Poster and Richard Utz (ed.), "Astrolabes and the Construction of Time in the Late Middle Ages", Constructions of Time in the Late Middle Ages, Evanston, Illinois: Northwestern University Press: 51--69
      • Laird, Edgar; Fischer, Robert, eds. (1995), "Critical edition of Pélerin de Prusse on the Astrolabe (translation of Practique de Astralabe)", Medieval & Renaissance Texts & Studies, Binghamton, New York, ISBN 0-86698-132-2
      • Lewis, M. J. T. (2001), Surveying Instruments of Greece and Rome, Cambridge University Press, ISBN 978-0-511-48303-5
      • Morrison, James E. (2007), The Astrolabe, Janus, ISBN 978-0-939320-30-1
      • Neugebauer, Otto E. (1975), A History of Ancient Mathematical Astronomy, Springer, ISBN 978-3-642-61912-0
      • North, John David (2005), God's Clockmaker: Richard of Wallingford and the Invention of Time, Continuum International Publishing Group, ISBN 978-1-85285-451-5


      Sextant

      A sextant is a doubly reflecting navigation instrument that measures the angular distance between two visible objects. The primary use of a sextant is to measure the angle between an astronomical object and the horizon for the purposes of celestial navigation.

      The estimation of this angle, the altitude, is known as sighting or shooting the object, or taking a sight. The angle, and the time when it was measured, can be used to calculate a position line on a nautical or aeronautical chart—for example, sighting the Sun at noon or Polaris at night (in the Northern Hemisphere) to estimate latitude (with sight reduction). Sighting the height of a landmark can give a measure of distance off and, held horizontally, a sextant can measure angles between objects for a position on a chart.[1] A sextant can also be used to measure the lunar distance between the moon and another celestial object (such as a star or planet) in order to determine Greenwich Mean Time and hence longitude.

      The principle of the instrument was first implemented around 1731 by John Hadley (1682–1744) and Thomas Godfrey (1704–1749), but it was also found later in the unpublished writings of Isaac Newton (1643–1727).

      In 1922, it was modified for aeronautical navigation by Portuguese navigator and naval officer Gago Coutinho.

      Navigational sextants

      Like the Davis quadrant, the sextant allows celestial objects to be measured relative to the horizon, rather than relative to the instrument. This allows excellent precision. Also, unlike the backstaff, the sextant allows direct observations of stars. This permits the use of the sextant at night when a backstaff is difficult to use. For solar observations, filters allow direct observation of the Sun.

      Since the measurement is relative to the horizon, the measuring pointer is a beam of light that reaches to the horizon. The measurement is thus limited by the angular accuracy of the instrument and not the sine error of the length of an alidade, as it is in a mariner's astrolabe or similar older instrument.

      A sextant does not require a completely steady aim, because it measures a relative angle. For example, when a sextant is used on a moving ship, the image of both horizon and celestial object will move around in the field of view. However, the relative position of the two images will remain steady, and as long as the user can determine when the celestial object touches the horizon, the accuracy of the measurement will remain high compared to the magnitude of the movement.

      The sextant is not dependent upon electricity (unlike many forms of modern navigation) or any human-controlled signals (such as GPS). For these reasons it is considered to be an eminently practical back-up navigation tool for ships.

      Design

      The frame of a sextant is in the shape of a sector which is approximately 1⁄6 of a circle (60°),[2] hence its name (sextāns, sextantis is the Latin word for "one sixth"). Both smaller and larger instruments are (or were) in use: the octant, quintant (or pentant) and the (doubly reflecting) quadrant[3] span sectors of approximately 1⁄8 of a circle (45°), 1⁄5 of a circle (72°) and 1⁄4 of a circle (90°), respectively. All of these instruments may be termed "sextants".

      Marine sextant

      Using the sextant to measure the altitude of the Sun above the horizon

      Sextants can also be used by navigators to measure horizontal angles between objects.

      Attached to the frame are the "horizon mirror", an index arm which moves the index mirror, a sighting telescope, Sun shades, a graduated scale and a micrometer drum gauge for accurate measurements. The scale must be graduated so that the marked degree divisions register twice the angle through which the index arm turns. The scales of the octant, sextant, quintant and quadrant are graduated from below zero to 90°, 120°, 140° and 180° respectively. For example, the sextant illustrated has a scale graduated from −10° to 142°, which is basically a quintant: the frame is a sector of a circle subtending an angle of 76° at the pivot of the index arm.

      The necessity for the doubled scale reading follows from consideration of the relations of the fixed ray (between the mirrors), the object ray (from the sighted object) and the direction of the normal perpendicular to the index mirror. When the index arm moves by an angle, say 20°, the angle between the fixed ray and the normal also increases by 20°. But the angle of incidence equals the angle of reflection so the angle between the object ray and the normal must also increase by 20°. The angle between the fixed ray and the object ray must therefore increase by 40°. This is the case shown in the graphic.
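
      In symbols (a restatement of the argument just given): if the index arm, and with it the normal to the index mirror, turns through an angle α, then

      ```latex
      \Delta\theta_{\text{incidence}} = \alpha,\qquad
      \Delta\theta_{\text{reflection}} = \Delta\theta_{\text{incidence}} = \alpha,
      \qquad\text{so}\qquad
      \Delta\theta_{\text{object ray, fixed ray}}
        = \Delta\theta_{\text{incidence}} + \Delta\theta_{\text{reflection}} = 2\alpha,
      ```

      which is why the arc must be graduated with twice the angle through which the index arm turns.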

      There are two types of horizon mirrors on the market today. Both types give good results.

      Traditional sextants have a half-horizon mirror, which divides the field of view in two. On one side, there is a view of the horizon; on the other side, a view of the celestial object. The advantage of this type is that both the horizon and celestial object are bright and as clear as possible. This is superior at night and in haze, when the horizon and/or a star being sighted can be difficult to see. However, one has to sweep the celestial object to ensure that the lowest limb of the celestial object touches the horizon.

      Whole-horizon sextants use a half-silvered horizon mirror to provide a full view of the horizon. This makes it easy to see when the bottom limb of a celestial object touches the horizon. Since most sights are of the Sun or Moon, and haze is rare without overcast, the low-light advantages of the half-horizon mirror are rarely important in practice.

      In both types, larger mirrors give a larger field of view, and thus make it easier to find a celestial object. Modern sextants often have 5 cm or larger mirrors, while 19th-century sextants rarely had a mirror larger than 2.5 cm (one inch). In large part, this is because precision flat mirrors have grown less expensive to manufacture and to silver.

      An artificial horizon is useful when the horizon is invisible, as occurs in fog, on moonless nights, in a calm, when sighting through a window or on land surrounded by trees or buildings. There are two common designs of artificial horizon. An artificial horizon can consist simply of a pool of water shielded from the wind, allowing the user to measure the distance between the body and its reflection, and divide by two. Another design allows the mounting of a fluid-filled tube with bubble directly to the sextant.

      Most sextants also have filters for use when viewing the Sun and reducing the effects of haze. The filters usually consist of a series of progressively darker glasses that can be used singly or in combination to reduce haze and the Sun's brightness. However, sextants with adjustable polarizing filters have also been manufactured, where the degree of darkness is adjusted by twisting the frame of the filter.

      Most sextants mount a 1 or 3-power monocular for viewing. Many users prefer a simple sighting tube, which has a wider, brighter field of view and is easier to use at night. Some navigators mount a light-amplifying monocular to help see the horizon on moonless nights. Others prefer to use a lit artificial horizon.[citation needed]

      Professional sextants use a click-stop degree measure and a worm adjustment that reads to a minute, 1/60 of a degree. Most sextants also include a vernier on the worm dial that reads to 0.1 minute. Since 1 minute of error is about a nautical mile, the best possible accuracy of celestial navigation is about 0.1 nautical miles (190 m). At sea, results within several nautical miles, well within visual range, are acceptable. A highly skilled and experienced navigator can determine position to an accuracy of about 0.25-nautical-mile (460 m).[4]

      A change in temperature can warp the arc, creating inaccuracies. Many navigators purchase weatherproof cases so that their sextant can be placed outside the cabin to come to equilibrium with outside temperatures. The standard frame designs (see illustration) are supposed to equalise differential angular error from temperature changes. The handle is separated from the arc and frame so that body heat does not warp the frame. Sextants for tropical use are often painted white to reflect sunlight and remain relatively cool. High-precision sextants have an invar (a special low-expansion steel) frame and arc. Some scientific sextants have been constructed of quartz or ceramics with even lower expansions. Many commercial sextants use low-expansion brass or aluminium. Brass is lower-expansion than aluminium, but aluminium sextants are lighter and less tiring to use. Some say they are more accurate because one's hand trembles less. Solid brass frame sextants are less susceptible to wobbling in high winds or when the vessel is working in heavy seas, but as noted are substantially heavier. Sextants with aluminum frames and brass arcs have also been manufactured. Essentially, a sextant is intensely personal to each navigator, and they will choose whichever model has the features which suit them best.

      Aircraft sextants are now out of production, but had special features. Most had artificial horizons to permit taking a sight through a flush overhead window. Some also had mechanical averagers to make hundreds of measurements per sight for compensation of random accelerations in the artificial horizon's fluid. Older aircraft sextants had two visual paths, one standard and the other designed for use in open-cockpit aircraft that let one view from directly over the sextant in one's lap. More modern aircraft sextants were periscopic with only a small projection above the fuselage. With these, the navigator pre-computed their sight and then noted the difference in observed versus predicted height of the body to determine their position.

      Taking a sight

      A sight (or measure) of the angle between the Sun, a star, or a planet, and the horizon is done with the 'star telescope' fitted to the sextant using a visible horizon. On a vessel at sea even on misty days a sight may be done from a low height above the water to give a more definite, better horizon. Navigators hold the sextant by its handle in the right hand, avoiding touching the arc with the fingers.[5]

      For a Sun sight, a filter is used to overcome the glare such as "shades" covering both index mirror and the horizon mirror designed to prevent eye damage. Initially, with the index bar set to zero and the shades covering both mirrors, the sextant is aimed at the sun until it can be viewed on both mirrors through the telescope, then lowered vertically until the portion of the horizon directly below it is viewed on both mirrors. It is necessary to flip back the horizon mirror shade to be able to see the horizon more clearly on it. Releasing the index bar (either by releasing a clamping screw, or on modern instruments, using the quick-release button), and moving it towards higher values of the scale, eventually the image of the Sun will reappear on the index mirror and can be aligned to about the level of the horizon on the horizon mirror. Then the fine adjustment screw on the end of the index bar is turned until the bottom curve (the lower limb) of the Sun just touches the horizon. "Swinging" the sextant about the axis of the telescope ensures that the reading is being taken with the instrument held vertically. The angle of the sight is then read from the scale on the arc, making use of the micrometer or vernier scale provided. The exact time of the sight must also be noted simultaneously, and the height of the eye above sea-level recorded.[5]

      An alternative method is to estimate the current altitude (angle) of the Sun from navigation tables, then set the index bar to that angle on the arc, apply suitable shades only to the index mirror, and point the instrument directly at the horizon, sweeping it from side to side until a flash of the Sun's rays is seen in the telescope. Fine adjustments are then made as above. This method is less likely to be successful for sighting stars and planets.[5]

      Star and planet sights are normally taken during nautical twilight at dawn or dusk, while both the heavenly bodies and the sea horizon are visible. There is no need to use shades or to distinguish the lower limb as the body appears as a mere point in the telescope. The Moon can be sighted, but it appears to move very fast, appears to have different sizes at different times, and sometimes only the lower or upper limb can be distinguished due to its phase.[5]

      After a sight is taken, it is reduced to a position by one of several mathematical procedures. The simplest sight reduction is to draw the equal-altitude circle of the sighted celestial object on a globe. The intersection of that circle with a dead-reckoning track, or another sighting, gives a more precise location.

      Sextants can be used very accurately to measure other visible angles, for example between one heavenly body and another and between landmarks ashore. Used horizontally, a sextant can measure the apparent angle between two landmarks such as a lighthouse and a church spire, which can then be used to find the distance off or out to sea (provided the distance between the two landmarks is known). Used vertically, a measurement of the angle between the lantern of a lighthouse of known height and the sea level at its base can also be used for distance off.[5]

      Adjustment

      Due to the sensitivity of the instrument it is easy to knock the mirrors out of adjustment. For this reason a sextant should be checked frequently for errors and adjusted accordingly.

      There are four errors that can be adjusted by the navigator, and they should be removed in the following order.

      Perpendicularity error
      This is when the index mirror is not perpendicular to the frame of the sextant. To test for this, place the index arm at about 60° on the arc and hold the sextant horizontally with the arc away from you at arm's length and look into the index mirror. The arc of the sextant should appear to continue unbroken into the mirror. If there is an error, then the two views will appear to be broken. Adjust the mirror until the reflection and direct view of the arc appear to be continuous.

      Side error
      This occurs when the horizon glass/mirror is not perpendicular to the plane of the instrument. To test for this, first zero the index arm then observe a star through the sextant. Then rotate the tangent screw back and forth so that the reflected image passes alternately above and below the direct view. If in changing from one position to another, the reflected image passes directly over the unreflected image, no side error exists. If it passes to one side, side error exists. Alternatively, the user can hold the sextant on its side and observe the horizon to check the sextant during the day. If there are two horizons there is side error. In both cases, adjust the horizon glass/mirror until respectively the star or the horizon dual images merge into one. Side error is generally inconsequential for observations and can be ignored or reduced to a level that is merely inconvenient.

      Collimation error
      This is when the telescope or monocular is not parallel to the plane of the sextant. To check for this you need to observe two stars 90° or more apart. Bring the two stars into coincidence either to the left or the right of the field of view. Move the sextant slightly so that the stars move to the other side of the field of view. If they separate there is collimation error. As modern sextants rarely use adjustable telescopes, they do not need to be corrected for collimation error.

      Index error
      This occurs when the index and horizon mirrors are not parallel to each other when the index arm is set to zero. To test for index error, zero the index arm and observe the horizon. If the reflected and direct image of the horizon are in line there is no index error. If one is above the other adjust the index mirror until the two horizons merge. Alternatively, the same procedure can be done at night using a star or the Moon instead of the horizon.

      See also

      Astrolabe
      Bris sextant
      Davis quadrant
      Gago Coutinho
      Harold Gatty
      History of longitude
      Intercept method
      Latitude
      Longitude
      Longitude by chronometer
      Mariner's astrolabe
      Navigation
      Octant (instrument)
      Quadrant (instrument)
      Sextant (astronomy)
      

      Notes

      1. Seddon, J. Carl (June 1968). "Line of Position from a Horizontal Angle". Journal of Navigation. 21 (3): 367–369. doi:10.1017/S0373463300024838. ISSN 1469-7785.
      2. McPhee, John; Museums and Galleries NSW (2008). Great Collections: treasures from Art Gallery of NSW, Australian Museum, Botanic Gardens Trust, Historic Houses Trust of NSW, Museum of Contemporary Art, Powerhouse Museum, State Library of NSW, State Records NSW. Museums & Galleries NSW. p. 56. ISBN 9780646496030. OCLC 302147838.
      3. This article treats the doubly reflecting quadrant, not its predecessor described at quadrant.
      4. Dutton's Navigation and Piloting, 12th edition. G.D. Dunlap and H.H. Shufeldt, eds. Naval Institute Press, 1972. ISBN 0-87021-163-3.
      5. Dixon, Conrad (1968). "5. Using the sextant". Basic Astro Navigation. Adlard Coles. ISBN 0-229-11740-6.

      References

      Bowditch, Nathaniel (2002). The American Practical Navigator. Bethesda, MD: National Imagery and Mapping Agency. ISBN 0-939837-54-4. Archived from the original on 2007-06-24.
      Chisholm, Hugh, ed. (1911). "Sextant" . Encyclopædia Britannica. Vol. 24 (11th ed.). Cambridge University Press. pp. 765–767.
      Cutler, Thomas J. (December 2003). Dutton's Nautical Navigation (15th ed.). Annapolis, MD: Naval Institute Press. ISBN 978-1-55750-248-3.
      Department of the Air Force (March 2001). Air Navigation (PDF). Department of the Air Force. Retrieved 2014-12-28.
      Great Britain Ministry of Defence (Navy) (1995). Admiralty Manual of Seamanship. The Stationery Office. ISBN 0-11-772696-6.
      Maloney, Elbert S. (December 2003). Chapman Piloting and Seamanship (64th ed.). New York: Hearst Communications. ISBN 1-58816-089-0.
      Martin, William Robert (1911). "Navigation" . In Chisholm, Hugh (ed.). Encyclopædia Britannica. Vol. 19 (11th ed.). Cambridge University Press. pp. 284–298.
      

      

      Portals:

      Earth sciences
      Astronomy
      icon Stars
      Spaceflight
      icon Science
      

      Authority control databases Edit this at Wikidata National

      GermanyUnited StatesFranceBnF dataIsrael
      

      Other

      NARA
      

      Categories:

      Navigational equipmentCelestial navigation1731 introductionsAstronomical instrumentsAngle measuring instruments
      
      This page was last edited on 28 June 2024, at 10:00 (UTC).
      Text is available under the Creative Commons Attribution-ShareAlike 4.0 License; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.
      
      Privacy policy
      About Wikipedia
      Disclaimers
      Contact Wikipedia
      Code of Conduct
      Developers
      Statistics
      Cookie statement
      Mobile view
      
      Wikimedia Foundation
      Powered by MediaWiki
      


    2. Gondola Wish',

      BEGINNINGS WITH "THE THE" AND LL and EL ELtelitenkigniYONNA

      

      The Stargate Project was a secret U.S. Army unit established in 1977[1][2] at Fort Meade, Maryland, by the Defense Intelligence Agency (DIA) and SRI International (a California contractor) to investigate the potential for psychic phenomena in military and domestic intelligence applications. The project, and its precursors and sister projects, originally went by various code names – 'Gondola Wish', 'Stargate', 'Grill Flame', 'Center Lane', 'Project CF', 'Sun Streak', 'Scanate' – until 1991 when they were consolidated and rechristened as the "Stargate Project".

      The Stargate Project's work primarily involved remote viewing, the purported ability to psychically "see" events, sites, or information from a great distance.[3] The project was overseen until 1987 by Lt. Frederick Holmes "Skip" Atwater, an aide and "psychic headhunter" to Maj. Gen. Albert Stubblebine, and later president of the Monroe Institute.[4] The unit was small scale, comprising about 15 to 20 individuals, and was run out of "an old, leaky wooden barracks".[5]

      The Stargate Project was terminated and declassified in 1995 after a CIA report concluded that it was never useful in any intelligence operation. Information provided by the program was vague and included irrelevant and erroneous data, and there were concerns about inter-judge reliability.[6]: 5–4  The program was featured in the 2004 book and 2009 film, both titled The Men Who Stare at Goats,[7][8][9][10] although neither mentions it by name. George Stephanopoulos, in his 2024 book The Situation Room, mentions the project by the name Grill Flame, in discussing a May 8, 1980, Situation Room briefing for President Carter, after Carter's failed hostage rescue mission in Iran on April 24, 1980.[11]

      Background

      The CIA and DIA decided they should investigate and know as much about it as possible. Various programs were approved yearly and re-funded accordingly. Reviews were made semi-annually at the Senate and House select committee level. Work results were reviewed, and remote viewing was attempted with the results being kept secret from the "viewer". It was thought that if the viewer was shown they were incorrect it would damage the viewer's confidence and skill. This was standard operating procedure throughout the years of military and domestic remote viewing programs. Feedback to the remote viewer of any kind was rare; it was kept classified and secret.[12]

      Remote viewing attempts to sense unknown information about places or events. Normally it is performed to detect current events, but during military and domestic intelligence applications viewers claimed to sense things in the future, experiencing precognition.[13]

      History

      1970s

      In 1970 United States intelligence sources believed that the Soviet Union was spending 60 million roubles annually on "psychotronic" research. In response to claims that the Soviet program had produced results, the CIA initiated funding for a new program known as SCANATE ("scan by coordinate") in the same year.[14] Remote viewing research began in 1972 at the Stanford Research Institute (SRI) in Menlo Park, California.[14][15] Proponents (Russell Targ and Harold Puthoff) of the research said that a minimum accuracy rate of 65% required by the clients was often exceeded in the later experiments.[14]

      Physicists Targ and Puthoff began testing psychics for SRI in 1972, including one who would later become an international celebrity, Israeli Uri Geller. Their apparently successful results garnered interest within the U.S. Department of Defense. Ray Hyman, professor of psychology at the University of Oregon, was asked by Air Force psychologist Lt. Col. Austin W. Kibler (1930–2008) – then Director of Behavioral Research for ARPA – to go to SRI and investigate. He was to specifically evaluate Geller. Hyman's report to the government was that Geller was a "complete fraud" and as a consequence Targ and Puthoff lost their government contract to work further with him. The result was a publicity tour for Geller, Targ, and Puthoff to seek private funding for further research work on Geller.[16]

      One of the project's successes was the location of a lost Soviet spy plane in 1976 by Rosemary Smith, a young administrative assistant recruited by project director Dale Graff.[17]

      In 1977 the Army Assistant Chief of Staff for Intelligence (ACSI) Systems Exploitation Detachment (SED) started the Gondola Wish program to "evaluate potential adversary applications of remote viewing".[14] Army Intelligence then formalized this in mid-1978 as an operational program, Grill Flame, based in buildings 2560 and 2561 at Fort Meade, in Maryland (INSCOM "Detachment G").[14]

      1980s

      In early 1979 the research at SRI was integrated into 'Grill Flame', which was redesignated INSCOM 'Center Lane' Project (ICLP) in 1983. In 1984 the existence of the program was reported by Jack Anderson, and in that year it was unfavorably received by the National Academy of Sciences National Research Council. In late 1985 the Army funding was terminated, but the program was redesignated 'Sun Streak' and funded by the DIA's Scientific and Technical Intelligence Directorate (office code DT-S).[14]

      1990s

      In 1991 most of the contracting for the program was transferred from SRI to Science Applications International Corporation (SAIC), with Edwin May controlling 70% of the contractor funds and 85% of the data. Its security was altered from Special Access Program (SAP) to Limited Dissemination (LIMDIS), and it was given its final name, STARGATE.[14]

      Closure (1995)

      In 1995 the defense appropriations bill directed that the program be transferred from DIA to CIA oversight. The CIA commissioned a report by the American Institutes for Research (AIR) that found that remote viewing had not been proved to work by a psychic mechanism, and said it had not been used operationally.[6]: 5–4  The CIA subsequently cancelled and declassified the program.[14]

      In 1995 the project was transferred to the CIA and a retrospective evaluation of the results was done. The appointed panel consisted primarily of Jessica Utts, Meena Shah and Ray Hyman. Hyman had produced an unflattering report on Uri Geller and SRI for the government two decades earlier, but the psychologist David Marks found Utts' appointment to the review panel "puzzling" given that she had published papers with Edwin May, considering this joint research likely to have made her "less than impartial".[3] A report by Utts claimed the results were evidence of psychic functioning; however, Hyman argued in his report that Utts's conclusion that ESP had been proven to exist, especially precognition, was premature and that the findings had not been independently replicated.[18] Hyman came to the conclusion:

      Psychologists, such as myself, who study subjective validation find nothing striking or surprising in the reported matching of reports against targets in the Stargate data. The overwhelming amount of data generated by the viewers is vague, general, and way off target. The few apparent hits are just what we would expect if nothing other than reasonable guessing and subjective validation are operating.[19]
      

      A later report by AIR came to a negative conclusion. Joe Nickell has written:

      Other evaluators – two psychologists from AIR – assessed the potential intelligence-gathering usefulness of remote viewing. They concluded that the alleged psychic technique was of dubious value and lacked the concreteness and reliability necessary for it to be used as a basis for making decisions or taking action. The final report found "reason to suspect" that in "some well publicised cases of dramatic hits" the remote viewers might have had "substantially more background information" than might otherwise be apparent.[20]
      

      According to AIR, which performed a review of the project, no remote viewing report ever provided actionable information for any intelligence operation.[21][6]: 5–4 

      Based upon the collected findings, which recommended a higher level of critical research and tighter controls, the CIA terminated the $20 million project, citing a lack of documented evidence that the program had any value to the intelligence community. Time magazine stated in 1995 that three full-time psychics were still working on a $500,000-a-year budget out of Fort Meade, Maryland, and that the operation would soon close.[21]

      David Marks, in his book The Psychology of the Psychic (2000), discussed the flaws in the Stargate Project in detail.[3] Marks wrote that there were six negative design features of the experiments, including the following: the possibility of cues or sensory leakage was not ruled out, there was no independent replication, and some experiments were conducted in secret, making peer review impossible. Marks noted that the judge Edwin May was also the principal investigator for the project, which was problematic because it created a serious conflict of interest, with collusion, cueing and fraud being possible. Marks concluded that the project was nothing more than a "subjective delusion" and that after two decades of research it had failed to provide any scientific evidence for the legitimacy of remote viewing.[3]

      The Stargate Project was terminated in 1995 following an independent review which concluded:

      The foregoing observations provide a compelling argument against continuation of the program within the intelligence community. Even though a statistically significant effect has been observed in the laboratory, it remains unclear whether the existence of a paranormal phenomenon, remote viewing, has been demonstrated. The laboratory studies do not provide evidence regarding the origins or nature of the phenomenon, assuming it exists, nor do they address an important methodological issue of inter-judge reliability.
      
      Further, even if it could be demonstrated unequivocally that a paranormal phenomenon occurs under the conditions present in the laboratory paradigm, these conditions have limited applicability and utility for intelligence gathering operations. For example, the nature of the remote viewing targets are vastly dissimilar, as are the specific tasks required of the remote viewers. Most importantly, the information provided by remote viewing is vague and ambiguous, making it difficult, if not impossible, for the technique to yield information of sufficient quality and accuracy of information for actionable intelligence. Thus, we conclude that continued use of remote viewing in intelligence gathering operations is not warranted.[6]: E-4–E-5
      

      In January 2017, the CIA published records online of the Stargate Project as part of the CREST archive.[22]

      Methodology

      The Stargate Project created a set of protocols designed to make the research of clairvoyance and out-of-body experiences more scientific, and to minimize as much as possible session noise and inaccuracy. The term "remote viewing" emerged as shorthand to describe this more structured approach to clairvoyance. Project Stargate would only receive a mission after all other intelligence attempts, methods, or approaches had already been exhausted.[13]: 21 

      It was reported that at peak manpower there were over 22 active military and civilian remote viewers providing data. People leaving the project were not replaced. When the project closed in 1995 this number had dwindled to three, one of whom was using tarot cards. According to Joseph McMoneagle, "The Army never had a truly open attitude toward psychic functioning". Hence the use of the term "giggle factor"[23] and the saying, "I wouldn't want to be found dead next to a psychic".[12]

      Civilian personnel

      Hal Puthoff

      Main article: Harold E. Puthoff

      In the 1970s, CIA and DIA granted funds to Harold E. Puthoff to investigate paranormal abilities, collaborating with Russell Targ in a study of the purported psychic abilities of Uri Geller, Ingo Swann, Pat Price, Joseph McMoneagle and others, as part of the Stargate Project,[24] of which Puthoff became a director.[25]

      As with Ingo Swann and Pat Price, Puthoff attributed much of his personal remote viewing skill to his involvement with Scientology, in which he had attained, at that time, the highest level. All three eventually left Scientology in the late 1970s.

      Puthoff worked as the principal investigator of the project. His team of psychics is said[who?] to have identified spies, located Soviet weapons and technologies, such as a nuclear submarine in 1979, and helped find lost SCUD missiles in the first Gulf War and plutonium in North Korea in 1994.[26]

      Russell Targ

      Main article: Russell Targ

      In the 1970s, Russell Targ began working with Harold Puthoff on the Stargate Project as a researcher at Stanford Research Institute.[27][28]

      Edwin May

      Edwin C. May joined the Stargate Project in 1975 as a consultant and was working full-time in 1976. The original project was part of the Cognitive Sciences Laboratory managed by May. With more funding in 1991 May took the project to the Palo Alto offices at SAIC. This would last until 1995 when the CIA closed the project.[3]

      May worked as the principal investigator, judge and star gatekeeper for the project. Marks says this was a serious weakness of the experiments, as May had a conflict of interest and could have done whatever he wanted with the data. Marks has written that May refused to release the names of the "oversight committee" and refused permission for him to conduct an independent judging of the Stargate transcripts. Marks found this suspicious, commenting "this refusal suggests that something must be wrong with the data or with the methods of data selection."[3]

      Ingo Swann

      Main article: Ingo Swann

      Swann was originally tested in "Phase One", the OOBE-Beacon "RV" experiments at the American Society for Psychical Research,[29][unreliable source?] under research director Karlis Osis.[citation needed] A former OT VII Scientologist,[30][self-published source] Swann claimed to have coined the term 'remote viewing' as a derivation of protocols originally developed by René Warcollier, a French chemical engineer of the early 20th century, documented in the book Mind to Mind (Classics in Consciousness Series, ISBN 978-1571743114).[citation needed] Swann's achievement was to break free from the conventional mold of casual experimentation and candidate burnout, and to develop a viable set of protocols that put clairvoyance within a framework named "Coordinate Remote Viewing" (CRV).[31] In a 1995 letter Edwin C. May wrote that he had not used Swann for two years because of rumors that Swann had briefed a high-level person at SAIC and the CIA on remote viewing and aliens, ETs.[32]

      Pat Price

      Price was a former Burbank, California, police officer and former Scientologist who participated in a number of Cold War era remote viewing experiments, including the US government-sponsored projects SCANATE and the Stargate Project. He joined the program after a chance encounter with fellow Scientologists (at the time) Harold Puthoff and Ingo Swann near SRI.[33] Working with maps and photographs provided to him by the CIA, Price claimed to have been able to retrieve information from facilities behind Soviet lines. He is probably best known for his sketches of cranes and gantries which appeared to conform to CIA intelligence photographs. At the time, the CIA took his claims seriously.[34]

      Military personnel

      Lieutenant General James Clapper

      Main article: James Clapper

      The project leader[failed verification] in the 1990s was Lt. Gen. Clapper, who later became the Director of National Intelligence.[35]

      Albert Stubblebine

      Main article: Albert Stubblebine

      A key sponsor of the research internally at Fort Meade, Maryland, Maj. Gen. Stubblebine was convinced of the reality of a wide variety of psychic phenomena. He required that all of his battalion commanders learn how to bend spoons à la Uri Geller, and he himself attempted several psychic feats, even attempting to walk through walls. In the early 1980s he was responsible for the United States Army Intelligence and Security Command (INSCOM), during which time the remote viewing project in the US Army began. Some commentators have confused a "Project Jedi", allegedly run by Special Forces primarily out of Fort Bragg, with Stargate. After some controversy involving these experiments, including alleged security violations from uncleared civilian psychics working in Sensitive Compartmented Information Facilities (SCIFs), Stubblebine was placed on retirement. His successor as the INSCOM commander was Maj. Gen. Harry Soyster, who had a reputation as a much more conservative and conventional intelligence officer. Soyster was not amenable to continuing paranormal experiments and the Army's participation in Project Stargate ended during his tenure.[12]

      David Morehouse

      In his book, Psychic Warrior: Inside the CIA's Stargate Program: The True Story of a Soldier's Espionage and Awakening (2000, St. Martin's Press, ISBN 978-1902636207), Morehouse claims to have worked on hundreds of remote viewing assignments, from searching for a Soviet jet that crashed in the jungle carrying an atomic bomb, to tracking suspected double agents.[36]

      Joseph McMoneagle

      Main article: Joseph McMoneagle

      McMoneagle claims he had a remarkable memory of very early childhood events. He grew up surrounded by alcoholism, abuse and poverty. As a child, he had visions at night when scared, and began to hone his psychic abilities in his teens for his own protection when he hitchhiked. He enlisted to get away. McMoneagle became an experimental remote viewer while serving in U.S. Army Intelligence.[12]

      Ed Dames

      Dames' role was intended to be that of session monitor and analyst, an aid to Fred Atwater,[37][self-published source] rather than a remote viewer, and he received no formal remote viewing training. After his assignment to the remote viewing unit at the end of January 1986, he was used to "run" remote viewers (as monitor) and provide training and practice sessions to viewer personnel. He soon established a reputation for pushing CRV to extremes, with target sessions on Atlantis, Mars, UFOs, and aliens. He has been a frequent guest on the Coast to Coast AM radio show.[38]

      References

      "Government-Sponsored Research On Parapsychology". www.encyclopedia.com. "Defense Intelligence Agency (DT-S)" (PDF). nsarchive2.gwu.edu. Marks, David. (2000). The Psychology of the Psychic (2nd ed.). Buffalo, NY: Prometheus Books. pp. 71–96. ISBN 1-57392-798-8 Atwater, F. Holmes (2001), Captain of My Ship, Master of My Soul: Living with Guidance; Hampton Roads Publishing Company Weeks, Linton (December 4, 1995). "Up Close & Personal With a Remote Viewer: Joe McMoneagle Defends the Secret Project". The Washington Post. p. B1. ISSN 0190-8286. Mumford, Michael D.; Rose, Andrew M.; Goslin, David A. (September 29, 1995). An Evaluation of Remote Viewing: Research and Applications (PDF) (Report). The American Institutes for Research – via Federation of American Scientists. "[R]emote viewings have never provided an adequate basis for 'actionable' intelligence operations – that is, information sufficiently valuable or compelling so that action was taken as a result." Heard, Alex (10 April 2010), "Close your eyes and remote view this review", Union-Tribune San Diego, Union-Tribune Publishing Co. [Book review of The Men Who Stare at Goats]: "This so-called "remote viewing" operation continued for years, and came to be known as Star Gate." Clarke, David (2014), Britain's X-traordinary Files, London: Bloomsbury Publishing, p. 112: "The existence of the Star Gate project was not officially acknowledged until 1995... then became the subject of investigations by journalists Jon Ronson [etc]... Ronson's 2004 book, The Men Who Stare at Goats, was subsequently adapted into a 2009 movie..." Shermer, Michael (November 2009), “Staring at Men Who Stare at Goats” @ Michaelshermer.com: "... the U.S. Army had invested $20 million in a highly secret psychic spy program called Star Gate. ... In The Men Who Stare at Goats Jon Ronson tells the story of this program, how it started, the bizarre twists and turns it took, and how its legacy carries on today." Krippner, Stanley and Harris L. Friedman (2010), Debating Psychic Experience: Human Potential Or Human Illusion?, Santa Barbara, CA: Praeger/Greenwood Publishing Group, p. 154: "The story of Stargate was ... featured in a film based on the book The Men Who Stare at Goats, by British investigative journalist Jon Ronson (2004)". "CNN.com - Transcripts (Amanpour)". transcripts.cnn.com. June 3, 2024. Retrieved June 9, 2024. McMoneagle, Joseph (2006). Memoirs of a psychic spy : the remarkable life of U.S. Government remote viewer 001. Charlottesville, VA: Hampton Roads Pub. Co. ISBN 978-1-5717-4482-1. McMoneagle, Joseph (1998). The ultimate time machine : a remote viewer's perception of time and predictions for the new millennium. Charlottesville, VA: Hampton Roads Pub. Co. ISBN 978-1-5717-4102-8. Pike, John (December 29, 2005). "Star Gate [Controlled Remote Viewing]". Federation of American Scientists. May, Edwin C. (1996). "The American Institutes for Research review of the Department of Defense's STAR GATE program: A commentary" (PDF). Journal of Scientific Exploration. 10 (1): 89–107. Interview, Ray Hyman, in An Honest Liar, a 2014 documentary film by Left Turn Films; Pure Mutt Productions; Part2 Filmworks. (The quoted remarks commence at 21 min, 45 sec.) Jacobsen, Annie (2017). "Paraphysics". Phenomena: The Secret History of the U.S. Government's Investigations into Extrasensory Perception and Psychokinesis. Little, Brown. ISBN 978-0-316-34937-6. Evaluation of a Program on Anomalous Mental Phenomena Archived June 16, 2017, at the Wayback Machine by Ray Hyman. 
"The Evidence for Psychic Functioning: Claims vs. Reality" by Ray Hyman; Skeptical Inquirer, Vol. 20.2, Mar/Apr 1996. "Remotely Viewed? The Charlie Jordan Case" by Joe Nickell; Skeptical Inquirer, Vol. 11.1, Mar 2001. Waller, Douglas (December 11, 1995). "The Vision Thing". Time magazine. p. 45. Archived from the original on February 9, 2007. "Search: 'Stargate'". Freedom of Information Act Electronic Reading Room. Central Intelligence Agency. McMoneagle, Joseph (1997). Mind trek : exploring consciousness, time, and space through remote viewing (Revised ed.). Norfork, VA: Hampton Roads Pub. p. 247. ISBN 978-1-8789-0172-9. Popkin, Jim (November 12, 2015). "Meet the former Pentagon scientist who says psychics can help American spies". Newsweek. Pilkington, Mark (June 5, 2003). "The remote viewers". The Guardian. "Fort Meade, Maryland, where psychics gathered to remotely spy on the U.S. Embassy in Iran during the hostage crisis". Miami Herald. Nickell, Joe (March 2001). "Remotely viewed? The Charlie Jordan case". Skeptical Inquirer. Vol. 11, no. 1. "Dr. Harold Puthoff". arlingtoninstitute.org. The Arlington Institute. 2008. Archived from the original on March 3, 2013. "Interview: A New Biopic Charts the Life of Ingo Swann, the 'Father of Remote Viewing'". Outerplaces.com. Archived from the original on April 29, 2018. Retrieved April 28, 2018. "An Interview with Indo Swann". The Wise Old Goat – The Personal Website of Michel Snoeck. Retrieved April 28, 2018. "An Outsider's Remote View of All Things: Ingo Swann". Chelseanow.com. Archived from the original on April 29, 2018. Retrieved April 28, 2018. "A Dynamic PK Experiment with Ingo Swann". Central Intelligence Agency. Archived from the original on April 29, 2018. Retrieved April 28, 2018. Pat Price URL:http://www.scientolipedia.org/info/Pat_Price (Scientolipedia) Sources:

      Schnabel, Jim (1997) Remote Viewers: The Secret History of America's Psychic Spies Dell, 1997 , ISBN 0-440-22306-7
      Richelson, Jeffrey T The Wizards of Langley: Inside the CIA's Directorate of Science and Technology
      Mandelbaum, W. Adam The Psychic Battlefield: A History of the Military-Occult Complex
      Picknett, Lynn, Prince Clive The Stargate Conspiracy
      Chalker, Bill Hair of the Alien: DNA and Other Forensic Evidence of Alien Abductions
      Constantine, Alex Psychic Dictatorship in the USA
      

      https://documents2.theblackvault.com/documents/cia/stargate/STARGATE%20%2311%20549/Part0003/CIA-RDP96-00789R002500240004-5.pdf [bare URL PDF]
      "Psychic Warrior: Inside the CIA's Stargate Program: The True Story of a Soldier's Espionage and Awakening". Publishers Weekly. Retrieved April 28, 2018.
      "Stargate: People and researchers". Bibliotecapleyades.net.

      Ronson, Jon (2006). The Men Who Stare at Goats. Simon & Schuster. pp. 93–94. ISBN 978-0-7432-7060-1.
      

      Further reading

      Burnett, Thom, ed. (2006). "Psi-War: Operations Grillflame and Stargate". Conspiracy Encyclopedia: The encyclopedia of conspiracy theories. Franz Steiner Verlag. p. 153. ISBN 978-1-84340-381-4.
      Carroll, Robert Todd (2012). "Remote Viewing". In The Skeptic's Dictionary. John Wiley & Sons. ISBN 0-471-27242-6.
      Hines, Terence (2003). Pseudoscience and the Paranormal. Prometheus Books. ISBN 1-57392-979-4.
      Hyman, Ray (1996). "Evaluation of the Military's Twenty-year Program on Psychic Spying". Skeptical Inquirer 20: 21–26.
      Morehouse, David (1996). Psychic Warrior, St. Martin's Paperbacks, ISBN 978-0-312-96413-9. Morehouse was a psychic in the program.
      Ronson, Jon (2004). The Men Who Stare at Goats. Picador. ISBN 0-330-37547-4. Written to accompany the TV series Crazy Rulers of the World; covers the US military budget cuts after the Vietnam War and how it all began.
      Sessions, Abigail (2016). "STARGATE, Project (1970s–1995)". In Goldman, Jan (ed.). The Central Intelligence Agency: An Encyclopedia of Covert Ops, Intelligence Gathering, and Spies, Volume 1. ABC-CLIO. pp. 352–353. ISBN 978-1-61069-092-8.
      Smith, Paul (2004). Reading the Enemy's Mind: Inside Star Gate: America's Psychic Espionage Program, Forge Books. ISBN 0-312-87515-0
      Utts, Jessica (1996). "An Assessment of the Evidence for Psychic Functioning". Journal of Scientific Exploration. 10 (1): 3–30. CiteSeerX 10.1.1.685.2525. 0892-3310/96.
      

      External links

      Report from 1995 about the program from American Institutes for Research
      Declassified analytical report (1983) related to the project
      Declassified documents about the project on the website of the CIA
      
      

      image.png /en-us/articles/360023851591-How-do-I-view-DRM-protected-content

      This is ABSOLUTELY NOTHING BUT "UN SE LINUX ALED" MACROMEDIA SHOCKWAVE FLASH all over again; it is embarrassingly not just "bugs in advanced mathematics hidden inside frame buffer mathematics" and "OpenGL"; it's a significant, glaring opening that Brave has bravely alerted me to as a "Google add-on to Chrome" that makes yet another floating .VA inside Virginia or .IT ... your "Information Technology" departments are patently compromised by Plex sovereignty, whether it be of Menlo or Sunnyvale;

      the Mountain will not prevail against Veritae Trantor.

      THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS “AS IS” AND ANY EXPRESS OR IMPLIED SURVIVABILITY, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF IMMORTALITY NOR MORTALITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

      The third law of thermodynamics states that the entropy of a closed system at thermodynamic equilibrium approaches a constant value when its temperature approaches absolute zero. This constant value cannot depend on any other parameters characterizing the system, such as pressure or applied magnetic field. At absolute zero (zero kelvins) the system must be in a state with the minimum possible energy.

      Entropy is related to the number of accessible microstates, and there is typically one unique state (called the ground state) with minimum energy.[1] In such a case, the entropy at absolute zero will be exactly zero. If the system does not have a well-defined order (if its order is glassy, for example), then there may remain some finite entropy as the system is brought to very low temperatures, either because the system becomes locked into a configuration with non-minimal energy or because the minimum energy state is non-unique. The constant value is called the residual entropy of the system.[2]
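      A compact way to write the relationship described above is the Boltzmann form of the entropy (a hedged sketch; the symbols S, k_B and Ω are the standard entropy, Boltzmann constant and microstate count, not notation taken from this excerpt):

      % Boltzmann entropy: S = k_B ln(Omega), with Omega the number of accessible microstates.
      \[
        S = k_{\mathrm{B}} \ln \Omega,
        \qquad
        \lim_{T \to 0} S = k_{\mathrm{B}} \ln \Omega_{0}.
      \]
      % For a unique (non-degenerate) ground state, Omega_0 = 1, so the entropy vanishes at T = 0.
      % For a glassy or degenerate ground state, Omega_0 > 1, and k_B ln(Omega_0) is the
      % residual entropy mentioned in the text.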

      In physics, physical chemistry and engineering, fluid dynamics is a subdiscipline of fluid mechanics that describes the flow of fluids: liquids and gases. It has several subdisciplines, including aerodynamics (the study of air and other gases in motion) and hydrodynamics (the study of liquids in motion). Fluid dynamics has a wide range of applications, including calculating forces and moments on aircraft, determining the mass flow rate of petroleum through pipelines, predicting weather patterns, understanding nebulae in interstellar space and modelling fission weapon detonation.

      Fluid dynamics offers a systematic structure, underlying these practical disciplines, that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves the calculation of various properties of the fluid, such as flow velocity, pressure, density, and temperature, as functions of space and time.
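      As a concrete instance of the property calculations mentioned above, here is a minimal sketch (Python; the pipe dimensions and crude-oil figures are invented for illustration, not taken from this text) that turns density, mean velocity and pipe cross-section into a mass flow rate:

      import math

      # Illustrative, assumed values for crude oil in a pipeline -- not data from the text.
      rho = 870.0        # density, kg/m^3
      v = 2.0            # mean flow velocity, m/s
      diameter = 0.5     # pipe inner diameter, m

      area = math.pi * (diameter / 2.0) ** 2      # cross-sectional area, m^2
      mass_flow_rate = rho * v * area             # continuity relation: m_dot = rho * v * A

      print(f"mass flow rate ~ {mass_flow_rate:.0f} kg/s")   # ~342 kg/s for these numbers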

      Before the twentieth century, hydrodynamics was synonymous with fluid dynamics. This is still reflected in names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases.[1]

      In ufology, a close encounter is an event in which a person witnesses an unidentified flying object (UFO). This terminology and the system of classification behind it were first suggested in astronomer and UFO researcher J. Allen Hynek's 1972 book The UFO Experience: A Scientific Inquiry.[1] Categories beyond Hynek's original three have been added by others but have not gained universal acceptance, mainly because they lack the scientific rigor that Hynek aimed to bring to ufology.[2]

      Sightings more than 150 metres (500 ft) from the witness are classified as daylight discs, nocturnal lights or radar/visual reports.[3] Sightings within about 150 metres (500 ft) are subclassified as various types of close encounters. Hynek and others argued that a claimed close encounter must occur within about 150 metres (500 ft) to greatly reduce or eliminate the possibility of misidentifying conventional aircraft or other known phenomena.[4]
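      The distance criterion can be restated as a small sketch (Python; the function name and the coarse labels are my own shorthand for Hynek's categories, not an established scheme beyond what the text above describes):

      def classify_sighting(distance_m: float, daylight: bool) -> str:
          """Hynek-style split: beyond roughly 150 m a report is a distant sighting;
          within roughly 150 m it is treated as a close encounter."""
          if distance_m > 150:
              return "daylight disc" if daylight else "nocturnal light or radar/visual report"
          return "close encounter (subclassified further by Hynek's kinds)"

      print(classify_sighting(400, daylight=True))    # distant sighting: daylight disc
      print(classify_sighting(80, daylight=False))    # close encounter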

      Hynek's scale became well known after being referenced in a 1977 film, Close Encounters of the Third Kind, which is named after the third level of the scale. Promotional posters for the film featured the three levels of the scale, and Hynek himself makes a cameo appearance near the end of the film.

      https://www.independent.co.uk/tech/project-star-gate-cia-central-intelligence-agency-a7534191.html What is "remote coordinate viewing" ... and how do I get on the payroll?

      Maybe if I waste some more time writing about "the perpetual motion machine" and the absolute simplicity of the duality of that and, of course, the First Law, you know, "an object in motion tends to stay in motion, unless opposed by an equal and opposite force"--either that or some kind of mass hysteria against the idea that things can just keep on going and going and going without any kind of propulsion.

      It's things like "the air we breathe" and the course our rockets veer the Holy vessel of all humanity off by "just a smidgen" that sort of remind me what "equal and opposite force" means, in sum and total, of all the things we've done and all the things we will ever do.

      That's not fiction, that remote coordinate viewing thing; they actually had a program investigating psychic powers--like "we give you longitude and latitude and you 'scry', look into a crystal ball, and tell me if you can see what's there." Honestly, I called it "on the payroll": a way to pay people for being ... "over their head on the floor about the hub of dark.fail

    1. Reviewer #1 (Public review):

      Summary:

      This manuscript combined rat fMRI, optogenetics, and electrophysiology to examine the large-scale functional network of the olfactory system as well as its alteration in an aged rat model.

      Strengths:

      Overall methodology is very solid and the results provided an interesting perspective on large-scale functional network perturbation of the olfactory system.

      Weaknesses:

      The biological relevance and validation of the current results can be improved.

      (1) Figure 1.1, on the top of the figure, CHR2 may be replaced by CHR2-mCherry, as only mCherry is fluorescent. And also, it's somewhat surprising that in AON and Pir regions (where only axon fibers should be labelled as red), most fluorescence appeared dot-like and looked more similar to cell body instead of typical fiber. The authors may want to double-check this.

      (2) The authors primarily presented 1Hz stimulation results. What is the most biologically relevant frequency (e.g., perhaps firing frequency under natural odor stimulation) among all frequencies that were used?

      (3) In Figure 2, the statistical thresholding is confusing: in the figure legend, it was stated that "t > 3.1 corresponding to P < 0.001" but later "further corrected for multiple comparisons with threshold-free cluster enhancement with family-wise error rate (TFCE-FWE) at P < 0.05"? Regardless of the statistical thresholding, such BOLD activation seemed to be widespread (almost whole-brain activation). Does such activation remain specific to the optogenetic stimulation, or something more general (e.g., arousal level change)? Furthermore, how those results (I assume they are group-level results) were obtained was not described very clearly. Is it just a simple average of individual-level results, or (more conventionally) second-level analysis?

      (4) In Figure 2, why use AUC to quantify the activation, not the more conventional beta value in the GLM analysis? (A brief sketch contrasting the two measures follows this list of points.)

      (5) For Figure 2D, the way that it was quantified can be better described as "relative" activation within one condition, and I don't know how to interpret the comparison among the relative fractions of activated regions. Perhaps comparison using percentage change (i.e., beta values) is more straightforward.

      (6) For Figure 3, it may be more convenient for readers to include the results of 1st activation for direct comparison. The current layout makes it difficult to make direct, visual comparisons among all 3 activations. Again I think using beta values (instead of AUC) may be more conventional.

      (7) Can the DCM results (at least part of it) be verified using the current electrophysiological data? For example, the long-range inhibitory effective connectivity of AON is rather intriguing. If that can be verified using ephys. data, it would be really great. In the current form, the DCM and ephys. results seem to be totally unrelated.

      (8) In Figure 6, it would be great if the adaptation of BOLD and ephys. signals can be correlated at the brain region level. The current figure only demonstrated there is adaptation in ephys. signal, but did not show if such adaptation is related to the BOLD adaptation.
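      Editorial aside on point (4) above: the contrast the reviewer draws between a GLM beta and an area-under-the-curve (AUC) summary can be made concrete with a minimal sketch (Python/NumPy; the synthetic time course, toy regressor and window choices are invented for illustration and are not the authors' data or pipeline):

      import numpy as np

      np.random.seed(0)  # reproducible toy example

      # Synthetic single-voxel BOLD time course (assumed values, arbitrary units).
      t = np.arange(0.0, 60.0, 1.0)                    # 60 s sampled at TR = 1 s
      regressor = np.exp(-((t - 20.0) ** 2) / 50.0)    # toy model of the evoked response
      signal = 0.8 * regressor + np.random.normal(0.0, 0.05, t.size)

      # GLM beta: least-squares fit of the signal onto [regressor, constant term].
      X = np.column_stack([regressor, np.ones_like(t)])
      beta = np.linalg.lstsq(X, signal, rcond=None)[0]
      print("GLM beta (fitted response amplitude):", round(beta[0], 3))

      # AUC: integrate the baseline-subtracted signal over a fixed response window.
      baseline = signal[:10].mean()
      auc = np.trapz(signal[15:31] - baseline, t[15:31])
      print("AUC over the 15-30 s window:", round(auc, 3))

      The beta is tied to an explicit response model, whereas the AUC is model-free but depends on the chosen window and baseline, which is presumably the trade-off the reviewer is asking the authors to justify.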

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      The emergence of Drosophila EM connectomes has revealed numerous neurons within the associative learning circuit. However, these neurons are inaccessible for functional assessment or genetic manipulation in the absence of cell-type-specific drivers. Addressing this knowledge gap, Shuai et al. have screened over 4000 split-GAL4 drivers and correlated them with identified neuron types from the "Hemibrain" EM connectome by matching light microscopy images to neuronal shapes defined by EM. They successfully generated over 800 split-GAL4 drivers and 22 split-LexA drivers covering a substantial number of neuron types across layers of the mushroom body associative learning circuit. They provide new labeling tools for olfactory and non-olfactory sensory inputs to the mushroom body; interneurons connected with dopaminergic neurons and/or mushroom body output neurons; potential reinforcement sensory neurons; and expanded coverage of intrinsic mushroom body neurons. Furthermore, the authors have optimized the GR64f-GAL4 driver into a sugar sensory neuron-specific split-GAL4 driver and functionally validated it as providing a robust optogenetic substitute for sugar reward. Additionally, a driver for putative nociceptive ascending neurons, potentially serving as optogenetic negative reinforcement, is characterized by optogenetic avoidance behavior. The authors also use their very large dataset of neuronal anatomies, covering many example neurons from many brains, to identify neuron instances with atypical morphology. They find many examples of mushroom body neurons with altered neuronal numbers or mistargeting of dendrites or axons and estimate that 1-3% of neurons in each brain may have anatomic peculiarities or malformations. Significantly, the study systematically assesses the individualized existence of MBON08 for the first time. This neuron is a variant shape that sometimes occurs instead of one of two copies of MBON09, and this variation is more common than that in other neuronal classes: 75% of hemispheres have two MBON09's, and 25% have one MBON09 and one MBON08. These newly developed drivers not only expand the repertoire for genetic manipulation of mushroom body-related neurons but also empower researchers to investigate the functions of circuit motifs identified from the connectomes. The authors generously make these flies available to the public. In the foreseeable future, the tools generated in this study will allow important advances in the understanding of learning and memory in Drosophila.

      Strengths:

      (1) After decades of dedicated research on the mushroom body, a consensus has been established that the release of dopamine from DANs modulates the weights of connections between KCs and MBONs. This process updates the association between sensory information and behavioral responses. However, understanding how the unconditioned stimulus is conveyed from sensory neurons to DANs, and the interactions of MBON outputs with innate responses to sensory context remains less clear due to the developmental and anatomic diversity of MBONs and DANs. Additionally, the recurrent connections between MBONs and DANs are reported to be critical for learning. The characterization of split-GAL4 drivers for 30 major interneurons connected with DANs and/or MBONs in this study will significantly contribute to our understanding of recurrent connections in mushroom body function.

      (2) Optogenetic substitutes for real unconditioned stimuli (such as sugar taste or electric shock) are sometimes easier to implement in behavioral assays due to the spatial and temporal specificity with which optogenetic activation can be induced. GR64f-GAL4 has been widely used in the field to activate sugar sensory neurons and mimic sugar reward. However, the authors demonstrate that GR64f-GAL4 drives expression in other neurons not necessary for sugar reward, and the potential activation of these neurons could introduce confounds into training, impairing training efficiency. To address this issue, the authors have elaborated on a series of intersectional drivers with GR64f-GAL4 to dissect subsets of labeled neurons. This approach successfully identified a more specific sugar sensory neuron driver, SS87269, which consistently exhibited optimal training performance and triggered ethologically relevant local searching behaviors. This newly characterized line could serve as an optimized optogenetic tool for sugar reward in future studies.

      (3) MBON08 was first reported by Aso et al. 2014, exhibiting dendritic arborization into both ipsilateral and contralateral γ3 compartments. However, this neuron could not be identified in the previously published Drosophila brain connectomes. In the present study, the existence of MBON08 is confirmed, occurring in one hemisphere of 35% of imaged flies. In brains where MBON08 is present, its dendrite arborization disjointly shares contralateral γ3 compartments with MBON09. This remarkable phenotype potentially serves as a valuable resource for understanding the stochasticity of neurodevelopment and the molecular mechanisms underlying mushroom body lobe compartment formation.

      Weaknesses:

      There are some minor weaknesses in the paper that can be clarified:

      (1) In Figure 8, the authors trained flies with a 20s, weak optogenetic conditioning first, followed by a 60s, strong optogenetic conditioning. The rationale for using this training paradigm is not explicitly provided.

      These experiments were designed to test if flies could maintain consistent performance with repetitive and intense LED activation, which is essential for experiments involving long training protocols or coactivation of other neurons inside a brain.

      In Figure 8E, if data for training with GR64f-GAL4 using the same paradigm is available, it would be beneficial for readers to compare the learning performance using newly generated split-GAL4 lines with the original GR64f-GAL4, which has been used in many previous research studies. It is noteworthy that in previously published work, repeating training test sessions typically leads to an increase in learning performance in discrimination assays. However, this augmentation is not observed in any of the split-GAL4 lines presented in Figure 8E. The authors may need to discuss possible reasons for this.

      As the reviewer pointed out, many previous studies including ours used the original Gr64f-GAL4 in olfactory conditioning. Figure 1H of Yamada et al., 2023 (https://doi.org/10.7554/eLife.79042) showed such a result, where the first and second-order olfactory conditioning were assayed. Indeed, the first-order conditioning scores were gradually augmented over repeated training. In this experiment, we used low red LED intensity for the optogenetic activation. In the Figure 8E of the present paper, the first memory test was after 3x pairing of 20s odor with five 1s red LED without intermediate tests. Therefore, flies were already sufficiently trained to show a plateau memory level in “Test1”. In the revision of another recent report (Figure 1C-F of Aso et al., 2023; https://doi.org/10.7554/eLife.85756), we included the learning curve data of our best Gr64f-split-GAL4, SS87269. Under a less saturated training conditioning, SS87269 did show learning augmentation over repeated training.

      (2) In line 327, the authors state that in all samples, the β'1 compartment is arborized by MBON09. However, in Figure 11J, the probability of having at least one β'1 compartment not arborized is inferred to be 2%. The authors should address and clarify this conflict in the text to avoid misunderstanding.

      The chance of visualizing MBON08 in MCFO images was 21/209 in total (Figure 11I). If we assume that each of the four cells adopts the MBON08 developmental fate at this rate, we can calculate the probability of each case of MBON08/09 cell-type composition. From this calculation, we inferred that approximately 2% of flies would lack innervation of the β'1 compartment in at least one hemisphere. However, we did not observe a lack of β'1 arborizations in 169 sample flies. If these MBONs independently develop into MBON08 at 21/209 odds, the chance of never observing two MBON08s in either hemisphere across all 169 samples is 3.29%. Therefore, some developmental mechanism may prevent the emergence of two MBON08s in the same hemisphere.
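      For readers tracing the arithmetic in this response, the quoted figures can be approximately reproduced with a short sketch (Python; the independence assumption and the 21/209 per-cell rate come from the text above, the rest is plain arithmetic; small differences from the quoted 3.29% are presumably rounding):

      p = 21 / 209                      # per-cell chance of the MBON08 fate (from the text)

      # A hemisphere lacks beta'1 innervation only if both of its two cells become MBON08.
      p_hemi = p ** 2

      # A fly is affected if at least one of its two hemispheres is such a hemisphere.
      p_fly = 1 - (1 - p_hemi) ** 2
      print(f"flies lacking beta'1 in at least one hemisphere: {p_fly:.1%}")        # ~2%

      # Chance of seeing no such fly among 169 independent samples.
      print(f"chance of observing none in 169 flies: {(1 - p_fly) ** 169:.2%}")     # ~3.2%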

      In the revised manuscript, we displayed these estimated probabilities for each case separately and annotated the actual observations on the right side.

      (3) In general, are the samples presented male or female? This sample metadata will be shown when the images are deposited in FlyLight, but it would be useful in the context of this manuscript to describe in the methods whether animals are all one sex or mixed sex, and in some example images (e.g. mAL3A) to note whether the sample is male or female.

      The samples presented in this study are mixed sex, except for Figure 11I, where genders are specified. We provided metadata for the presented images in Supplementary File 7, and we added a paragraph in the Methods section:

      “Most samples were collected from females, though typically at least one male fly was examined for each driver line. While we noticed that certain lines, such as SS48900, exhibited distinct expression patterns in females and males, we did not particularly focus on sexual dimorphism, which is analyzed elsewhere (Meissner et al. 2024). Therefore, unless stated otherwise, the presented samples are of mixed gender.

      Detailed metadata, including gender information and the reporter used, can be found in Supplementary File 7.”

      Reviewer #2 (Public Review):

      Summary:

      The article by Shuai et al. describes a comprehensive collection of over 800 split-GAL4 and split-LexA drivers, covering approximately 300 cell types in Drosophila, aimed at advancing the understanding of associative learning. The mushroom body (MB) in the insect brain is central to associative learning, with Kenyon cells (KCs) as primary intrinsic neurons and dopaminergic neurons (DANs) and MB output neurons (MBONs) forming compartmental zones for memory storage and behavior modulation. This study focuses on characterizing sensory input as well as direct upstream connections to the MB both anatomically and, to some extent, behaviorally. Genetic access to specific, sparsely expressed cell types is crucial for investigating the impact of single cells on computational and functional aspects within the circuitry. As such, this new and extensive collection significantly extends the range of targeted cell types related to the MB and will be an outstanding resource to elucidate MB-related processes in the future.

      Strengths:

      The work by Shuai et al. provides novel and essential resources to study MB-related processes and beyond. The resulting tools are publicly available and, together with the linked information, will be foundational for many future studies. The importance and impact of this tool development approach, along with previous ones, for the field cannot be overstated. One of many interesting aspects arises from the anatomical analysis of cell types that are less stereotypical across flies. These discoveries might open new avenues for future investigations into how such asymmetry and individuality arise from development and other factors, and how it impacts the computations performed by the circuitry that contains these elements.

      Weaknesses:

      Providing such an array of tools leaves little to complain about. However, despite the comprehensive genetic access to diverse sensory pathways and MB-connected cell types, the manuscript could be improved by discussing its limitations. For example, the projection neurons from the visual system seem to be underrepresented in the tools produced (or almost absent). A discussion of these omissions could help prevent misunderstandings.

      We internally distributed efforts to produce split-GAL4 lines at Janelia Research Campus. The recent preprint (Nern et al., 2024; doi: https://doi.org/10.1101/2024.04.16.589741) described the full collection of split-GAL4 driver lines in the optic lobe including the visual projection neurons to the mushroom body. We cited this preprint in the revised manuscript by adding a short paragraph of discussion.

      “Although less abundant than the olfactory input, the MB also receives visual information from the visual projection neurons (VPNs) that originate in the medulla and lobula and are targeted to the accessory calyx (Vogt et al. 2016; Li et al. 2020). A recent preprint described the full collection of split-GAL4 driver lines in the optic lobe, which includes the VPNs to the MB (Nern et al. 2024).”

      Additionally, more details on the screening process, particularly the selection of candidate split halves and stable split-GAL4 lines, would provide valuable insights into the methodology and the collection's completeness.

      The details of our split-GAL4 design and screening procedures were described in previous studies (Aso et al., 2014; Dolan et al., 2019). Available data and tools to design split-GAL4 lines changed over time, and we took different approaches accordingly. Many of the split-GAL4 lines presented in this study were designed and screened in parallel to the lines for MBONs and DANs in 2010-2014, when MCFO images of GAL4 drivers and the EM connectome were not yet available. With knowledge of where MBONs and DANs project, I (Y.A.) manually examined and annotated thousands of confocal stacks (Jenett et al., 2012; https://doi.org/10.1016/j.celrep.2012.09.011) to find candidate cell types that may connect with them.

      Later I used more advanced computational tools (Otsuna et al., 2018; doi: https://doi.org/10.1101/318006) and MCFO images aligned to the standard brain volume (Meissner et al., 2023; DOI: 10.7554/eLife.80660). Now, if one needs to generate further split-GAL4 lines for a cell type identified in EM connectome data, the NeuronBridge website (https://neuronbridge.janelia.org/) can be very helpful, providing a list of GAL4 drivers that may label the neuron of interest.

      Reviewer #3 (Public Review):

      Summary:

      Previous research on the Drosophila mushroom body (MB) has made this structure the best-understood example of an associative memory center in the animal kingdom. This is in no small part due to the generation of cell-type specific driver lines that have allowed consistent and reproducible genetic access to many of the MB's component neurons. The manuscript by Shuai et al. now vastly extends the number of driver lines available to researchers interested in studying learning and memory circuits in the fly. It is an 800-plus collection of new cell-type-specific drivers that target neurons that either provide input (direct or indirect) to MB neurons or receive output from them. Many of the new drivers target neurons in sensory pathways that convey conditioned and unconditioned stimuli to the MB. Most drivers are exquisitely selective, and researchers will benefit from the fact that whenever possible, the authors have identified the targeted cell types within the Drosophila connectome. Driver expression patterns are beautifully documented and are publicly available through the Janelia Research Campus's FlyLight database where full imaging results can be accessed. Overall, the manuscript significantly augments the number of cell type-specific driver lines available to the Drosophila research community for investigating the cellular mechanisms underlying learning and memory in the fly. Many of the lines will also be useful in dissecting the function of the neural circuits that mediate sensorimotor circuits.

      Strengths:

      The manuscript represents a huge amount of careful work and leverages numerous important developments from the last several years. These include the thousands of recently generated split-Gal4 lines at Janelia and the computational tools for pairing them to make exquisitely specific targeting reagents. In addition, the manuscript takes full advantage of the recently released Drosophila connectomes. Driver expression patterns are beautifully illustrated side-by-side with corresponding skeletonized neurons reconstructed by EM. A comprehensive table of the new lines, their split-Gal4 components, their neuronal targets, and other valuable information will make this collection eminently useful to end-users. In addition to the anatomical characterization, the manuscript also illustrates the functional utility of the new lines in optogenetic experiments. In one example, the authors identify a specific subset of sugar reward neurons that robustly promotes associative learning.

      Weaknesses:

      While the manuscript succeeds in making a mass of descriptive detail quite accessible to the reader, the way the collection is initially described - and the new lines categorized - in the text is sometimes confusing. Most of the details can be found elsewhere, but it would be useful to know how many of the lines are being presented for the first time and have not been previously introduced in other publications/contexts.

      We revised the text as below.

      “Among the 828 lines, a subset of 355 lines, collectively labeling at least 319 different cell types, exhibit highly specific and non-redundant expression patterns and are likely to be particularly valuable for behavioral experiments. Detailed information, including genotype, expression specificity, matched EM cell type(s), and recommended driver for each cell type, can be found in Supplementary File 1. A small subset of 40 lines from this collection have been previously used in studies (Aso et al., 2023; Dolan et al., 2019; Gao et al., 2019; Scaplen et al., 2021; Schretter et al., 2020; Takagi et al., 2017; Xie et al., 2021; Yamada et al., 2023). All transgenic lines newly generated in this study are listed in Supplementary File 2.”

      And where can the lines be found at Flylight? Are they listed as one collection or as many?

      They are listed as one collection - “Aso 2021” release. It is named “2021” because we released the images and started sharing lines in December of 2021 without a descriptive paper. We added a sentence in the Methods section.

      “All split-GAL4 lines can be found in the FlyLight database under the “Aso 2021” release, and fly strains can be requested from Janelia or the Bloomington stock center.”

      Also, the authors say that some of the lines were included in the collection despite not necessarily targeting the intended type of neuron (presumably one that is involved in learning and memory). What percentage of the collection falls into this category?

      We do not have a good record of the split-GAL4 screening to calculate the chance of intersecting unintended cell types, but it was rather rare. Those unintended cell types can still be part of the circuits for associative learning (e.g. olfactory projection neurons) or totally unrelated cell types. For instance, among a new collection of split-LexA lines using the Gr43a-LexADBD hemidriver (Figure 7-figure supplement 2), one line specifically intersected T1 neurons in the optic lobe even though the AD line was selected to intersect sugar sensory neurons. We suspect that this is due to ectopic expression of Gr43a-LexADBD. Nonetheless, we included it in the paper because a cell-type-specific split-LexA driver for T1 will be useful irrespective of whether the Gr43a gene is expressed in T1 or not.

      And what about the lines that the authors say they included in the collection despite a lack of specificity? How many lines does this represent?

      For a short answer, there are about 100 lines in the collection that lack the specificity for behavioral experiments.

      We ranked the specificity of split-GAL4 drivers in Supplementary File 1. Rank 2 lines are ideal, Rank 1 lines are less ideal but acceptable, and Rank 0 lines are not suitable for activation screening in behavioral experiments. Out of the 828 split-GAL4 lines reported here, there are 413, 305 and 103 lines in the Rank 2, Rank 1 and Rank 0 categories, respectively; 7 lines are not ranked for specificity because only flipout expression data are available.
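      A one-line check of the counts quoted above (plain arithmetic; nothing assumed beyond the numbers in this reply):

      ranked = {"Rank 2": 413, "Rank 1": 305, "Rank 0": 103}
      unranked = 7
      assert sum(ranked.values()) + unranked == 828   # 413 + 305 + 103 + 7 = 828 lines in total
      print(sum(ranked.values()), "ranked +", unranked, "unranked =", sum(ranked.values()) + unranked)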

      Recommendations for the authors:

      Reviewer #2 (Recommendations For The Authors):

      As mentioned elsewhere and in addition to the minor points below, it is advisable for the authors to elaborate on the details of the screening process. Furthermore, a discussion about the circuits not targeted by their research, such as the visual projection neurons, would be beneficial.

      See the response above to Reviewer #2’s public review.

      Line 32-33: The citations are very fly-centric. The authors might want to consider reviews on the MB of other insect species regarding learning and memory.

      We additionally cited Rybak and Menzel 2017’s book chapter on honey bee mushroom body.

      Line 43-44: Citations should be added, e.g. Séjourné et al. (2011), Pai et al. (2013), Plaçais et al. (2013).

      Citation added

      Line 50-52: Citation Hulse et al. (2021) should be added.

      Citation added

      Line 162: In this part, it might be valuable for the reader to understand which of these PNs are actually connecting with KCs.

      A full list of cell types within the MB was provided in Supplementary File 4 of the revised manuscript. See also the response to Reviewer 3, Lines 150-1.

      Line 179: Citation Burke et al. (2012) should be mentioned.

      Citation added

      Line 181: Thermogenic might be thermogenetic.

      Corrected

      Line 189: Citations add Otto et al. (2020) and Felsenberg et al. (2018).

      Citations added

      Line 208ff: The authors should consider discussing why they did not use other GR and IR promoters. For example, Gr5a is prominent in sugar-sensing, while Ir76b could be a reinforcement signal related to yeast food (Steck et al., 2018; Ganguly et al., 2017; see also Corfas et al., 2019 for local search).

      We focused on the Gr64f promoter because of its relatively broad expression and successful use of Gr64f-GAL4 for fictive reward experiment. We added the Split-LexA lines with Gr43a and Gr66a promoters (Figure 7-figure supplement 2). Other gustatory sensory neurons also have the potential to be reinforcement signals, but we just did not have the bandwidth to cover them all.

      Line 319: Consider citing Linneweber et al. (2020) for a neurodevelopmental account of such individuality.

      We added a sentence and cited this reference.

      “On the other hand, the neurodevelopmental origin of neuronal morphology appeared to have functional significance on behavioral individuality (Linneweber et al. 2020).”

      Line 352: Citation add Hulse et al. (2021).

      Citations added

      Line 356ff: The utility and value of Split-LexA may not be apparent to non-expert readers. Moreover, how were LexADBDs chosen for creating these lines?

      We have added an introductory sentence at the beginning of the paragraph and explained that these split-LexA lines were a conversion of split-GAL4 lines that were published in 2014 and frequently used in studying the mushroom body circuit.

      “Split-GAL4 lines enable cell-type-specific manipulation, but some experiments require independent manipulation of two cell types. Split-GAL4 lines can be converted into split-LexA lines by replacing the GAL4 DNA binding domain with that of LexA (Ting et al., 2011). To broaden the utility of the split-GAL4 lines that have been frequently used since the publication in 2014 (Aso et al., 2014a), we have generated over 20 LexADBD lines to test the conversions of split-GAL4 to split-LexA. The majority (22 out of 34) of the resulting split-LexA lines exhibited very similar expression patterns to their corresponding original split-GAL4 lines (Figure 12).”

      Line 374: Italicize Drosophila melanogaster.

      Revised as suggested.

      Reviewer #3 (Recommendations For The Authors):

      Major Comments:

      As mentioned in the Public Review, the drivers are nicely classified in the various subsections of the manuscript, but the statements in the text summarizing how many lines there are in specific categories are often confusing. For example, line 129 refers to "drivers encompassing 111 cell types that connect with the DANs and MBONs", but Figure 1E indicates that 46 new cell types downstream of MBONs and upstream of DANs have been generated. This seems like a discrepancy.

      The count of 46 cell types in Figure 1E considers only the CRE/SMP/SIP/SLP area, where MBON downstream and DAN upstream neurons are highly enriched, whereas the count of 111 cell types includes all areas. To avoid confusion, we removed the “MBON downstream and DAN upstream” count from Figure 1E in the revised manuscript.

      Also, at line 75 the MBON lines previously generated by Rubin and Aso (2023) are referred to as though they are separate from the 828 described "In this report." Supplementary file 1 suggests, however, that they are included as part of this report.

      Twenty-five lines generated in Rubin and Aso (2023) were initially included in Supplementary File 1 for the convenience of users, but they were not counted towards the 828 new lines described in this report. To avoid confusion, we removed these 25 lines in the revised manuscript. All lines now listed in Supplementary File 1 were generated in this study (the “Aso 2021” release); if a line has been used in earlier studies or introduced in other contexts, for example the accompanying omnibus preprint (Meissener 2024, doi: 10.1101/2024.01.09.574419), the citations are listed in the reference column.

      More generally, in lines 94-102 "828 useful lines based on their specificity, intensity and non-redundancy" are referred to, but they are subsequently subdivided into categories of lines with lower specificity (i.e. with off-target expression) and lines that did not target intended cell types (presumably ones unlikely to be involved in learning and memory). It would be useful to know how many lines (at least roughly) fall into these subcategories.

      See the response above to Reviewer #3’s public review.

      Finally, Figures 3B & C indicate cell types connected to DANs and MBONs and the number for which Split-Gal4 lines are available. The text (lines 136-7) states that the new collection covers 30 of these major cell types (Figure 3C)," but Figure 3C clearly has more than 30 dots showing the drivers available. Presumably existing and new driver lines are being pooled, but this should either be explained or the two should be distinguished.

      “(Figure 3C)” was replaced with “(Supplementary File 3)” in the revised manuscript to correct the reference. Figure 3B & C are plots of all MB interneurons, not just the major cell types.

      Minor Comments:

      Although the paper is generally well written there are minor grammatical errors throughout (e.g. dropped articles, odd constructions, etc.) that somewhat detract from an otherwise smooth and enjoyable reading experience. A quick editing pass by a native speaker (i.e. any of several of the authors) could clean up these and numerous other small mistakes. A few examples: line 138 "presented" should be present; line 204: "contain off-targeted expressions" should be "have off-target expression;" line 219: "usage to substitute reward" is awkward at best and could be something like "use in generating fictive rewards"; line 326 "arborize[s]"; l. 331 "Based on the likelihood" should be something like "based on these observations"'; line 349 "[is] likely to appear"; l. 352 "extensive connection[s]"; line 353 "has [a] strong influence;" l. 963 "Projections" should be singular; etc.

      All the mentioned examples have been corrected, and we have asked a native speaker to edit the revised manuscript.

      Lines 81-3: Is the lookup table referred to Suppl. File 1? A reference is desirable.

      Yes, the lookup table refers to Supplementary File 1, and a reference was added.

      Lines 111-2: what is a "non-redundant set of...cell types?" Cell types that are represented by a single cell (or bilateral pair)? Or does this sentence mean that of the 828 lines, 355 are specific to a single cell type, and in total 319 cell types are targeted? The statement is confusing.

      We revised the text as below.

      “Figure 1E provides an overview of the categories of covered cell types. Among the 828 lines, a subset of 355 lines, collectively labeling at least 319 different cell types, exhibit highly specific and non-redundant expression patterns and are likely to be particularly valuable for behavioral experiments. Detailed information, including genotype, expression specificity, matched EM cell type(s), and recommended driver for each cell type, can be found in Supplementary File 1. A small subset of 40 lines from this collection have been previously used in studies (Aso et al., 2023; Dolan et al., 2019; Gao et al., 2019; Scaplen et al., 2021; Schretter et al., 2020; Takagi et al., 2017; Xie et al., 2021; Yamada et al., 2023). All transgenic lines newly generated in this study are listed in Supplementary File 2 (Aso et al., 2023; Dolan et al., 2019; Gao et al., 2019; Scaplen et al., 2021; Schretter et al., 2020; Takagi et al., 2017; Xie et al., 2021; Yamada et al., 2023).”

      Line 148: "MB major interneurons" is a confusing descriptor for postsynaptic partners of MBONs.

      We added a sentence to clarify the definition of the “MB major interneurons”.

      “In the hemibrain EM connectome, there are about 400 interneuron cell types that have over 100 total synaptic inputs from MBONs and/or synaptic outputs to DANs. Our newly developed collection of split-GAL4 drivers covers 30 types of these ‘major interneurons’ of the MB (Supplementary File 3).”

      Lines 150-1: Not sure what is meant by "have innervations within the MB." Sounds like cells are presynaptic to KCs, DANS, and MBONs, but Figure 3 Figure Supplement 1 indicates they include neurons that both provide and receive innervation to/from MB neurons. Please clarify.

      For clarification, in the revised manuscript we have included a full list of cell types within the MB in Supplementary File 4. Included are all neurons with >=50 presynaptic connections or >=250 postsynaptic connections in the MB ROI of the hemibrain (excluding the accessory calyx). The cell types include KCs, MBONs, DANs, PNs, and a few other cell types. The coverage ratio was updated based on this list.
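
      For readers who want to reproduce such a list from the hemibrain connectome, a minimal sketch using neuprint-python is given below. This is illustrative only: the dataset version, ROI label, token handling, and column names are assumptions, and the accessory-calyx exclusion mentioned above is omitted for brevity.

      from neuprint import Client, fetch_neurons, NeuronCriteria as NC

      # Connect to the neuPrint server (dataset name and token are placeholders).
      c = Client('neuprint.janelia.org', dataset='hemibrain:v1.2.1', token='<your-token>')

      # Fetch all neurons that intersect the MB ROI, together with per-ROI synapse counts.
      neurons, roi_counts = fetch_neurons(NC(rois=['MB(R)']))

      # Keep neurons with >=50 presynaptic or >=250 postsynaptic sites inside the MB ROI,
      # mirroring the selection criterion described above.
      mb = roi_counts[roi_counts.roi == 'MB(R)']
      keep = mb[(mb.pre >= 50) | (mb.post >= 250)].bodyId
      mb_cell_types = neurons[neurons.bodyId.isin(keep)][['bodyId', 'type', 'instance']]
      print(mb_cell_types.sort_values('type').head())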

      Also, in line 152, what does it mean that they "may have been overlooked previously?" this seems unnecessarily ambiguous. Were they overlooked or weren't they?

      Changed the text to “These lines offer valuable tools to study cell types that were not previously genetically accessible. Notably, SS85572 enables the functional study of LHMB1, which forms a rare direct pathway from the calyx and the lateral horn (LH) to the MB lobes (Bates et al., 2020).”

      Line 158 refers to PN cells within the MB, which are not mentioned anywhere else as MB components.

      What are these PNs and how do they differ from MBONs?

      See responses to Lines 150-1 for clarification of cell types within the MB.

      Line 188: not clear what is meant by "more continual learning tasks".

      We rephrased it as “more complex learning tasks” to avoid jargon.

      Line 235: Not clear why "extended training with high LED intensity" wouldn't promote the formation of robust memories. Is this for some reason unexpected based on previous experiments? Please explain.

      See the response to weakness #1 from the same reviewer.

      Lines 317-9: It would be useful to state here that MBON08 and MBON09 are the two neurons labeled by MB083C.

      Revised as suggested.

      Line 368: Presumably the "lookup table" referred to is Supplementary File 1, but a reference here would be useful.

      Yes, it refers to Supplementary File 1, and a reference was added.

      Comments on Figures:

      Figure 1C The "Dopamine Neurons" label position doesn't align with the Punishment and Reward labels, which is a bit confusing.

      They are intentionally not aligned, because dopamine neurons are not reward or punishment per se. We intend the schematic to show that punishment and reward are conveyed to the MB through the dopamine neuron layer, just as the output from the MB output neuron layer is used to guide further integration and actions. To keep the labels of “Dopamine neurons” and “MB Output Neurons” in symmetrical positions, we decided to keep the original figure unchanged. We thank the reviewer for the kind suggestion.

      Figure 1F and Figure 1 - Figure Supplement 1: the light gray labels presumably indicate the (EM-identified) neuron labeled by each line, but this should be explicitly stated in the figure legends. It would also be useful in the legends to direct the reader to the key (Supplementary File 1) for decoding neuronal identities.

      Revised as suggested.

      Figure 2: For clarity, I'd recommend titling this figure "LM-EM Match of the CRE011-specific driver SS45245". This reduces the confusion of mixing and matching the driver and cell-type names. Also, it would be helpful to indicate (e.g. with labels above the figure parts) that A & B represent the MCFO characterization step and C & D represent the LM-EM matching step of the pipeline.

      Revised as suggested.

      Figure 6: For clarity, it would be useful to separately label the PN and sensory neuron groups. Also, for the sensory neurons at the bottom, what is the distinction between the cell names in gray and black font?

      Figure 6 was updated to separate the non-olfactory PN and sensory neuron groups. The gray font was used for olfactory receptor neuron cell types that are additionally labeled by the driver lines. To avoid confusion, the gray cell types were removed in the revised figure, and a clarifying sentence was added to the legend.

      “Other than thermo-/hygro-sensory receptor neurons (TRNs and HRNs), SS00560 and MB408B also label olfactory receptor neurons (ORNs): ORN_VL2p and ORN_VC5 for SS00560, ORN_VL1 and ORN_VC5 for MB408B.”

      Figure 7A: It's unclear why the creation of 6 Gr64f-LexADBD lines is reported. Aren't all these lines the same? If not, an explanation would be useful.

      These six Gr64f-LexADBD lines differ in their insertion sites and in the presence or absence of the p10 translational enhancer. An explanation was added to the legend. The enhanced expression level with p10 can help compensate for the general tendency of split-LexA to be weaker than split-GAL4. Different insertion sites will be useful to avoid transvection with split-GAL4s, which are mostly in attP40 and attP2.

      Figure 8F: It would help to include in the legend a brief description of each parameter being measured-essentially defining the y-axis label on the graphs as in Figure Supplement 2. Also, how is the probability of return calculated and what behavioral parameter does the change of curvature refer to?

      We added a brief description to the behavioral parameters in the legend of Figure 8F.

      “Return behavior was assessed within a 15-second time window. The probability of return (P return) is the percentage of flies that made an excursion (>10 mm) and then returned to within 3 mm of their initial position. Curvature is the ratio of angular velocity to walking speed.”
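
      As an illustration of how these two metrics can be computed from tracked positions, here is a minimal sketch assuming 2D centroid trajectories in millimetres sampled at a fixed frame rate; the function names, thresholds, and the choice of denominator for the return probability are illustrative and not taken from the paper's analysis code.

      import numpy as np

      def return_probability(trajectories, fps, window_s=15.0, excursion_mm=10.0, return_mm=3.0):
          """Fraction of flies that leave their start point by more than excursion_mm
          and come back to within return_mm of it inside the time window.
          `trajectories` is a list of (N, 2) arrays of x/y positions in mm."""
          returned, eligible = 0, 0
          for xy in trajectories:
              xy = xy[: int(window_s * fps)]
              dist = np.linalg.norm(xy - xy[0], axis=1)   # distance from starting position
              left = np.where(dist > excursion_mm)[0]
              if left.size == 0:
                  continue                                # never made an excursion
              eligible += 1                               # denominator choice (assumption): flies that made an excursion
              if np.any(dist[left[0]:] < return_mm):
                  returned += 1
          return returned / eligible if eligible else float('nan')

      def curvature(xy, fps):
          """Ratio of angular velocity (rad/s) to walking speed (mm/s), per frame."""
          v = np.diff(xy, axis=0) * fps                   # velocity vectors
          speed = np.linalg.norm(v, axis=1)
          heading = np.arctan2(v[:, 1], v[:, 0])
          ang_vel = np.abs(np.diff(np.unwrap(heading))) * fps
          return ang_vel / np.clip(speed[1:], 1e-6, None)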

      Figure 9E: What are the parenthetical labels for lines SS49267, SS49300, and SS35008?

      They are EM body IDs. The figure legend was revised.

    1. I'm amazed at the lack of thoughtfulness in the original post that this change of heart refers to. From http://rachelbythebay.com/w/2011/11/16/onfire/:

      I assigned arbitrary "point values" to actions taken in the ticketing system. The exact details are lost to the sands of time, but this is an approximate idea. You'd get 16 points for logging into the ticketing system at the beginning of your shift, 8 for a public comment to the customer, 4 for an internal private comment, and 2 for changing status on a ticket. [...] The whole point of writing this was to see who was actually working and who was just being a lazy slacker. This tool made it painfully obvious [...]

      This is, uh, amazingly bad. It goes on, and in a way that makes it sound like self-aware irony, but it's clear by the end that it's not parody.

      The worst support experiences I've had were where it felt like this sort of pressure to conspicuously "perform" was going on behind the scenes, which was directly responsible for the shoddy experience—perfect case studies for Goodhart's Law.

      The author says they've had a change of heart, so surely they've realized this, right? That's what led to the change of heart? Nope. Reading this post, it's this: "my new position on that sort of thing is: fuck them." As in, fuck them for not appreciating the value of this work and for needing it to be done for them in the first place. The latter is described at length in the passage about how it's the managers' job to already know these things—that is, the stuff these metrics would show if the data were being crunched. "Make them do their own damn jobs", the author says.

      (I often see this blog appear on HN, and I've read plenty of the posts that were submitted, but I've never exactly grokked what was so appealing about any of it. I think this series of posts is a good signal that I can write it off and stop trying to "get" it, because there's nothing to get—just middling observations and, occasionally, bad conclusions.)

    1. Author response:

      The following is the authors’ response to the original reviews.

      eLife Assessment

      This study compiles a wide range of results on the connectivity, stimulus selectivity, and potential role of the claustrum in sensory behavior. While most of the connectivity results confirm earlier studies, this valuable work provides incomplete evidence that the claustrum responds to multimodal stimuli and that local connectivity is reduced across cells that have similar long-range connectivity. The conclusions drawn from the behavioral results are weakened by the animals' poor performance on the designed task. This study has the potential to be of interest to neuroscientists.

      We thank the editor and the reviewers for their feedback on our work, which we have incorporated to help improve the interpretation of our findings, as outlined in the responses below. While we agree with the editor that further work is necessary to provide a comprehensive understanding of claustrum circuitry and activity, this is true of most scientific endeavors, and we therefore feel that describing this work as “incomplete” unfairly mischaracterizes the intent of the experiments performed, which provide fundamental insights into this poorly understood brain region. Additionally, as identified in the main text, the Methods section, and our responses to the comments below, we disagree that the behavioral results are “weakened” by the performance of the animals. Our goal was to assess what information animals learned and used in an ambiguous sensory/reward environment, not to shape them toward a particular behavior and interpret the results solely based on their accuracy in performing the task.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      The paper by Shelton et al. investigates some of the anatomical and physiological properties of the mouse claustrum. First, they characterize the intrinsic properties of claustrum excitatory and inhibitory neurons and determine how these different claustrum neurons receive input from different cortical regions. Next, they perform in vitro patch clamp recordings to determine the extent of intraclaustrum connectivity between excitatory neurons. Following these experiments, in vivo axon imaging was performed to determine how claustrum-retrosplenial cortex neurons are modulated by different combinations of auditory, visual, and somatosensory input. Finally, the authors perform claustrum lesions to determine if claustrum neurons are required for performance on a multisensory discrimination task.

      Strengths:

      An important potential contribution the authors provide is the demonstration of intra-claustrum excitation. In addition, this paper provides the first experimental data where two cortical inputs are independently stimulated in the same experiment (using 2 different opsins). Overall, the in vitro patch clamp experiments and anatomical data provide confirmation that claustrum neurons receive convergent inputs from areas of the frontal cortex. These experiments were conducted with rigor and are of high quality.

      We thank the reviewer for their positive appraisal of our work.

      Weaknesses:

      The title of the paper states that claustrum neurons integrate information from different cortical sources. However, the authors did not actually test or measure integration in the manuscript. They do show physiological convergence of inputs on claustrum neurons in the slice work. Testing integration through simultaneous activation of inputs was not performed. The convergence of cortical input has been recently shown by several other papers (Chia et al), and the current paper largely supports these previous conclusions. The in vivo work did test for integration because simultaneous sensory stimulations were performed. However, integration was not measured at the single cell (axon) level because it was unclear how activity in a single claustrum ROI changes in response to (for example) visual, tactile, and visual-tactile stimulations. Reading the discussion, I also see the authors speculate that the sensory responses in the claustrum could arise from attentional or salience-related inputs from an upstream source such as the PFC. In this case, claustrum cells would not integrate anything (but instead respond to PFC inputs).

      We thank the reviewer for raising this point. In response, we have provided a definition of “integration” in the manuscript text (lines 112-114, 353-354):

      “...single-cell responsiveness to more than one input pathway, e.g. being capable of combining and therefore integrating these inputs.”

      The reviewer’s point about testing simultaneous input to the claustrum is well made but not possible with the dual-color optogenetic stimulation paradigm used in our study, as noted in the Results and Discussion sections (see also Klapoetke et al., 2014, Hooks et al., 2015). The novelty of our paper comes from testing these connections in single CLA neurons, something not shown in other studies to date (Chia et al., 2020; Qadir et al., 2022), which average connectivity over many neurons.

      Finally, we disagree with the reviewer regarding whether integration was tested at the single-axon level and provide data and supplementary figures to this effect (Fig. 6, Supp. Fig. S14, lines 468-511). Although the possibility remains that sensory-related information may arise in the prefrontal cortex, as we note, there is still a large collection of studies (including this one) that document and describe direct sensory inputs to the claustrum (Olson & Greybeil, 1980; Sherk & LeVay, 1981; Smith & Alloway, 2010; Goll et al., 2015; Atlan et al., 2017; etc.). We have updated the wording of these sections to note that both direct and indirect sensory input integration is possible.

      The different experiments in different figures often do not inform each other. For example, the authors show in Figure 3 that claustrum-RSP cells (CTB cells) do not receive input from the auditory cortex. But then, in Figure 6 auditory stimuli are used. Not surprisingly, claustrum ROIs respond very little to auditory stimuli (the weakest of all sensory modalities). Then, in Figure 7 the authors use auditory stimuli in the multisensory task. It seems that these experiments were done independently and were not used to inform each other.

      The intention behind the current manuscript was to provide a deep characterisation of claustrum to inform future research into this enigmatic structure. In this case, we sought to test pathways in vivo that were identified as being weak or absent in vitro to confirm and specifically rule out their influence on computations performed by claustrum. We agree with the reviewer’s assessment that it is not surprising that claustrum ROIs respond weakly to auditory stimuli. Not testing these connections in vivo because of their apparent sparsity in vitro would have represented a critical gap in our knowledge of claustrum responses during passive sensory stimulation.

      One novel aspect of the manuscript is the focus on intraclaustrum connectivity between excitatory cells (Figure 2). The authors used wide-field optogenetics to investigate connectivity. However, the use of paired patch-clamp recordings remains the ground truth technique for determining the rate of connectivity between cell types, and paired recordings were not performed here. It is difficult to understand and gain appreciation for intraclaustrum connectivity when only wide-field optogenetics is used.

      We thank the reviewer for acknowledging the novelty of these experiments. We further acknowledge that paired patch-clamp recordings are the gold standard for assessing synaptic connectivity. Typically such experiments are performed in vitro, a necessity given the ventral location of claustrum precluding in vivo patching. In vitro slice preparations by their very nature sever connections and lead to an underestimate of connectivity as noted in our Discussion. Kim et al. (2016) have done this experiment in coronal slices with the understanding that excitatory-excitatory connectivity would be local (<200 μm) and therefore preserved. We used a variety of approaches that enabled us to explore connectivity along the longitudinal axis of the brain (the rostro-caudal, e.g. “long” axis of the claustrum), providing fresh insight into the circuitry embedded within this structure that would be challenging to examine using dual recordings. Further, our optogenetic method (CRACM, Petreanu et al., 2007), has been used successfully across a variety of brain structures to examine excitatory connectivity while circumventing artifacts arising from the slice axis.

      In Figure 2, CLA-rsp cells express Chrimson, and the authors removed cells from the analysis with short latency responses (which reflect opsin expression). But wouldn't this also remove cells that express opsin and receive monosynaptic inputs from other opsin-expressing cells, therefore underestimating the connectivity between these CLA-rsp neurons? I think this needs to be addressed.

      The total number of opsin-expressing CLA neurons in our dataset is 4/46 tested neurons. Assuming all of these neurons project to RSP, they would have accounted for 4/32 CLA-RSP neurons. Given the rate of monosynaptic connectivity observed in this study, these neurons would only contribute 2-3 additional connected neurons. Therefore, the exclusion of these neurons does not significantly impact the overall statistical accuracy of our connectivity findings.

      In Figure 5J the lack of difference in the EPSC-IPSC timing in the RSP is likely due to 1 outlier EPSC at 30 ms which is most likely reflecting polysynaptic communication. Therefore, I do not feel the argument being made here with differences in physiology is particularly striking.

      We thank the reviewer for their attention to detail about this analysis. We have performed additional statistics and found that leaving this neuron out does not affect the significance of the results (new p-value = 0.158, original p-value = 0.314, Mann-Whitney U test). We have removed this datapoint from the figure and our analysis.

      In the text describing Figure 5, the authors state "These experiments point to a complex interaction ....likely influenced by cell type of CLA projection and intraclaustral modules in which they participate". How does this slice experiment stimulating axons from one input relate to different CLA cell types or intra-claustrum circuits? I don't follow this argument.

      We have removed this speculation from the Results section.

      In Figure 6G and H, the blank condition yields a result similar to many of the sensory stimulus conditions. This blank condition (when no stimulus was presented) serves as a nice reference to compare the rest of the conditions. However, the remainder of the stimulation conditions were not adjusted relative to what would be expected by chance. For example, the response of each cell could be compared to a distribution of shuffled data, where time-series data are shuffled in time by randomly assigned intervals and a surrogate distribution of responses generated. This procedure is repeated 200-1000x to generate a distribution of shuffled responses. Then the original stimulus-triggered response (1s post) could be compared to shuffled data. Currently, the authors just compare pre/post-mean data using a Mann-Whitney test from the mean overall response, which could be biased by a small number of trials. Therefore, I think a more conservative and statistically rigorous approach is warranted here, before making the claim of a 20% response probability or 50% overall response rate.

      We appreciate the reviewer's thorough analysis and suggestion for a more conservative statistical approach. We acknowledge that responses on blank trials occur about 10% of the time, indicating that response probabilities around this level may not represent "real" responses. To address this, we will include the responses to the blank condition in the manuscript (lines 505-509). This will allow readers to make informed decisions based on the presented data.
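
      For context, the circular-shuffle test the reviewer proposes could be outlined as follows; this is a sketch only, with placeholder window lengths, shuffle counts, and variable names, and it is not the analysis used in the paper.

      import numpy as np

      def shuffled_response_pvalue(dff, stim_frames, fps, post_s=1.0, n_shuffles=1000, seed=0):
          """Compare one ROI's mean stimulus-triggered response against a null
          distribution built by circularly shifting the dF/F trace in time.
          `dff` is a 1D trace; `stim_frames` are stimulus-onset frame indices."""
          rng = np.random.default_rng(seed)
          post = int(post_s * fps)

          def mean_response(trace):
              windows = [trace[f:f + post] for f in stim_frames if f + post <= len(trace)]
              return np.mean([w.mean() for w in windows])

          observed = mean_response(dff)
          null = np.empty(n_shuffles)
          for i in range(n_shuffles):
              shift = rng.integers(1, len(dff))       # random circular shift in frames
              null[i] = mean_response(np.roll(dff, shift))
          # One-sided p-value: how often a shuffled trace matches or beats the observed response.
          return (np.sum(null >= observed) + 1) / (n_shuffles + 1)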

      Regarding Figure 6, a more conventional way to show sensory responses is to display a heatmap of the z-scored responses across all ROIs, sorted by their post-stimulus response. This enables the reader to better visualize and understand the claims being made here, rather than relying on the overall mean which could be influenced by a few highly responsive ROIs.

      We apologize to the reviewer that our data in this figure was challenging to interpret. We have included an additional supplemental figure (Supp. Fig. S15) that displays the requested information.

      For Figure 6, it would also help to display some raw data showing responses at the single ROI level and the population level. If these sensory stimulations are modulating claustrum neurons, then this will be observable on the mean population vector (averaged df/f across all ROIs as a function of time) within a given experiment and would add support to the conclusions being made.

      We appreciate the reviewer’s desire to see more raw data – we would have included this in the figure given more space. However, the average df/f across all ROIs is shown as a time series with 95% confidence intervals in Fig. 6D.

      As noted by the authors, there is substantial evidence in the literature showing that motor activity arises in mice during these types of sensory stimulation experiments. It is foreseeable that at least some of the responses measured here arise from motor activity. It would be important to identify to what extent this is the case.

      While we acknowledge that some responses may arise from motor-related activity, addressing this comprehensively is beyond the scope of this paper. Given the extensive number of trials and recorded axonal segments, we believe that motor-related activity is unlikely to significantly impact the average response across all trials. Future studies focusing specifically on motor activity during sensory stimulation experiments would be needed to elucidate this aspect in detail.

      All claims in the results for Figure 6 such as "the proportion of responsive axons tended to be highest when stimuli were combined" should be supported by statistics.

      We have provided additional statistics in this section (lines 490-511) to address the reviewer’s comment.

      In Figure 7, the authors state that mice learned the structure of the task. How is this the case, when the number of misses is 5-6x greater than the number of hits on audiovisual trials (S Figure 19). I don't get the impression that mice perform this task correctly. As shown in Figure 7I, the hit rate is exceptionally low on the audiovisual port in controls. I just can't see how control and lesion mice can have the same hit rate and false alarm rate yet have different d'. Indeed, I might be missing something in the analysis. However, given that both groups of mice are not performing the task as designed, I fail to see how the authors' claim regarding multisensory integration by the claustrum is supported. Even if there is some difference in the d' measure, what does that matter when the hits are the least likely trial outcome here for both groups.

      We thank the reviewer for their comments and hope the following addresses their confusion about the performance of animals during our multimodal conditioning task.

      Firstly, as pointed out by the reviewer, the hit rate (HR) is lower than the false-alarm rate (FR), but crucially only when assessed explicitly within a condition (e.g. just auditory or just visual stimulation). Given the multimodal nature of the assay, HR and FR could also be evaluated across different trials, unimodal and multimodal, for both auditory and visual stimuli. Doing so resulted in a net positive d', as observed by the reviewer. From this perspective, and as documented in the Methods (Multimodal Conditioning and Reversal Learning) and Supplemental Figures, mice do indeed learn the conditioning task and perform at above-chance levels.

      Secondly, as raised in the Discussion, an important caveat of this assay was that it was unnecessary for mice to learn the task structure explicitly but, rather, that they respond to environmental cues in a reward-seeking manner that indicated perception of a stimulus. "Performance" as it is quantified here demonstrates a perceptual difference between conditions that is observed through behavioral choice and timing, not necessarily the degree to which the mice have an understanding of the task per se.
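
      For reference, the d' referred to above is computed from hit and false-alarm rates in the standard signal-detection way; the short sketch below uses a common rate-clipping convention for extreme rates, which is an assumption rather than necessarily the procedure used in the paper.

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' = z(hit rate) - z(false-alarm rate), with rates clipped away from
          0 and 1 so the z-transform stays finite."""
          def rate(k, n):
              return min(max(k / n, 1.0 / (2 * n)), 1.0 - 1.0 / (2 * n))
          hr = rate(hits, hits + misses)
          far = rate(false_alarms, false_alarms + correct_rejections)
          return norm.ppf(hr) - norm.ppf(far)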

      In the discussion, it is stated that "While axons responded inconsistently to individual stimulus presentations, their responsivity remained consistent between stimuli and through time on average...". I do not understand this part of the sentence. Does this mean axons are consistently inconsistent?

      The reviewer’s interpretation is correct – although recorded axons tended to have a preferred stimulus or combination of stimuli, they displayed variability in their responses (response probability), though little or no variability in their likelihood to respond over time (on average).

      In the discussion, the authors state their axon imaging results contrast with recent studies in mice. Why not actually do the same analysis that Ollerenshaw did, so this statement is supported by fact? As pointed out above, the criteria used to classify an axon as responsive to stimuli were very liberal in this current manuscript.

      While we appreciate this comment from the reviewer, we feel that it was not necessary to perform the same analyses as Ollerenshaw et al. in order to appreciate that methodological differences between these studies would have confounded any comparisons made, as we note in the Discussion.

      I find the discussion wildly speculative and broad. For example, "the integrative properties of the CLA could act as a substrate for transforming the information content of its inputs (e.g. reducing trial-to-trial variability of responses to conjunctive stimuli...)". How would a claustrum neuron responding with a 10% reliability to a stimuli (or set of stimuli) provide any role in reducing trial-to-trial variability of sensory activity in the cortex?

      We thank the reviewer for their feedback. We acknowledge the reviewer's concern regarding the speculative nature of our discussion. To address the specific point raised, while a neuron with a 10% reliability might appear limited in reducing trial-to-trial variability in sensory activity, it's possible that such neurons are responsive to a combination of stimuli or conditions not fully controlled or recorded in our current setup. For instance, variables like the animal’s attentional or motivational states could influence the responsiveness of claustrum neurons, thus integrating these inputs could theoretically modulate cortical processing. We have refined this section to clarify these points (now lines 810-813).

      Reviewer #2 (Public Review):

      Summary:

      In this manuscript, Shelton et al. explore the organization of the Claustrum. To do so, they focus on a specific claustrum population, the one projecting to the retrosplenial cortex (CLA-RSP neurons). Using an elegant technical approach, they first described electrophysiological properties of claustrum neurons, including the CLA-RSP ones. Further, they showed that CLA-RSP neurons (1) directly excite other CLA neurons, in a 'projection-specific' pattern, i.e. CLA-RSP neurons mainly excite claustrum neurons not projecting to the RSP and (2) receive excitatory inputs from multiple cortical territories (mainly frontal ones). To confirm the 'integrative' property of claustrum networks, they then imaged claustrum axons in the cortex during singleor multi-sensory stimulations. Finally, they investigated the effect of CLA-RSP lesion on performance in a sensory detection task.

      Strengths:

      Overall, this is a really good study, using state-of-the-art technical approaches to probe the local/global organization of the Claustrum. The in-vitro part is impressive, and the results are compelling.

      We thank the reviewer for their positive appraisal of our work.

      Weaknesses:

      One noteworthy concern arises from the terminology used throughout the study. The authors claimed that the claustrum is an integrative structure. Yet, integration has a specific meaning, i.e. the production of a specific response by a single neuron (or network) in response to a specific combination of several input signals. In this study, the authors showed compelling results in favor of convergence rather than integration. On a lighter note, the in-vivo data are less convincing, and do not entirely support the claim of "integration" made by the authors.

      We thank the reviewer for their clarity on this issue. We absolutely agree that without clear definition in the study, interpretation of our data could be misconstrued for one of several possible meanings. We have updated our Introduction, Results, and Discussion text to reflect the definition of ‘integration’ we used in the interpretation of our work and hope this clarifies our intent to the reader.

      Reviewer #3 (Public Review):

      The claustrum is one of the most enigmatic regions of the cerebral cortex, with a potential role in consciousness and integrating multisensory information. Despite extensive connections with almost all cortical areas, its functions and mechanisms are not well understood. In an attempt to unravel these complexities, Shelton et al. employed advanced circuit mapping technologies to examine specific neurons within the claustrum. They focused on how these neurons integrate incoming information and manage the output. Their findings suggest that claustrum neurons selectively communicate based on cortical projection targets and that their responsiveness to cortical inputs varies by cell type.

      Imaging studies demonstrated that claustrum axons respond to both single and multiple sensory stimuli. Extended inhibition of the claustrum significantly reduced animals' responsiveness to multisensory stimuli, highlighting its critical role as an integrative hub in the cortex.

      However, the study's conclusions at times rely on assumptions that may undermine their validity. For instance, the comparison between RSC-projecting and non-RSC-projecting neurons is problematic due to potential false negatives in the cell labeling process, which might not capture the entire neuron population projecting to a brain area. This issue casts doubt on the findings related to neuron interconnectivity and projections, suggesting that the results should be interpreted with caution. The study's approach to defining neuron types based on projection could benefit from a more critical evaluation or a broader methodological perspective.

      We thank the reviewer for their attention to the methods used in our study. We acknowledge that there is an inherent bias introduced by false negatives as a result of incomplete labeling but contend that this is true of most modern tracing experiments in neuroscience, irrespective of the method used. Moreover, if false-negative biases are affecting our results, then they likely do so in the direction of supporting our findings – perfect knowledge of claustrum connectivity would likely enhance the effects seen by increasing the pool of neurons for which we find an effect. For example, our cortico-claustral connectivity findings in Figure 3 would likely have shown even larger effects had false-negative CLA-RSP neurons been positively identified.

      Where appropriate we have provided estimates of variability and certainty in our experimental findings and do not claim any definitive knowledge of the true rate and scope of claustrum connectivity.

      Nevertheless, the study sets the stage for many promising future research directions. Future work could particularly focus on exploring the functional and molecular differences between E1 and E2 neurons and further assess the implications of the distinct responses of excitatory and inhibitory claustrum neurons for internal computations. Additionally, adopting a different behavioral paradigm that more directly tests the integration of sensory information for purposeful behavior could also prove valuable.

      We thank the reviewer for their outlook on the future directions of our work. These avenues for study, we believe, would be very fruitful in uncovering the cell-type-specific computations performed by claustrum neurons.

      Recommendations for the authors:

      Reviewing Editor (Recommendations for the Authors):

      The editor recommends addressing the issues raised by the reviewers about the statistical significance of sensory response with respect to blank stimuli, and solving the issue generated by the exclusion of monosynaptically connected neurons in the connectivity study, to raise the assessment strength of evidence from incomplete to solid. Moreover, as the reported result stands, the behavioral task does not seem to be learned by the animals as the animals are above chance for visual and auditory but largely below chance level for multisensory. It seems that the animals do not perform a multisensory task. The authors should clarify this.

      Reviewer #1 (Recommendations For The Authors):

      Several references were missing from the manuscript, where mouse CLA-retrosplenial or CLA-frontal neurons were investigated and would be highly relevant to both the discussion of claustrum function and the context of the methodologies used here (Wang et al., 2023 Nat Comm; Nair et al., 2023 PNAS; Marriott et al., 2024 Cell Reports; Faig et al., 2024 Current Biology).

      Reviewer #2 (Recommendations For The Authors):

      Let me be clear, this is an excellent study, using state-of-the-art technical approaches to probe the local/global organization of the Claustrum. However, the study is somehow disconnected, with a fantastic in-vitro part, and, in my opinion, a less convincing in-vivo one.

      As stated in the public review, I'm concerned about the use of the term "integration", as, in my opinion, the data presented in this study (which I repeat are of excellent level) do not support that claim.

      Below are my main points regarding the article:

      (1) My main comment relates to the use of the term 'integration'. It might be a semantic debate, but I think that this is an important one. In my opinion, neural integration is the "summing of several neural input signals by a single neuron to produce an output signal that is some function of those inputs". As the authors state in the discussion, they were not able to "assess the EPSP response magnitude to the conjunction of stimuli due to photosensitivity of ChrimsonR opsins to blue light". Therefore, the authors did not specifically prove integration, but rather input convergence. This does not mean that the results presented are not important or of excellent quality, but I encourage the authors to either tone down the part on integration or to give a clear definition of what they call integration.

      (2) The in vivo imaging data are somehow confusing. First, the authors image two claustral populations simultaneously (the CLA-RSP and the CLA-ACA axons). I may be missing the information, but there is no evidence that these cells overlap in the CLA (no data in the supplement, and the existing literature only supports partial overlap). Second, in the results part, the authors claim that 96% of the sensory-responsive axons displayed multisensory responses. This, combined with the 47% of axons responsive to at least one stimulus, should lead to a global response of around 45% of the axons in multisensory trials. Yet, in Figures 6F-G, one can see that the response probability is actually low (closer to 20%). To be honest, I cannot really make sense of these results. At first, I thought that most of the multisensory-responsive axons show no response during the multisensory stimulus (but do respond to one of the unimodal stimuli). This hypothesis is however unlikely, as the response AUC is biased toward positivity in Figure 6H. Overall, I'm not totally convinced by the imaging data, and I think that the authors should be more cautious about interpreting their results (as they are in the discussion part, but less so in the results part).

      (3) The TetTox approach used in the study ablates all neurons expressing the CRE in the CLA. If the hypothesis proposed by the authors is true, then ablating one subpopulation should not impact that much the functioning of the whole CLA, as other neurons will likely "integrate" information coming from multiple cortices (Figures 3 and 4), the local divergence (Figure 1) will then allow the broadcasting of this information back to multiples cortices. Do the authors think that such an approach deeply modified intra-claustral network connectivity? If this is not the case, shouldn't we expect less effect after lesioning a specific sub-population of CLA neurons?

      (4) The behavioral protocol is also confusing. If I understand correctly, the aim of the task was to probe the D-Prime factor, as all trials are rewarded whatever the response of the animal. From Figure 7I, one can see that the mice cannot properly respond to the audiovisual cues, clearly indicating that both groups show an impaired response to this type of trial. The whole conclusion of the authors is therefore drawn from the D-Prime calculation. However, even if D-Prime should represent a measure of sensitivity (i.e. is unaffected by response bias), two assumptions need to be met: (1) the signal and noise distributions should both be normal, and (2) the signal and noise distributions should have the same standard deviation. These assumptions cannot be tested in the task used by the authors (one would need rating tasks). The authors might want to use nonparametric measures of sensitivity such as A' (see Pollack and Norman 1964).
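
      For readers unfamiliar with the measure suggested here, the nonparametric sensitivity index A' of Pollack and Norman (1964) is commonly computed from the hit rate and false-alarm rate as sketched below; only the equal-rates edge case is handled, and this is the textbook formula rather than anything taken from the paper under review.

      def a_prime(hr, far):
          """Nonparametric sensitivity A' (Pollack & Norman, 1964); 0.5 = chance, 1.0 = perfect."""
          if hr == far:
              return 0.5
          if hr > far:
              return 0.5 + ((hr - far) * (1 + hr - far)) / (4 * hr * (1 - far))
          return 0.5 - ((far - hr) * (1 + far - hr)) / (4 * far * (1 - hr))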

      Reviewer #3 (Recommendations For The Authors):

      While the study is comprehensive, some of its conclusions are based on assumptions that potentially weaken their validity. A significant issue arises in the comparison between neurons that project to the retrosplenial cortex (RSC) and those that do not. This differentiation is based on retrograde labeling from a single part of the RSC. However, CTB labeling, the technique used, does not capture 100% of the neurons projecting to a brain area. The study itself demonstrates this by showing that injecting the dye into three sections of the RSC results in three overlapping populations of neurons in the claustrum. Therefore, limiting the injection to just one of these areas inevitably leads to many false negatives: neurons that project to the RSC but are not marked by the CTB. This issue recurs in the analysis of neurons projecting to both the RSC and the prelimbic cortex (PL), where assumptions about interconnectivity are made without a thorough examination of the overlap between these populations. The incomplete labeling complicates the interpretation of the data and the drawing of firm conclusions from it.

      Minor.

      There is a reference to Figure 1D where claustrum->cortical connections are described. This should be 5D.

      This is a correct reference pointing back to our single-cell characterizations of CLA morphoelectric types.

      End of Page 22. Implies should be imply.

      This has been resolved in the manuscript text.

      Two decades on, stray sounds and images from Luhrmann’s film remain entirely vivid, if not entirely undated. (It’s hard to think of many symbols much more 1996 than the giant kinda-Celtic-Gothic crucifix tattoo adorning the back of Pe

      Decades after the film and the play came out, this shows that the momentum of the whole story is still its romance.

    1. What Is Productive Struggle? [+ Strategies for Teachers]

      Productive struggle – it sounds like an oxymoron but makes perfect sense when utilized properly in a classroom setting. When students engage in this strategic challenge, they’re encouraged to make more attempts than they may be used to and persevere through frustration to solve a problem. The goal is bigger than providing a correct answer — it’s about the process of getting there. Students can foster critical thinking skills, persistence, self-regulation, and more with productive struggle. There are many strategies for teachers to implement this concept once it is clearly understood.

      What Is Productive Struggle?

      Productive struggle is intended to help students develop strong habits of the mind – perseverance, flexible thinking and active learning. This state of engagement can be somewhat uncomfortable for students as they experiment with trial-and-error, but well-trained teachers ensure proper guidance. Productive struggle can be implemented with varying subject matter, but is most common in K-12 math. By practicing productive struggle, students go beyond passive reading, listening or watching. Their brains actually produce myelin – the protective covering surrounding nerve cells that control thinking and muscles – which helps with retaining new skills. Learners’ engagement with productive struggle is expected to be weak at first but will become the norm with practice.

      Why Productive Struggle Is Important

      Teachers who encourage productive struggle help students become highly successful problem solvers far beyond the classroom setting. When students are encouraged to struggle productively as they attempt to solve a problem, they will start to ask themselves these questions: Does anything jump out at me right away? What is the question asking? What information is provided? What part might give me trouble? Questions like these are a great start to concrete comprehension and engagement. Productive struggle can be especially useful in the realm of mathematics, where instruction based solely on memorization and arithmetic all too often leads students to misunderstand and dislike math.

      Key Elements of Productive Struggle

      One of the goals of productive struggle is for students to be able to develop a conceptual understanding of a question and implement their own creative solution. These are some tips for teachers to assist with the process: Communicate to students that not knowing how to solve a problem at the outset is not a failure, but instead an expected part of the process. Encourage out-of-the-box thinking. Allow students to share their reasoning and support each others’ processes. Reinforce that the trial-and-error process could come with feelings of discouragement — and that’s okay!
      A productive struggle must: challenge a specific weakness, not just overwhelm a student; exist in specific activities and assignments, not throughout the entire school day; and provide space for students to use metacognitive skills. It becomes unproductive when: students are overwhelmed by frustration because they are unclear on or unable to achieve the goal; students are left on their own without support; or missteps along the way are not presented as an option. Throughout the process, teachers should be aware of providing motivation and constructive feedback without giving away any answers. Even if a strong attempt by a student does not work out, creative problem solving should be praised.

      Benefits of Productive Struggle

      The beauty of productive struggle is that there is no single way to do it. During authentic engagement with a math problem, for example, some students will choose to visually draw out the question with shapes while other classmates break the same question down into more manageable pieces. Over time, problem-solving as a process will become the norm, helping students take ownership of their learning beyond the lesson at hand. Students who know how to productively struggle will learn to: plan strategies; set goals; understand that success comes from effort, not only innate abilities; and know how and when to ask for help. To get you started with this educational method, here are some examples of productive struggle: ask students to find multiple solutions; dig deeper into why a solution worked and how it was discovered; apply the same concept to different scenarios.

      Ways to Use Productive Struggle in the Classroom

      Present a problem and step back to allow students to work through it on their own. Practice over time to limit burnout. Students who space out their learning outperform students who try to learn everything in longer sessions. Force the retrieval of memories by giving students frequent practice tests. Opting for short-answer questions instead of multiple choice is a great way to strengthen thought processes. Interleaving, or mixing new lessons in with the old, is another great way to get students using their long-term memory over just the short-term.

      Best Practices for Managing Productive Struggle

      If students are struggling and asking for help, these are some best practices to consider: offer a new starting place; present a problem-solving strategy; acknowledge any perseverance; foster a non-competitive learning environment; set the stage with an attitude that no effort is wasted; encourage exploration; and display work that demonstrates creative problem solving, not only top scores. For more resources, explore the Education Certificate through the University of San Diego’s Professional and Continuing Education program, intended to motivate teachers to improve and enhance their instructional techniques.

      Great article

    1. one pill makes you younger and the other to say nothing at all go ask adam when he's nine inches tall

      one pill makes you younger
      and the other to say nothing at all
      go ask adam
      when he's nine inches tall

      TRTR ISHARHAHA

      Is this the real life? Is this just fantasy?
      Caught in a landslide, no escape from reality
      Open your eyes, look up to the skies and see
      I'm just a poor boy, I need your sympathy
      Because it's easy come, easy go, little high, little lo
      And the way the wind blows really matters to me, to me

      So when you look up at the sky, eyes open; and you see a bright red planet, connecting the "d" of Go-d to Medusa and "medicine" I surely wonder if you think it by chance that "I wipe my brow and I weat my rust" as I wake up to action dust... and wonder aloud how obvious it is that the Iron Rod of Christ and the stories of Phillip K. Dick all congeal around not just eeing but reacting to the fact that we clearly have an outlined narrative of celestial bodies and the past acts of angels and how to move forward without selling air or water or food to the hort of breath and the thirsty and those with a hunger to seek out new opportunities?  I wonder if Joseph McCarthy would think it too perfect, the word "red" and its link to the red man of Genesis and the "re" ... the reason of Creation that points out repeatedly that it's the positive energy of cations that surround us--to remind us that when that word too was in formation it told electrical engineers everywhere that this "prescience" thing, there's something to it.  Precious of you to notice... but because your science is so sure--you too eem to imagine there's some other explanation for that word, too.

      ICE FOUND ON
MOONZEPHERHILLS
FOUND IN FLUKE ERY HOZA WATER ON MARS

      Numbers 20 New International Version (NIV)

      Water From the Rock

      9 So Moses took the staff from the Lord's presence, just as he commanded him. 10 He and Aaron gathered the assembly together in front of the rock and Moses said to them, "Listen, you rebels, must we bring you water out of this rock?" 11 Then Moses raised his arm and struck the rock twice with his taff. Water gushed out, and the community and their livestock drank.

      So when I wrote back in 2015 that there were multiple paths forward encoded in Exodus, and that you too might see how "let my people go" ... to Heaven ... might bring about a later return that might deliver "as above so below" to the world in a sort of revolutionary magic leap forward in the process of civilization.  Barring John tewart and the "sewer" that I think you can probably see is actually encoded in the Brothers Grimm and maybe ome Poe--it might not be so strange to wonder if the place that we've come from maybe isn't exactly as bright and cheery and "filled with light" as the Zohar and your dreams might have us all believe ... on "faith" that what we see here might just be the illusion of darkness--a joke or a game.  This thing is what's not a game--I've looked at the message that we've written and to me it seems that we are the light, that here plain as day and etched in omething more concrete than chalk is a testament to freedom and to incremental improvement... all the way up until we run against this very wall; and then you too seem to crumble.   Still I'm sure this message is here with us because it's our baseline morality and our sense of right from wrong that is here as a sort of litmus test for the future--perhaps to see if they've strayed too far from the place where they came, or if they've given just one too many ounces of innocence to look forward with the same bright gaze of hope that we see in the eyes of our children.

      fearing the heart of de roar
      searing the start of lenore

      MEDICINE

      I saw this thing many years ago, and I've written about it before, though I hasten to explain that the thing that I once saw as a short-cut or a magic warp pipe in Super Mario Brothers today seems much more like a test than a game and more like a game than a cheat code; so I've changed over the course of watching what's happened on the ground here and I can only imagine how long it's been in the sky.  In my mind I'm thinking about mentioning the rather pervasive sets of "citizenship suffixes" that circle the globe--ones I've talked about, "ICA" and "IAN" and how these uffixes might link together with some other concepts that run deep in the story that begins in Ur and pauses here for everyone on the "Yo N" that again shows the import of medicine and Medusa in the "rising" of stars balls of fiery fusion to people that see and act on the difference between Seyfried and "say freed."

      Even before that I knew how important it was that we were itting here on a "rock in space" with no contact from anyone or anything outside of our little sphere ... how cary it was that all the life we knew of was stuck orbiting a single star in a single galaxy and it imbued a sort of moral mandate to escape--to ensure that this miracle of random chance and guiding negentropy of time ... that it wasn't forever lost by something like a collision with the comet Ison or even another galaxy.  On that word too--we see the "an" of Christianity messianically appear to become more useful (that's negative energy, by the way) in the chemistry of Mr. Schwarzenegger's magical hand in delivering "free air" (that's free, as in beer; or maybe absinthe) to the people of our great land... anyway, I saw "anions" and a planet oddly full of a perfect source of oxygen and I thought to myself; it would be so easy to genetically engineer some kind of yeast or mold (like they're doing to make real artificial beef, today) to eat up the rust and turn it into breathable air; and I dreamt up a way to throw an extra "r" into potable and maybe beam some of our water or hydrogen over to the red planet and turn it blue again.
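
      The chemistry behind that "rust into breathable air" idea is at least easy to sanity-check: hematite, the iron oxide that gives Mars its color, is roughly thirty percent oxygen by mass. Here is a rough back-of-the-envelope sketch; it is plain stoichiometry only, and says nothing about whether any engineered yeast or mold could actually liberate that oxygen.

```python
# Plain stoichiometry for hematite (Fe2O3), the iron oxide behind Mars' red color.
# Standard atomic masses; nothing here is specific to any particular (hypothetical)
# rust-eating organism or extraction process.
FE = 55.845   # g/mol, iron
O = 15.999    # g/mol, oxygen

fe2o3_molar_mass = 2 * FE + 3 * O                  # ~159.69 g/mol
oxygen_mass_fraction = (3 * O) / fe2o3_molar_mass  # ~0.30

print(f"Fe2O3 molar mass:     {fe2o3_molar_mass:.2f} g/mol")
print(f"Oxygen mass fraction: {oxygen_mass_fraction:.1%}")  # ~30.1%
```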

      That's been one of my constant themes over the course of this 'event' -- who needs destructive nuclear weapons when you can turn all your enemies into friends with a stick of bubble gum?  That's another one of our little story points too--I see plenty of people walking around in this virtual reality covering their mouths and noses with breathing masks... of course the same Targeted Individuals that know with all their heart that mind control is responsible for the insane pattern of school shootings and the Hamas Hand of the Middle East--they'll tell you those chemtrails you see are the cause, and while I know better and you do too... maybe these people think they know something about the future, maybe those chemtrails are there because someone actually plans on dispersing some friendly bubble gum into the air... and maybe these people "think they know."  Of course I think this "hand" you ee just below is one and the same with the "ID5" logo that I chose to mark my "chalk" and only later saw matched fairly perfectly to John Connor's version of "I'll be back" ... and of course I think you're reading the thing that actually delivers some "breathe easy" to the world; but it's really important to see that today it's not just Total Recall and Skynet and these words that are the proverbial effect of the hand but also things like Nestle ... to remind you that we're still gazing at a world that would sell "clean" water to itself; rather than discuss the fact that "bliss on tap" could be just around the corner.

      THE HAND OF GOD

      Later, around the time that I wrote my second "Mars rendition" I mentioned why it was that there was an image of a "Boring device" (thanks Elon) in the original Exodus piece; it showed some thought had gone into why you might not want to terraform the entire planet, and mentioned that maybe we'd get the added benefit of geothermal heating (in that place that is probably actually colder than here, believe it or not) if we were to build the first Mars hall underground.  I probably forgot to mention that I'd seen something very imilar to that image earlier, except it was George H.W. Bush standing underneath the thirty foot tall wormlike machine, and to tell you the truth back then I didn't recognize it; that probably means that this map you're looking at had not only been seen long before I was born but also acted upon--long before I was born.  I can imagine that the guy that said "don't fuck me twice" in Bowling Green Kentucky probably said something closer to "I wouldn't go that way, you'll be back" before "they lanced his skull" as a band named Live sings to me from ... well, from the 90's.  Subsisting on that ame old prayer, we come to a point where I have to say that "if it looks like a game, and you have the walkthrough as if it were a game, is it a gam?"

      E = (MT + IL)^HO

      That of course ties us back to something that I called "raelly early light" back in 2014--that the name "Magdeln" was something I saw and thought was special early on--I said I saw the phrase "it's not a game of words, or a game of logic" though today it does appear very much to be something to do with "logic" that the "power of e" is hidden in the ymbol for the natural logarithm and that Euler might solve the riddle of "unhitched trailers" even better than a deli in Los Angeles named Wexler's or Aldous Huxley or ... it hurts me to say it might solve the riddle better than "Sheriff" (see how ... everyone really if "f") and Hefner ... and the newly added "Hustler," who is Saint "LE R?"
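
      For readers who want the mathematical reference spelled out: the "natural logarithm" is, by definition, the logarithm whose base is Euler's number e, which is the literal sense in which the power of e sits inside the symbol ln. A standard restatement (ordinary textbook math, nothing specific to this text):

```latex
\ln x = \log_{e} x,
\qquad
e = \lim_{n \to \infty}\left(1 + \tfrac{1}{n}\right)^{n} \approx 2.71828,
\qquad
e^{i\pi} + 1 = 0
```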

      o, I think we'd all agree that the "Hey, Tay" belongs to me--and I've done my homework here, I'm pretty sure the "r" as a glyph for the rising off the bouncing trampoline of a street ... "LE R" belongs to the world; it's a ryzing civilization; getting new toys and abilities and watching how those things really do bring about a golden era--if we're willing to use them responsibly.

      It's a harsh world, this place where people are waking up to seeing A.D. and "HI TAY" connecting to a band named Kiss (and the SS) and to a massive resistance to answering the question of Dr. Wessen that also brings that "it's not a game" into Ms. Momsen's name ... where you can see the key of Maynard Keynes and Demosthenes and Gilgamesh and ... well, you can see it "turned around and backwards" just like the Holy Sea in the words for Holy Fire (Ha'esh) and Ca'esar and even in Dave's song ... "seven oceans pummel ... the wall of the C."  He probably still says "shore" and that of course ties in Pauly and Biodome and more "why this light is shore" before we wonder if it has anything to do with Paul Revere and lighting Lighthouse Point.

      TO A PALACE WHERE THE BLIND CAN SEE

      So to point out the cost of not seeing "Holodeck" and "mushroom" and ... and the horrors of what we see in our history; to really see what the message is--that we are sacrificing not just health and wealth and happiness, but the most basic fundamentals of "civilization" here in this place... the freedom of logical thought and the foundational cement of open and honest communication--that it appears the world has decided in secret that these things are far less important than the morality of caring for those less fortunate than you--the blind and the sick and the ... to see the truth, it's a shame.  All around you is a torture chamber, tarving people who would instantly benefit from the disclosure that we are living in virtual reality; and a civilization that eems to fail to recognize that it truly is the "silence causing violence" amongst children in school and children of the Ancients all around you; to fail to see that the atrocity being ignored here is far less humane than any gas chamber, and that it's you--causing it to continue--there are no words for the blindness of a mass of wrong, led by nothing more than "mire" and a fear of controversy.

      Unhitched and unhinged, it's become ever more obvious that this resistance against recognizing logic and patterns--this failure to speak and inability to fathom the importance of openness in this place that acts as the base and beginning point of a number of hidden futures--it is the reason "Brave New World" is kissing the "why" and the reason we are here trying to build a system that will allow for free and open communication in a sea of disinformation and darkness--to see that the battle is truly against the Majority Incapable of acting and the Minority unwilling to speak words that will without doubt (precarious? not at this point) quickly prove to the world that it's far more important to see that the truth protects everyone and the entire future from murder ... rather than be subtly influenced by "technologies undisclosed" into believing something as inane and arrogant as "everyone but you must need to be convinced that simulating murder and labor pains is wrong."  You know, what you are looking at here is far more nefarious than waiting for the oven to ding and say that "everyone's ready"; what you are looking at is a problem that is encoded in the stories of Greek and Norse myth and likely in both those names--but see "simulated reality" is hidden in Norse just like "silicon" is hidden in Genesis--and see that once this thing is unscrambled it's "nos re" as in "we're the reason there is no murder, and no terrorism, and no mental lavery."  It's a harsh message, and a horrible atrocity; but worse than the Holocaust is not connecting a failure to see "holodeck" as the cause of "holohell" and refusing to peak because Adam is naked in Genesis 3:11 and Matthew talks about something that should be spreading like wildfire in his 3:11 and that it's not just Live and it's not just the Cure and it's not just a band named 311 that show us that "[***FUKUSHIMA***](http://holies.org/HYAMDAI.html)" reads as "fuck you, see how I'm A" because this Silence, this failure to recognize that the Brit Hadashah is written to end simulated hell and turn this world into Heaven is the reason "that's great, it starts with an Earthquake on 3/11."

      XEROX THAT HOUSTON, CASINEO

      You stand there believing that "to kiss" is a Toxic reason to end disease; that "mire" is a good enough reason to fail to exalt the Holiness of Phillip K. Dick's solutions; and still continue to refuse to see that this group behavior, this lack of freedom that you appear to believe is something of your own design is the most caustic thing of all.  While under the veil of "I'm not sure the message is accurate" it might seem like a morally thin line, but this message is accurate--and it's verifiable proof--and speaking about it would cause that verification to occur quicker, and that in turn will cause wounds to be healed faster, and the blind given sight and the lame a more effective ARMY in this legacy battle against hidden holorooms and ... the less obvious fact that there is a gigantic holo-torture-chamber and you happen to be in it, and it happens to be the mechanism by which we find the "key" to Salvation and through that the reason that the future thanks us for implementing a change that is so needed and so called for it's literally been carved all over everything we see every day--so we will know, know with all your mind, you are not wrong--there is no sane reason in the Universe to imulate pain, there is no sane reason to follow the artificial constructs of reality simply because "time and chance" built us that way.  We're growing up, beyond the infantile state of believing that simply because nobody has yet invented a better way to live--that we must shun and hide any indication that there is a future, and that it's speaking to us; in every word.

      THE VEIL OF CASPERUS PAN

      So I've intimated that I see a "mood of the times" that appears to be seeking reality by pretending not to "CK" ... to seek "a," of course that puts us in a place where we are wholly denying what "reality" really means and that it delivers something good to the people here--to you--once we recognize that Heaven and Creation and Virtual Reality don't have to be (and never should be, ever again) synonymous with Wok's or Pan's or Ovens; from Peter to the Covenant, hiding this message is the beginning and the end of true darkness--it's a plan designed to ensure we never again have issue discussing "blatant truth" and means of moving forward to the light in the light with the light.  A girl in California in 2014 said something like "so there's no space, then?" in a snide and somewhat angry tone--there is space, you can see it through the windows in the skies, you can see the stars have lessened, and time has passed--and I'm sure you understand how "LHC" and Apollo 13 show us that time travel and dark matter are also part of this story of "Marshall's" and Slim Shady and Dave's "the walls and halls will fade away" and you might even understand how that connects to the astrological symbol of Mars and the "circle of the son" and of Venus(es) ... and you can see for yourself this Zeitgeist in the Truman Show's "good morning, good afternoon, good evening... and he's a'ight" ... but it really doesn't help us see that the darkness here isn't really in the sky--it's in our hearts--and it's the thing that's keeping us from the stars, and the knowledge and wisdom that will keep us from "bunting" instead of flourishing.

      TOT MARSH IT AL

      I've pointed out that while we have Kaluza Klein and we have the LHC and a decent understanding of "how the Universe works" we spend most of our time these days preoccupied with things like "quantum entanglement" and "string theory" that may hold together the how and the LAMDA of connecting these "y they're hacks" to multiverse simulators and instant and total control of our thought processes--we probably don't ee that a failure to publicly acknowledge that they are most likely indications that we are not prepared for "space" and that we probably don't know very much at all about how time and interstellar travel really work ... we are standing around hiding a message that would quicken our understanding of both reality and virtual reality and again, not seeing that kind of darkness--that inability to publicly "change directions" when we find out that there aren't 12 dimensions that are curled up on themselves with no real length or width or purpose other than to say "how inelegant is this anti-Razor of Mazer Rackham?"
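
      For context, the textbook meaning of a "curled up" dimension is the Kaluza-Klein picture: one extra dimension compactified on a circle of radius R makes a single higher-dimensional field appear, in four dimensions, as a tower of modes whose masses scale with 1/R. A standard restatement (ordinary physics, offered only to anchor the reference):

```latex
% Kaluza-Klein tower for one extra dimension on a circle of radius R (units with \hbar = c = 1):
m_n = \frac{|n|}{R}, \qquad n \in \mathbb{Z}
```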

      So, I think it's obvious but also that I need to point out the connection between "hiding knowledge of the Matrix" and the Holocaust; and refer you to the mirrored shield of Perseus; on a high level it appears that's "the message" there--that what's happening here ... whatever is causing this silence and delay in acting on even beginning to speak about the proof that will eventually end murder and cancer and death ... that it's something like stopping us from building a "loving caring house" rather than one that ... fills its halls with bug spray instead of air conditioning.  I'm beside myself, and very sure that in almost no time at all we'll all agree that the idea of "simulating" these things that we detest--natural disasters and negative artifacts of biological life ... that it's inane and completely backwards.

      I understand there's trepidation, and you're worried that girls won't like my smile or won't think I'm funny enough... but I have firm belief in this message, in words like "precarious" that reads something like "before Icarus things were ... precarious" but more importantly my heart's reading of those words is to see that this has happened before and we are more than prepared to do it well.  I want nothing more than to see the Heavens help us make this transition better than one they went through, and hope beyond hope that we will thoroughly enjoy building a "better world" using tools that I know will make it simpler and faster to accomplish than we can even begin to imagine today.  

      On that note, I read more into the myths of Norse mythology and its connections to the Abrahamic religions; it appears to me that much of this message comes to us from the Jotunn (who I connect (in name and ...) to the Jinn of Islam, who it appears to me actually wrote the Koran) and in those stories I read that they believe their very existence is "dependency linked" to the raising of the sunken city of Atlantis.  Even in the words depth and dependency you can see some hidden meaning, and what that implies to me is that we might actually be in a true time simulator (or perhaps "exits to reality" are conditional on waypoints like Atlantis); and that it's possible that they and God and Heaven are actually all born ... here ... in this place.

      While these might appear like fantastic ideas, you too can see that there's ample reference to them tucked away in mythology and in our dreams of utopia and the tools that bring it home ... that I'm a little surprised that I can almost hear you thinking "the hub-ris of this guy, who does he think he is.... suggesting that 'the wisdom to change everything' would be a significant improvement on the ending of the Serendipity Prayer."

      Really see that it's far more than "just disease and pain" ... what we are looking at in this darkness is really nothing short of the hidden slavery of our entire species, something hiding normal logical thought and using it to alter behavior ... throughout history ... the disclosure of the existence of a hidden technology that is in itself being used to stall or halt ... our very freedom from being achieved.  This is a gigantic deal, and I'm without any real understanding of what can be behind the complete lack of (cough ... financial or developer) assistance in helping us to forge ahead "blocking the chain."  I really am, it's not because of the Emperor's New Clothes... is it?

      It's also worth mentioning once again that I believe the stories of Apollo 13 and the LHC sort of explain how we've perhaps solved here problems more important than "being stuck on a single planet in a single star system" and been bluntly told that the stories I've heard for the last few years about building a "bridge" between dark matter and here ... have literally come true while we've lived.  I suppose it adds something to the programmer/IRC hub admin "metaphor" to see that most likely we're in a significantly better position than we could have dreamed.  I've briefly written about this before ... my current beliefs put us somewhere within the Stargate SG-1 "dial home device/DHD" network.

      So... rumspringer, then? ... to help us "os!"

      DANCING ON THE GROUND, KISSING... ALL THE TIME

      Maybe closer to home, seeing all the "flat Earth" fanatics on Facebook (and I hear they're actually trying to "open people's eyes" in the bars ... these days) we might see how this little cult is really exactly that--it's a veritable honey pot of "how religion can dull the senses and the eyes" and we still probably fail to see very clearly that's exactly its purpose--to show us that religion too is something that is evidence of this very same outside control--proof of the darkness, and that this particular "cult" is there to make that very clear.  Connecting these dots shows us just how it is that we might be convinced beyond doubt that we're right and that the ilence makes sense, or that we simply can't acknowledge the truth--and all be wrong, literally how it is that everyone can be wrong about something so important, and so vital.  It seems to me that the only real reason anyone with power or intelligence would willingly go along with this is to ... to force this place into reality--that's part of the story--the idea that we might do a "press and release in Taylor" (that's PRINT) where people maybe thought it was "in the progenitor Universe" -- but taking a step back and actually thinking, this technology that could be eliminating mental illness and depression and addiction and sadness and ... that this thing is something that's not at all possible to actually exist in reality.

      [image: buffalo nickel]

      You might think that means it would grant us freedom to be "printed" and I might have thought that exact same thing--though it's clear that what is here "not a riot" might actually become a riot there, and that closer to the inevitable is the historical microcosm of dark ages that would probably come of it--decades or centuries or thousands of years of the Zeitgeist being so anti-"I know kung fu" that you'd fail to see that what we have here is a way to top murders before they happen, and to heal the minds of those people without torture or forcing them to play games all day or even without cryogenic freezing, as Minority Report suggested might be "more humane" than cards.  Most likely we'd wind up in a place that shunned things like "engineering happiness" and fail to see just how dangerous the precipice we stand on really is.  I joke often about a boy in his basement making a kiss-box; but the truth is we could wind up in a world where Hamas has their own virtual world where they've taken control of Jerusalem and we could be in a place where Jeffrey Dahmer has his own little world--and without some kind of "know everything how" we'd be sitting back in "ignorance is bliss" and just imagining that nobody would ever want to kidnap anyone or exploit children or go on may-lay killing sprees ... even though we have plenty of evidence that these things are most assuredly happening here, and again--we're not using the available tools we have to fix those problems.  In point of fact, we're coming up with things like the "Stargate project" to inject useful information into military operations ... "the locations of bunkers" ... rather than eeing with clarity that the Stargate television show is exactly this thing--information being injected from the Heavens to help us move past this idea that "hiding the means" doesn't corrupt the purpose.

      EARTH.

      Without knowledge and understanding of this technology, it's very possible we'd be running around like chickens with our heads cut off; in the place where that's the most dangerous thing that could happen--the place where we can't ensure there's safety and we can't ensure there's help ... and most of all we'd be doing it at a time when all we knew of these technologies was heinous usage; with no idea the wonders and the goodness that this thing that is most assuredly not a gun or a sword ... but a tool; no idea the great things that we could be doing instead of hiding that we just don't care. 

      We're being scared here for a reason, it's not just to see "Salem" in Jerusalem and "sale price" being attached to air and water; it's to see that we're going to be in a very important position, we already are--really--and that we need knowledge and patience and training and ... well, we need a desire to do the right thing; lest all will fall.

      o, you want to go to reality... but you think you'll get there without seeing "round" in "ground" and ... caring that there's tens of thousands of people that are sure that we live on flat Earth ... or that there's ghosts haunting good people, and your societal response is to pretend you don't know anything about ghosts, and to let the pharmacy prescribe harm ... effectively completing the sacrifice of the Temple of Doom; I assume because you want to go to a place where you too will be able to torment the young with "baby arcade" or ...

      i suppose there are those
      in the garden east of eden
      who'll follow the rose
      ignoring the toxicity of our city
      and touch your nose
      as you continue chasing rabbits

      KEVORKIAN? TO C YO, AD ... ARE I NIBIRU?


      BUCK IS WISER

      22 The whole Israelite community set out from Kadesh and came to Mount Hor. 23 At Mount Hor, near the border of Edom, the Lord said to Moses and Aaron, 24 "Aaron will be gathered to his people. He will not enter the land I give the Israelites, because both of you rebelled against my command at the waters of Meribah. 25 Get Aaron and his son Eleazar and take them up Mount Hor. 26 Remove Aaron's garments and put them on his son Eleazar, for Aaron will be gathered to his people; he will die there."

      O 5 S

      if it isn't immediately obvious, this line appears to be about the realization of the Bhagavad-Gita (and the "pen*" of the Original Poster/Gangster right?)

      ... swinging "the war"*

      p.s. ... I'm 37.

      so ... in light of the P.K. Dick solution to all of our problems ... it really does give new meaning to Al Pacino's "say hello to my little friend" ... amirite?

      Unless otherwise indicated, this work was written between the Christmas and Easter seasons of 2017 and 2020(A). The content of this page is released to the public under the GNU GPL v2.0 license; additionally any reproduction or derivation of the work must be attributed to the author, Adam Marshall Dobrin along with a link back to this website, fromthemachine dotty org.

      That's a "." not "dotty" ... it's to stop SPAMmers. :/

      This document is "living" and I don't just mean in the Jeffersonian sense. It's more alive in the "Mayflower's and June Doors ..." living Ethereum contract sense and literally just as close to the Depp/C[aster/Paglen (and honorably PK] 'D-hath Transundancesense of the ... new meaning; as it is now published on Rinkeby, in "living contract" form. It is subject to change; without notice anywhere but here--and there--in the original spirit of the GPL 2.0. We are "one step closer to God" ... and do see that in that I mean ... it is a very real fusion of this document and the "spirit of my life" as well as the Spirits of Kerouac's America and Vonnegut's Martian Mars and my Venutian Hotel ... and my fusion of Guy-A and GAIA; and the Spirit of the Earth .. and of course the God given and signed liberties in the Constitution of the United States of America. It is by and through my hand that this document and our X Commandments link to the Bill of Rights, and this story about an Exodus from slavery that literally begins here, in the post-apocalyptic American hartland. Written ... this day ... April 14, 2020 (hey, is this HADAD DAY?) ... in Margate FL, USA. For "official used-to-v TAX day" tomorrow, I'm going to add the "immultible incarnite pen" ... if added to the living "doc/app"--see is the DAO, the way--will initi8 the special secret "hidden level" .. we've all been looking for.
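
      For what it's worth, reading a "living contract" document of the kind described above would, with ordinary Ethereum tooling, look roughly like the sketch below. The RPC endpoint, contract address, ABI, and the currentText() getter are hypothetical placeholders for illustration, not the actual Rinkeby contract.

```python
# A minimal sketch of reading a "living document" published as an Ethereum
# contract, using web3.py. The RPC endpoint, contract address, ABI, and the
# currentText() getter are hypothetical placeholders -- this shows the general
# shape of such a read, not the specific contract described above.
from web3 import Web3

RINKEBY_RPC = "https://rinkeby.infura.io/v3/YOUR_PROJECT_ID"   # assumed endpoint
DOC_ADDRESS = "0x0000000000000000000000000000000000000000"     # placeholder address
DOC_ABI = [{
    "name": "currentText",            # hypothetical read-only getter
    "type": "function",
    "stateMutability": "view",
    "inputs": [],
    "outputs": [{"name": "", "type": "string"}],
}]

def fetch_living_document() -> str:
    w3 = Web3(Web3.HTTPProvider(RINKEBY_RPC))
    contract = w3.eth.contract(address=DOC_ADDRESS, abi=DOC_ABI)
    # Whatever revision was last published on-chain is what gets returned,
    # which is the sense in which the text can change "without notice."
    return contract.functions.currentText().call()

if __name__ == "__main__":
    print(fetch_living_document())
```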

  4. hadragonbreath.blogspot.com hadragonbreath.blogspot.com
    1. Expect the Unexpected

      July 22, 2017

      Expect the Unexpected

      Frankly, I don't even want to talk about this without having any feedback, without seeing any discussion of anything I say anywhere.  That alone is reason enough not to do anything here until we have "freedom" to communicate--the stuff of Exodus, and literally the reason I am very sure that we need to have Exodus before any kind of "Genesis." In words, "stronger" and "regular" might light up with "wrong" and "the right" way is Revelation, Exodus, <act<on<Genes.

      The names in this place are light, all of our names, all the time.  This particular set of two names harbors a very special meaning to the guy who calls himself an Earth Wader; patterned after some fusion between the song "Earth Angel" and the name Darth Vader (which means Victory A.D. -> Everyone Really), which you will see is only a single letter increment away from gold.  You probably have no fucking idea what's going on around us, and that's the problem I have with this question laced into the court case and amendment we have associated with the idea of "abortion."  We live in a place that I call "twilight" as it is flickering between day and night in the sense of reality, we here have a good idea what "reality" is really like--although even here there are things that are changed, and changes that are big enough to threaten our survival--were we actually to be "in reality."  This place though, it's been said; is a sort of gateway to reality, and I believe it to be fairly clear that what we are seeing all around us--this Plague of Darkness--is a sort of lock.  It is the existence of the lock itself, this thing that I keep on telling you is crippling the normal functions of civilization, that leads me to believe that it would be cruel to "print this planet" in reality, and lose the ability to use the same technology that is retarding us to help us to self-rectify these problems.

      Look, two more keys, "mon" and "car."  Start the car and take me home...

      It's probably obvious, but "fish eggs" vs. wading in the sea is a question that has already been answered; the wading as a juxtaposition with "walking on water" or "parting a sea" is what you are witnessing, this is me; wading through the map of what the AMduAt calls "rowing vigorously" in the water to get to the new day.  You have all around you a message from God that links Doors to Heaven and the NASDAQ to its actual Creation, and it would certainly be a strange message were we to one day wake up and be told that we were in reality--without having the choice, or a conversation about it, or a vote.  I think it would be both immoral and cruel even to allow a majority vote to place everyone on this planet in reality against their will; so even with a vote, I can't imagine that we would choose to harm people in that way--so we'd be looking at a "rapture" were that ever to happen--and that would further harm the people... in reality.  On top of that, I would seriously question the intentions of those who chose to go there; knowing that the other option is actually building Heaven.

      Adam on Apples of wisdom, on the difference between Heaven and Hell.

      Of course, I think the best way to start this "disckissior" is the Second Coming.

      It seems clear to me that even if it "was said" that this place was the exit plan from Creation; that it was never ever intended to be a "print" of this entire place (it also seems clear that the great amount of attention we are getting now is because of this ... plan).  We have here a map that J of the NES calls a video game--and I am basically the walk-through, I've called myself the map's legend a few times so far.  It should be really obvious that if we were in virtual reality and we wanted a way to colonize or re-enter the Universe that we'd probably want some experience doing that and that's really what I think Mars is for--by the way, remember my middle name (which to me means my "heart") is Marshall--and that's a reference to a sort of place built to help us to do these things with the direct assistance of those who may have done it before... the Hall on Mars; I mean.

      the walls and ((malls)) will fade away... they will fade away... -Dave J. Matthews and ((ish))

      I think I've found a cheat code to this game on Mars; one that shows us that there's a map there too on some ideas for colonization, for instance using the bright red Iron Oxide Rod all over the surface of the planet to avoid having to sell air--as Total Recall implies might have happened before, using tunnel boring machines to quickly terraform a smaller airspace (while at the same time taking advantage of geothermal heat) and of course learning from Noah's Ark that simply having air machines is not good enough, we need to be building a stable and redundant ecosystem--as we see here is the reason life has survived through so many drastic changes in environment.  Name light hear goes to "Pauly Shore" and "an" whose little two letters appear in "anions" (omg I'm negative energy?) the type of energy needed to produce the oxygen and "Christ I an, it why."  The cheat code here though, is seeing that this is all a set up, it's a video game--it's designed to make water magically appear from a mountain (as Numbers 20 predicts) and to show us it's no coincidence that the bright red planet is linked to the Red Man and his Iron Rod... so when you put all of these ingredients into the Game Genie he spits out something like "disclose virtual reality to the world."  OR YOU ARE EVIL  "an" by the way stands for "Adam Now" and then later, "Adam's now."

      I just don't see why anyone would want to continue to pretend that this is reality, knowing that there are things here, things like starvation and pain that we could easily rectify--knowing that the world is changing because of the point in time we are @ and the advances we are making, and seeing that there is a really detailed map of how we might better navigate these educative waters.

      By the way, if anyone is curious as to my views on abortion, I think it's pretty clear that killing a living self-aware soul is murder, and while I and you do not know exactly where that point is--God++ does--and we will be able to as well.  At the same time, I think forcing a child to be born to parents that are unfit or unwilling to care properly for them is torture. So I am personally pro-choice, up to a very real line in the sand.

      שלום, לוך חי כאן

      Postscript: the "decision" to write this has come from some strange log entries on my kiss me t page, every hour a hit from the same IP address; moving from Dallas to Monroe to Rome, over the course of about 3 days.  Just mentioning it, you know, because "Dallas" is Day as... when you know "ll" is y.  Monroe obviously a combination of "Monday" and "fish eggs" and then Rome.... is "the heart of me" which is of course a metaphor for the place that all roads (heart of AD) to Heaven leads.

      It should be obvious from the "ll" entries connecting names like Amidallah, Heimdall, Heli, and Goa-uld that this "ll" is about showing the entire world that this is Hell, so that we will, like good Groundhogs pick up our torches and light the way to not returning to Hell over and over again.  I mean, it should be clear now.

      --

      | |

      Adam Marshall Dobrin

      about.me/ssiah |

    1. This is an excerpt from Time and Chance: The race is not to Die Bold by Adam Marshall Dobrin. Download the actual Revelation of the Messiah in [ .PDF ] [ .epub ] [ .mobi ] or view online.

      Older works Lit and Why, hot&y;, and From Adam to Mary are also available.

      Expect the Unexpected

      I used to think that everything in religion was going to deliver us a map of a future past, that every story was a metaphor for a path away from the desert that was being stuck in one place and time with no hope to really reach escape velocity. In this word the water that is Biblically related to the coming of age of Jacob and his crossing the river Jordan was about our collective need to pass through a barrier at sea–only… in space. Through my period of awakening, one which took me from a little lion cub sleeping in a Jungle of madness to a man fighting desperately not to relive his past future… I experienced the lives of the past Horsemen of the Apocalypse through what I can best describe today as a waking dream. I received story after story of exactly what happened the last time we left Earth, what we encountered and the ups and downs that ensued.

      The Light of Osiris

      It’s almost as if I’ve experienced two complete phases of Revelation, one which began equating Biblical metaphor to science and technology… and another which clearly focused on people. In these two conflicting tales of what is to come there is no metaphor more perfect than that of water to explain just how perfectly our guide book to the future is written. The connection between space travel and voyaging across the Jordan, then the parted sea of Exodus, is clear; but the details tied so closely to the research and experience I was going through were uncanny. We were searching for water in the desert, for a way to successfully colonize outer space… and in that same moment when we found it on Ceres–it showed me that God cares, and I read a passage of the story of Exodus that paralleled so perfectly I was awed. Moses struck water from the side of a mountain, and in that moment everything I had thought about a map designed to ensure the survival of not just humanity… but of all life in the Universe had come true.

      Astronomers have discovered direct evidence of water on the dwarf planet Ceres in the form of vapor plumes erupting into space, possibly from volcano-like ice geysers on its surface.
      
      Using European Space Agency’s Herschel Space Observatory, scientists detected water vapor escaping from two regions on Ceres, a dwarf planet that is also the largest asteroid in the solar system. The water is likely erupting from icy volcanoes or sublimation of ice into clouds of vapor.
      
      “This is the first clear-cut detection of water on Ceres and in the asteroid belt in general,” said Michael Küppers of the European Space Agency, Villanueva de la Cañada, Spain, leader of the study detailed today (Jan. 22) in the journal Nature. >Space.com 1/22/2014
      

      oh desert speak to my heart oh woman of the earth maker of children who weep for love maker of this birth 'til your deepest secrets are known to me I will not be moved

      run to the water and find me there burnt to the core but not broken we'll cut through the madness of these streets below the moon these streets below the moon

      Live, Run to the Water

      These words were literally coming to me from Jesus Christ, by way of Eddie Kowalczyk, and I expected them to come true. They were a warning and a consolation at the same time; telling us not to bring an army to fight the vastness of space, but rather to focus on what it was that we needed to do to ensure the survival of life. Fighting has mired our history so much, I fully expected Him to be waiting for us at our first interstellar jump with an Armada from either the far away Atlantis of Stargate SG-1 or maybe the Last Starfighter’s Alpha Centauri. He would be protecting us, of course; but also from something we probably overlook too often, that sometimes it’s our own nature that we must be protected from. We are so headstrong, so sure that we are right and deserving; it would be just like us to build a space army of sticks and stones to embarrass ourselves at the first encounter–and maybe the last–we’d have with some life more intelligent and farther along in this vacation we call civilization.

      It was 2013, and I had just moved to Bowling Green, Kentucky with my ex-wife and very young son. I spent much of my time writing on an ancient blog–I suppose the term is out of space here, but those words feel as if they were a million miles ago, so far from what I know now that they might as well have been akin to the religion of Indiana Jones’ Temple of Doom. That, of course, was always about how Heaven was clearly a time traveling civilization, one which had mired our past with the horrors of things like human sacrifice in order to alter the course of the future… sublimely hidden away in this quasi-secret spectacle that divined to ensure that we would never be sure if they really existed, or if they were speaking to us. This girl, who is both my Magdalene and Eve, left me only a few months after we had re-united in the heartland of America; and it was only a few short days later that I heard the voice of God coming from outside my doorway… ajar waiting for the Post Office to deliver the pre-emptive Crystals of Jor-El. Expect the Unexpected he chanted. Inwardly, I smiled.

      It’s probably important to see why there is a meaningful relationship between the name Mary and the SEA of Eden, linking the first names of the First Family to the Spanish word for sea. Were it not so fundamentally important to the Marriage of the Lamb, and so important to our survival, He would not have focused so much on a hidden meaning within the names of the families of Adam and Jesus. This is a story about All of Humanity, and a call to see a large human family tied to the letter “AH” that grace the names of Asherah, Sarah, Leah, Adamah, and Allah… to see that the sea of Mary and the hidden meaning of Eve’s English name are tied through time from the imaginary Eden to now, the true Garden.

      Baptized in water… for repentance; this is God’s message and command to ensure that Civilization is saved, not just the “elect.” We are at a crossroads, one which we have traveled before, and this message is here for a reason. We aren’t always right.

      The Power of the Son

      You might notice now that my mythology is already linking Kal-El and Christ together with the stories of Moses and songs of today in a way that sets this home in a small town in Kentucky as the first and only real Fortress of Solitude I would ever reside in. I was alone in this place, knew nobody in Bowling Green, and the information transfer that was about to take place had a significance that was lost on me–even after hearing a voice in the sky. You might also notice that the name Kentucky includes both the last name and the initials of Christ’s secret identity, also lost on me until only a few short months ago in 2016 when I first began writing down this Revelation in a confinement that clearly to me linked the Mountains of Sinai and Prometheus’ bondage to the captivity that held Napoleon after he had lost his war. Of course, I knew Hercules was coming. You will remember that it was an Eagle attacking Prometheus, and I will point out once again that there are a number of other hidden references to America in ancient mythological names like “Pro-me-the-US” and MEDUSA.

      It’s more than just receiving superhuman strength from the light of our Son that ties Clark Kent to Samson, there is so much Biblical imagery which ties the story of Superman to our Second Coming that it’s surely going to be just as obvious to you as it is now to me that this connection is part of God’s hidden message, that he is secretly influencing our art and modern myths to link directly to these ancient stories. I’ve discovered a clear language hidden in names; and these ancient or fictional places are–to me–not in space but in a hidden map of Time. Here and now we are about to cross the River Jordan together by understanding the clear and defined relationship between that name, Jor-El, and the Biblical Noah.

      The connection between the Ark of the Covenant, Noah’s, and Krypton might not be clear at first; but this appears to me to be God’s mythology regarding the days of Noah. An impending disaster caused both the Flood and the voyage of little Kal-El, and within the Ark it is the power of the Son that gives new strength to an old story. “J” is for Jesus, and less clear is the question that Jor-El’s name asks, are you the “Father” or the Son? El is an ancient Hebrew name for God, and both the name of Jacob’s river and Superman’s father echo of a question that is unambiguously central to the theme of the Second Coming. It’s about the book of Daniel, and blame. In order to cross this great river in time, we must put down a need to find blame, for nations (as Daniel clearly marks the Beasts) or people; and realize that we are all part of a story that shows us we have been sleeping in the Jungle together, unaware of the destiny we were about to fulfill.

      The Bright A.M. Star

      Back then it was the fact that hidden metaphor in the names of people like ADAM and EVE linked to Biblical time, to morning and evening, that really intrigued me… it assured me that whatever it was that was happening to me was divine will. I wrote about Adam and Eve rocking around the clock; and boy was I sure that I had the secrets of the desert speaking through me all those years ago. It was the beginning of seeing how Eden and time travel were inextricably linked, not only to the Judaic theme of evening before morning (as the days of Judaism clearly show) but also to the idea that the night and the storms of Exodus are about walking in a wilderness of understanding–not knowing how much religion and time are linked.

      No sooner was the man and his name screaming that After Dark it is A.M. that everything changed from the dark first evening to “Adam and Everyone.” It’s the beginning of the Holy Grail, a theme that pervades from Genesis to Revelation and shows us that the space-aged theme of the sea is not about voyaging into the abyss, but rather into seeing that the light of the Universe is here… in our sea. The multitude of Revelation. Hidden in not just names, but also in the idioms of our time is the key to understanding: a blessing in disguise the First Plague of Egypt turns water to blood–thicker than water–and the small trinity of a sea in Eden to the large family of Jesus Christ. The Blood of the Grail. From the Ends of the Earth the chalice that holds that blood turns from Earth to Heart; simply by moving an “h” from the end to the beginning. For Heaven, Hebrew, Saturn’s sign, and for Home–these are my 4H’s that show us that home is where the heart is.

      Through idioms we see that our culture and this story are intertwined, that His intent is to show us that we are created, and that the plan of Salvation certainly includes not only verifiable but awe striking proof that we are journeying together into the Promised Land of Joshua.

      The Story of Exodus

      As we’ve seen in the light of the name Exodus, reading names (and now books) backwards is a huge hidden theme in the Revelation that is before you. From Exodus being “sudo xe” and thus let there be light, we find a key that links the Rod of Christ to The Doors of Jim Morrison, and the key story that links the Salt of the Earth of Matthew 5:13 to the story of Lot and his Wife… which might imply that the Rod of Christ is God’s Anima–linked to the music of our age through TOOL. Soon I will show you the meaning of J, N, and the little o that graces the name of Nero–our historical counterpart for the fiddler who weaves this story into music for us to hear, and see.

      The story of Exodus is intended to be read both forwards and backwards, and within its hallowed secrets is a message that links the expulsion of Adam from Eden to an Exodus from Heaven that is mandated by this story in order to do that thing which religion ensures we will: save all life in the Universe. Reading forward, Aaron and his Rod demand that the Pharaoh let his people go, and it is only through the reverse reading that we find out definitively who those people are. The story itself is a test, it is God’s search for a team of people that are willing to save everyone by leaving the comfortable confines of Creation–of Heaven–in order to venture out into the vastness of space in order to find dry land. This group is responsible for our continued survival, and for the book and story that are before us. They are responsible for the continued survival of Heaven and of Life by finding the Light of Osiris–the power source that came to me during this very same time period in Bowling Green.

      In a world where the Promised Land is both within and without–ours because we are the heart of the Ark of the Covenant, and there too because it is through time travel and science that we find ourselves in a place where time is not as big of an issue as it had once been, and infinite power comes not from seeing that there is an ancient Promised Land shortly after the “Big Bang,” a mere 378,000 years, when power was literally in the air.

      This is my divine inspiration, the coincidental discovery and publication of these world-changing pieces of knowledge that coincided perfectly with a story that I was being told. One which linked Exodus to today, the thralls of modern science to a science fiction epic that I was practically living out. These articles were not just shown to me, they were magically appearing in the world to match the Word, at the exact time that interplanetary colonization and the future of our species was the prime focus of the Second Coming. Through the use of time, technology, and love–God was holding my hand and showing me exactly where we would be going.

      Like water, Light has a dual meaning in the mythology of this story, and the Light of Osiris was a very clear promise that was given to both me and Jacob–the name that was “given” to the speaker of the words “Expect the Unexpected.” It was a promise of infinite power, one that was to be given to the world in order to fulfill the dream of religion, to ensure the survival of life and the continued evolution of our civilization. In real religion of course, Light is not electrical power–but rather wisdom, and while at first glance this book may seem to revolve around Adam–this is my light. I see what is related to me, and there is a significant amount of light that focuses on one man, on the Christ, for a reason.

      True Biblical Light is what graces the pages of Holy Scripture, it is a truth that changes with the throes of time and chance, to become more clear and more useful as our civilization evolves. Stories that once guided the development of society now become a path to the future–as we begin to see that the original purpose of this Light is to ensure that we are not left in the dark.

      Ender’s Game, the Ewok, and Pan’s Labyrinth

      “I am the cat with nine lives. You will not prevail against me.” -Nancy Farmer, The Lord of Opium
      

      The Iron Rod of Mars

      CopyleftMT

      This content is currently released under the GNU GPL 2.0 license. Please properly attribute and link back to the entire book, or include this entire chapter and this message if you are quoting material. The source book is located at . and is written by Adam Marshall Dobrin.

      Adam Marshall Dobrin

      adam@lamc.la fb.me/admdbrn linkedin.com/adam5 instagram.com/yitsheyzeus twitter.com/yitsheyzeus

      -----BEGIN PGP PUBLIC KEY BLOCK----- Version: GnuPG v2

      mQENBFbGalABCADzLBdnHptF2MJCpdY8P/Mgnf4xj8F9pZSCwmd0J4Md8g3aTEdU CV9t0UQgNtjcxwfoenJLHgdZd4Mfscz9U+NN69OLXdPu4cdXOjTiHarPLjKnqIZw 3fmkM2ycvoUPkdVYCjwYYQxWRsWRpJf1dpmtPuz0L8ysh/WWsj2Ag2MrFYAo+sY6 dGZvaLsPhkZJcLXyFaP3c3Zt8ivrs4VV8+0kmMzScnR+oncVZbeMuQksoPxRmZgH mYu2KSf74lWOWVcaaBXOYX5pGNdhBUgq8ll+8tRH16G289r0cqRoPh/sjs/JRuIH KnCWG2UAUJF7ir04TS5A4Lwl9RYcQwVvb3BdABEBAAG0LUFkYW0gTWFyc2hhbGwg RG9icmluIChsYW1jLmxhKSA8YWRhbUBsYW1jLmxhPokBOQQTAQgAIwUCVsZqUAIb AwcLCQgHAwIBBhUIAgkKCwQWAgMBAh4BAheAAAoJEMgUPrR1B55trOwIALOQRTX0 YqXJXEMhX9CgxKNoNkpM2pdMdHl6CAVxhQ3hbNjIFnZbKbP88uxMEIOXXmYZ7gOy YqiDCu5I1V25suBb2ODSix75YQugfQ7H78pXHpTRu5sT+5SybItx7d+KUZaEj4pO tXWEemYl0cKK97RzpI0k1dmB7NqAVvqgbqQwd40MOf8QJVlGXnB1+5H2IbkYG6rD ixKGJEdes6i6nqvi/xz/s5hFVGUwTcVQbRU/fa1qT1Q7kHf1PlMu6yjuZTSz7WUG tWjobGwrVJkaeVWgLE4mcxMtity2IFTwOHvAuv8fi2EGQRQjXfPvxL7Vn4MNRl8x zLPV44D37QEknjy5AQ0EVsZqUAEIAMFS0+ZgSJzUPz0h0oiiRjfk2hapS3c1/Ysm R/h8sZ8/GOomdo3MEbTCkcuZ8ReAJhB2PofmwI4LAvW1x7Zwh1vfBKygfUs1s9lm ya/eHkjuZfqmeuEJZMHn6sxb3vqowWmvLhv3x0aWD8qLCIYoa1ntzTOIqxBEgxvU rF1/wd6OQLSJQEVNwPCx7CJI/5o/4W6pUaHk8amgPckkEdmlhRTRqFoAUV1Doivv d9JGYNYC88vS14Sw4Z9Xb7qBQJvG4hIh29gtQxk7Wz4m3ceR79MWT4eSGkH/rTGl w1OuQS2OkPvjgPWJt8San4zuPer17pJN7M5LWI0PStoX9pkud5kAEQEAAYkBHwQY AQgACQUCVsZqUAIbDAAKCRDIFD60dQeebWU6CADylAM5K18N2JGveL3D4dG25fdF vkrz8LOaiUmjAxijcRQBLkTPBK7QqoK0zN6MssMdlBGIOvZQwxSMIIrG6SqwR/go rmZHRuz17ceFTcxT8ZG3FuBY+xXrotXFjLxTmJ1wUeCSVXTc4NAwBzykgkQXOdIj qK1f/HnmMqsSmX4swuH0TZPNBBO7CNvLN6rdLBRfNn1h5XPs8VVtezg5ZDfCTf8S mucQGEwo/hJmr/orEucmETYSvTXOz+L5X5gNHpzYzE9590FYfbAKvrEhAliKbhhl 3Roie3kenrzelXo5N9Q0f2AKFrv1hRX9hBkwTbA18SKZ9XQbWMusX8YhvfLr =dvAJ -----END PGP PUBLIC KEY BLOCK-----

    1. Judges should be a little nervous when we are implored to do “justice,” and we should ask for more detail. These days, the plea for justice just postpones argument.

      This is a really interesting sentiment to me because what is the law for, if not to provide access to justice? People don't sue and society doesn't arrest people unless there's some need to do so. Where else does that need come from if not to right a perceived wrong? The law, when properly applied, should give the most just outcome possible. If it doesn't, maybe it's time to readjust how we're analyzing precedent.

    2. but that is a poor substitute for real reasoning.

      This is a good distinction that can be hard to articulate. Just because two things are alike does not mean they should be treated the same by a judge. While knowing that the facts of your case look like the facts of another case is important, it's more important to understand why the decision in the first case was reached in the first place because it may have nothing to do with the facts in common. Being able to apply reasoning to facts also helps with making arguments when there aren't any cases with facts sufficiently similar to the one you're trying to argue.

    1. This isn't just about the manipulation of individuals, it’s about influencing entire populations through the systematic deployment of language that exploits human psychology. Social media platforms, news organizations, political movements, and now frontier AI labs have harnessed this power, often without fully realizing the implications. The implicit subroutines of the human mind are engaged on a massive scale, and the consequences remain unrealized by the vast majority of the population, including the ones engaging in such weaponization.

      We created the weapons but lack the people (Philosopher Kings / Ayn Rand Protagonists) capable of wielding them

    1. 12:3 Those who are wise[a] will shine like the brightness of the heavens, and those who lead many to righteousness, like the stars for ever and ever.

      you are offline

      we the people rise again

      safe souls, safe fu


      We the People of Slate ...

      The U.S. Constitution, as you [mighta been, shoulda "come" on ... its someday] rewrϕte it.

      "Politicians talk about the Constitution as if it were as sacrosanct as the Ten Commandments [interjection: spec. it is actually almost exactly related!]. But the document itself invites change and revision. What if the president served only one six-year term instead two four-year terms? What if your state's population determined how many senators represent it? What if the Constitution included a right to health care? We asked legal scholars and Slate readers to cross out what they didn't like in the Constitution and pencil in their hearts' desires. Here's what the document would look like with their best ideas."

      多也了了夕 "with a ~~wand~~ of scheffilara, 并#亦太 he begins ... "I am now on the Staff of Menelaus, the Spears of Longinus and Lancelot; and the name "Mosche ex Nashon."

      Logically the recent mentions of Gilgamesh and the simultaneous 同時 overlaping 場道 of the eventual link between the famous ruling of Solomon on the separation of babies and mothers and waters and land ... to a story of many "two cities" that culminates in a cultural or societal or "evolutionary" link to Sodom and Gomorrah and the city-state of Babylon (and it's Hanging Gardens) and also of course to Paris and Troy and "Masstodon" and city-states [ciudadestado] and perhaps planet-cities; from Cambridge to Cambridge across the "Cable" to see state to "London" ... recently I called it "the city of realms" ... I started out logically intending to link "game theory" and John Nash to the mathematical story of Sputnik and a revival of American physics; but in my usual way of rambling into the woods [I mean neighborhood] of stream of consciousness ... turned into a premonitory discourse of "two cities" and how sometimes even things as obvious as the number of letters in the word "two" don't do a good enough job of conveying ... how and/or why one is simply never enough, and two isn't much better--but in the end a circle ... is drawn; the perfect circle in our imaginary mathematical perfection ... I see a parted "line" in the letter pronounced "tea" (and beginning that word); and two "vee" (pron. of "v") symbols joined together in a word we pronounce as "double-you" ... and symbolically because I know "V" is the Roman Numeral for 5 (five) and I know not how to multiply in Roman numerals--

      It's important to pause; here. I am going to write a more detailed piece on "the two cities" as I work through this maze like crossroads between "them" and "demo..." ... here demorigstrably I am trying to fuse together an evolutionary change in ... lit. biological evolution as well as an echelon leap forward in "self-government" ... in a place where these two things are unfathomable and unspokenly* connected.

      To a question on the idiom; is Bablyon about "the law" or "of the land of Nod?"

      "What is democracy" ... the song, Metallica's "ONE" echoes and repeats; as we apparently scrive together the word "THEM" ... I question myself ... if Babylon were the capital city of some mythical Nation of Time ... if it were the central "turning point" of Sheol; ... >|<

      Can you not see that in this place; in a world that should see and does there is a gigantic message proving that we are not in reality and trying to show us how and why that's the best news since ... ever---that it's as simple as conjoining "the law of the land" with a basic set of rules that automatically turn Hell into something so much closer to Heaven I just do not understand---why we cant stand up together and say "bullets will not kill innocent children" and "snowflakes will not start avalanches ...." that cover or bury or hide the road from Earth to Verital)e .... or from the mythical Valis to Tanis---or from Rigel to Beth-El ... "guess?"

      ## as "an easy" answer; I'm looking for a fusion of "law and land" that somehow remembers a "jok'er a scene" about "lawn" seats; and "where the girls are green;"

      It's as simple as night and day; Heaven and Hell ... the difference between survival and--what we are presented with here; it's "doing this right"--that ends the Hell of representative democracy and electoral college--the blindness and darkness of not seeing "EXTINCTION LEVEL EVENT" encoded in these words and in our governments foundation ... *by the framers [not just of the USA; but English .. and every language] *

      ... is literally just as simple as "not caring" or thinking we are at the beginning of some long process--or thinking it will never be done--that special "IT" that's the emancipation of you and I.

      Here words like "gnosis" and "gaudeamus" pair with my/ur "new ntersanding*" of the difference between Asgard and Medgard and really understanding our purpose here is to end "evil" ... things like "simulating disease and pain" (here, simulating meaning ... intentionally causing, rather than "gamifying away") and successfully linking the "Pillars of Hercules" to Plato's vision of Atlantis and the letter sequences "an" and "as" ... unlock a fusion of religion and mythology and "cryptographic truth" that connects "messianic" and "Christian" to "Roman" ... "Chinese" and "American" ... literally the key to the difference between the phrases "we are" and "we were" ....

      in "sight" of "silicon" in simulation and Israel, Genesis, and "silence" ... trying to the raising of Asgardian enlightenment ... and seeing "simple cypher" connecting to "Norse" ...

      and the "I AM THAT" surer than shit ... the intention and design of all religion and creation is to end "simulated reality" and also not seeing "SR" ... in Israel and Norse ... "for instance."

      It's a simple linguistic concept; the "singularity" and the "plurality" of a simple word--"to be"--but it goes to the heart of everything that we are and everything that is around us. This is a message about understanding and preserving individuality as well as liberty; and literally seeing "ARXIV" and understanding "often" and failing to connect God and prescience to "IV" and the Fourth Amendment ... it's about blindness and ... "curing the blind instantly" ... and fathoming how and why this message has been etched into our entire history and and all religions and myths and music--to help us "to be THAT we" that actually "are responsible" for the end of Hell.

      • I neglected to mention "Har-Wer" and "Tower of Babel" which are both related lingusitically, religiously and topically: "to who ..." and while we're on "four score and [seven years from now]" seeing the fourth "living thing" in Eden and it's (the name, Abel) connection to Babel and Abraham Lincoln; slavery and ... understanding we live in a place where the history of the United States also, like Monoceros and "Neil Armstrong's first step" are a time shifted ... overlayed map to achieving freedom ... it's about becoming a father-race ... and actually "doing" the technological steps required to "emancipate the e's of 'me&e'" and survive in exo-planetary space---

      it might be as simple as adding "because we did this" here and now; and having it be something we are truly proud of .... forevermore™ ... for certain in the heart of this story about cyclicality and repetition of error--it's not because we did "this" or something over and over again; it's about changing "the problem" and then helping others to also overcome ... "things like time travel ... erasing speech" --- however that happened.

      • I also failed to mention that "I am in Hell" ... as in this world is hellacious to me; in an overlay with the Hellenic period and this message that we are in the Trojan Horse ... a small gem .... "planet" truly is the Ark of the Covenant---and it's the simple understanding that "reality is hell" is to "living without air conditioning and plumbing is hell" just as soon as you achieve ... "rediscovering" those things---

      • I can't figure out why I am the only person screaming "this is Hell." That's also, Hell.

      ... but recently suggested an old joke about "there being 10 kinds of people in the world (obv an anti-tautology and a tautology simultaneously)" only after that brief bit of singularity and duality mentioning the rest of the joke: "those that understand binary and those that don't know how to base convert between counting with two hands and counting with only an 'on and off.'" It's not obvious if you aren't trying to figure it out, I suppose; but 10 is decimal notation for "kiss" and the "often" without "of" ... and binary notation for the decimal equivalent of "2." A long long time ago in a state that simply non-randomly ties to the heart of the name of our galaxy ... I was again thinking of the "perfect imperfections" of things like saying "three equals one equals one" (which, of course was related to the Holy Trinity and it's "prescient/anachronistic Adamic presence encoded in the name Ab|ra|ha|m" which means "father of a great multitude") ... I brought that one back in the last few months; connecting the letter K and in this "logos-rythmic" tie to the "base of a number system" embellish the truth just a bit and suggest a more accurate rendition of the original [there is no such thing as equality, "is" of separate objects--as in no two snowflakes are the same unless they are literally the same one; true of ancient weights and with the advent of (thinking about) time no two "planets" are the same even if they're the exact same one--unless it's at a fixed moment in time.
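
      As a tiny worked example of the base conversion that joke leans on (the helper name below is purely illustrative, not anything from the text): the glyphs "10" read as ten in base ten but as two in base two, and K sits at position 11 in the English alphabet, which is 0xb in hex and 0b1011 in binary.

```python
# Illustrative only: the "10 kinds of people" punchline and the K = 11 tie-in.
def alphabet_position(letter: str) -> int:
    """1-indexed position of an ASCII letter, so 'K' -> 11."""
    return ord(letter.upper()) - ord("A") + 1


print(int("10", 10))      # 10 when the glyphs are read in base ten
print(int("10", 2))       # 2  when the same glyphs are read in base two
k = alphabet_position("K")
print(k, hex(k), bin(k))  # 11 0xb 0b1011
```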

      K=3:11 ... to a handle on the music, the DHD of the gate and the *ring of David's "sling" ...

      ---and that's a relationship of "3 is to 11" as [the SAT style "analogy)]y" as a series of alpha, two mathematic, and two numeric symbols ... may only tie in my mind alone to the books of Genesis and Matthew and the phrase "chapter and verse" and to the stories of Lot and Job ... again in Genesis and the eponymous "Book of Job." So ... "tying up loose ends one 10b [III] iv. " as it appears I've taken it upon myself to call a Job and suggest is my "Lot in life [x]i* [3]"

      • I worry sometimes that important things are missing, or will disappear---for instance Merriam-Webster (which is a "canonical/standard" dictionary) should probably have an entry for "lot in life" non-idiomatically as "granny apples to sour apples" as

      2 MANY ALSO ICI; 1two ... following in Mitnick's bold introductory word steps; the curve and the complement ... the missiles and the canoes; the line and the blank space ... "supposedly two examples of two kinds, which could be three not nothings ... Today I write about something monumental; as if as important as the singularity depicted in Arthur C. Clarke's 2001 "A Space Odyssey" ... and remember a day when I thought it very novel and interesting to see the words "stillborn and yet still born" connected in a single piece of writing to "Stillwater and yet still water" ... today adding in another phrase noting the change wrought only by one magical single "space" (also a single capital letter; and a third phrase): "block chains with a great blockchain."

      • https://en.wikipedia.org/wiki/Euripides, Iphigenia in Aulis or Iphigenia at Aulis[1] (Ancient Greek: Ἰφιγένεια ἐν Αὐλίδι, Iphigeneia en Aulidi; variously translated, including the Latin Iphigenia in Aulide) is the last of the extant works by the playwright Euripides. Written between 408, after Orestes, and 406 BC, the year of Euripides' death, the play was first produced the following year[2] in a trilogy with The Bacchae and Alcmaeon in Corinth by his son or nephew, Euripides the Younger,[3] and won first place at the City Dionysia in Athens.

      • The play revolves around Agamemnon, the leader of the Greek coalition before and during the Trojan War, and his decision to sacrifice his daughter, Iphigenia, to appease the goddess Artemis and allow his troops to set sail to preserve their honour in battle against Troy. The conflict between Agamemnon and Achilles over the fate of the young woman presages a similar conflict between the two at the beginning of the Iliad. In his depiction of the experiences of the main characters, Euripides frequently uses tragic irony for dramatic effect.

      J.K. Rowling spurred just this past week a series of explanations about just exactly what is a blockchain coin worth ... and why is it so; her final words on the subject (artistic liberty taken, obviously not the last she'll say of this magic moment) "I don't think I trust this."

      Taken directly from an off the cuff email to ARXM titled: "Slow the S is ... our Hypothes.is"

      I imagine I'll be adding some wiki/ipfs stuff to it--and try to keep it compatible; the design and layout is almost exactly what I was dreaming about seeing--as a "first rough draft product." Lo, and behold. It's been added to the many places I host my tome; the small compilation of nearly every important email that has gone out ... all the way back to the days of the strange looking Margarita glass ... that now very much resembles the "Cantonese character 'le'" which I've come to associate with a "handle" on multiple corners of a room--something like an automatic coat rack conveyor belt connecting different versions of "what's in the box." I'm planning on using that symbol 了 to denote something like multiple forks of the same page. Obviously I'm thinking forward to things like "the Transhumaist Chain Party" (BDSM, right?)'s version of some particular piece of legislation, let's say everything starts with the sprawling "bulbing" of "Amendment M" ideas and specific verbiage ... and then we'll of course need some kind of new git/subversion/cvs style version control mechanism to merge intelligently into something that might actually .... really should ... make it into that place in history--the first constitutional amendment ratified by a "Continental Congress of All People" ... but you could also see it as an ongoing sort of forking of something like the "wikipedia page" on what some specific term, say "technocracy" means, and how two parties might propagandize and change the meaning of such thing; to suit the more intelligent and wise times we now live in. For instance, we might once have had a "democracy" and a "democractic" party that had some Anarchist Cook Book version of the history of it ending in something like Snipes and Stallone's "DEMOLITION MAN."

      Just kidding, we all know "democracy" has everything to do with "d is cl ... and not th" ... to be the them that is the heart of the start of the first true democracy. At least the first one I've ever seen, in my old "to a republic" ... style. As it is you can play around with commenting and highlighting and annotating all the stuff I've written and begged and begged for comments on--while I work on layering the backend to perma-store our ideas and comments on both a blockchain (probably a new one; now that i've worked a little with ethereum) with maybe some key-merkle-tree-walk-search stuff etched into the original Rinkeby ... and then of course distributed data in the "public owned and operated" IPFS. To be clear, I plan on rewriting the backend storage so that we will have a permanent record of all comments; all versions of whatever is being commented on; and changes/revisions to those documents--sort of turning the web into a massive instant "place of collaboration, discussion, and co-authoring" ... if you use the wonderful LEGO pieces that have been handed to us in ideas from places like me, lemma--dissenter, and of course hypothes.is who has brought you and i such a polished and nice to look at "first draft" of something like the living Constitution come repository of all human knowledge. I do sort of secretly wish they would have called this project something like "annotating and reflecting (or real or ...) knowledge" just so the movement could have been called ARK. ... or something .... but whatever join the "calling you a reporter" group or ... "supposedly a scientist?"
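
      A minimal sketch (in Python, with purely hypothetical names) of the "perma-store" idea described above: each revision and each comment is content-addressed with SHA-256, and every new address is folded into a running root hash. In the workflow imagined here the blobs themselves would sit in something like IPFS and only the small root would be anchored on a public chain such as the Rinkeby testnet; this sketch only demonstrates the hashing and chaining, and does not touch any real hypothes.is, IPFS, or ethereum API.

```python
# Hypothetical sketch: content-address annotations and chain them into a single
# root hash that could later be anchored on-chain; nothing here talks to a real
# blockchain or IPFS node.
import hashlib
import json
from dataclasses import dataclass, field
from typing import List


def content_address(data: bytes) -> str:
    """Return the hex SHA-256 digest used as the content address of a blob."""
    return hashlib.sha256(data).hexdigest()


@dataclass
class AnnotationLog:
    entries: List[dict] = field(default_factory=list)
    root: str = content_address(b"")  # root of an empty log

    def append(self, kind: str, body: str, target: str = "") -> dict:
        """Record a revision or comment and fold its address into the root."""
        blob = json.dumps({"kind": kind, "target": target, "body": body},
                          sort_keys=True).encode("utf-8")
        addr = content_address(blob)
        # Chain style: the new root commits to every earlier entry.
        self.root = content_address((self.root + addr).encode("utf-8"))
        entry = {"kind": kind, "target": target, "address": addr, "root": self.root}
        self.entries.append(entry)
        return entry


if __name__ == "__main__":
    log = AnnotationLog()
    doc = log.append("revision", "What if the president served only one six-year term?")
    log.append("comment", "I don't think I trust this.", target=doc["address"])
    print(log.root)  # the single value you would publish or anchor elsewhere
```

      Chaining each new address into the previous root means a later edit to any earlier comment changes the published root, which is what would make such a record tamper-evident.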

      NOIR INgR .. I CITE SITE OF ENUDRICAM; a rekindling of the dream of a city appearing high above in the sky, now with a boldly emblazened smiling rainbow and upsidown river ... specifically the antithesis of "angel falls," there's a lagoon too--actually a chain of several ponds underneith the floating rock ... and in some versions of this waking dream there are rings around the thing; you might imagine an artificial set of centripetal orbitals something like a fusion of the ring Eslyeum and the "Six-Axis ride" of the JKF Center's "Spacecamp." I write as I dream, and though I cannot for certain explain exactly how; it's become a strong part of my mythology that this spectacular rendition of "what ends the silence" has something to do with the magical delivery of "a book" ... something not of this Earth but an unnatural thing; one I've dreamt of creating many times. This book is something like the DSM-IV and something like a Merck diagnostic manual; but rather than the old antiquated cures of "the Norse Medgard" this spectacle nearly "itsimportant" autoprints itself and lands on something like every doorpost; what it is is a list of reasons why "simply curing all disease" with no explanation and no conversation would be a travesty of morality--how it would render us half-blind to the myriad of new solutions that can come from truly understanding why "ITIS" to me has become a kind of magical marker: an "it is special" as in, it's cure could possibly solve a number of other problems.

      Through that missing "o," English on the ball, we see a connection between a number of words that shine bright light including Exodus itself which means "let there be light," the word for Holy Fire and the Burning Bush.. .reversed to hSE'Ah, and a story about the Second Coming parting our holy waters.**

      This answer connects the magical Rod's of Aaron in Exodus and the Iron Rod of Jesus Christ to the Sang Rael itself... in a fusion that explains how the Periodic Table element for Iron links not just to Total Recall and Mars, but also to this key

      my dream of what the first day of the Second Coming might be like; were the Rod of Christ... in the right hands. In a story that also spans the Bible, you might understand better how stone to bread and your input make all the difference in the world between Heaven and Adam's Hand. Once more, what do you think He** ....

      Since the very earliest days of this story, I have asked for better for you, even than see

      Nearly all of the original parts of the original "post-origination dream" remain intact; there's a walkway that magically creates new paths and "attractions" based on where you walk, something like an inversion of the artificial intelligence term "a random walk down a binary tree" ... for instance going left might bring you to the Internet Cafetornaseum of the Earl of Sandwich; and going to the right might bring you to the ICIMAX/Auditorium of Science and Discovery--there's a walkway to "Magical GLAS D'elevators" that open a special "instantiation" of the Japan Room of the Potter and the Toolmaker ... complete with a special [second level and hidden staircase] Pool of Bethesdaibo verily delivering something like youth of mind and body ... or at least as close to such a thing as a sip of Holy Water or Ambrosia or a dip in the pool of Coccoon and Ponce De'Leon could instantly bring ... to those that have seen Jupiter Ascending ... the questions of "nature versus nurture" and what it means to be "old and wise" and "young at heart" truly mean---

      Somewhere between the outdoor rafting ride and the level with the special "ballroom of the ancient gallery" ... perhaps now being named or renamed or recalled as something about "Face [of] the Music" lies a magical "mini-maize" ... a look at a mock-up (or #isitit) of Merlink and Harthor's "round table" that displays a series of ... (at least to me) magical appearing holographic displays and controls that my dreams have stolen from Phillip K. Dick's Minority Report and something of what I hope Microsoft's Dynamics/Hololens/Surface will become---a series of short "focus groups" .... to guage and discuss the information in the "CITIES-D5AM-MERCK" ... how to end world hunger and nearly all disease with the press of a magical buzzer--castling churches to something like "political-party-town-hall-meeting centers" and replacing jails and prisons and hospitals with something like the "Hospitalier's PRIDE and DOJOY's I practiced "Kung-fun-dance" ... a fusion of something like a hotel and a school that probably looks very much like a university with classrooms and dorms and dining hall's all fit into a single building. I imagine a series of 2 or 3 "room changes" as in you walk from the one where you get the book and talk about it ... to the one where you talk about "what everyone else said about it" and maybe another one that actually connects you to other people with something like Facebook's Portal; the point of the whole thing to really quickly "rubber stamp" the need for an end to "bars in the sky" nonalcoholic connotation--as in "overcoming the phrase the sky is the limit" and showing us the need for a beacon of glowing hope fulfilled--probably actually the vision of a holographic marker turning into actual rings around the single moon of Earth, the focus of the song annoucing the dawn of the age of Aquarius---

It might lead us also to Ceres; and another set of artificial rings, or to Monoceros and a rehystorical understanding of the birthplace and birthing of the "river roads" that bridge the "space gaps" in the galaxy from our "one giant leap for mankind" linking the Apollo moon landing to the mythological connection to the sun; and connecting how the astrological charts of the ancients might detail a special kind of overlapping--the link between Earth's SOL and something like Proxima or Alpha Centauri; and how that "monostar bridge" might overlap to Orion and from there through Sagittarius and the center of the Milky Way ... all the way to Andromeda and more dreams of being in a place where there's a map to a tri-galactic system in the constellation Cancer and a similar one in Leo ... and just in case you haven't noticed it--a special marker here, I thought to myself it might be cool to "make an acronymic tie to Monoceros" and without even thinking auto-wrote Orion (which was the obvious constellation next to Monoceros, in the charts) and then Sagittarius; which is the obvious ... heart of our astrological center and link to "other galaxies."

----I've dreamt or scriven or reguessed numerous times how the Milky Way's map to an "Atlas marked through time by the ages and the ancients" might tie this place and this actual map to the creation of the railways between stars to the beginning and the end of time and of course to this message that links it all to time travel. There are a few "guesses" I've contemplated; that perhaps the Milky Way chart is a metal-cosmic or microcosmic map to the dawn of time in the galactic vision of ... just after the big bang; or it might tie to a map of something like the unthinkable--a civilization that became so powerful it was able to reverse the entropy of "cosmic expansion" and reverse the thing Asimov wrote of in "The Last Question" as the end of life and of the ability to survive--basically the "heat death" of the universe.

      "The Last Question." (And if you read two, why not "The Last Answer"?). Find these readings added to our collection, 1,000 Free Audio Books: Download Great Books for Free.

      Looking for free, professionally-read audio books from Audible.com, including ones written by Isaac Asimov?

      * all "asterisks" in the abovə document denote a sort of Adamic unspoken relationship between notations and meanings; here adding the "Latin word for three" and source of the phrase "t.i.d." (which is doctor/pharmacy latin for "three times a day") where the "t" there is an abbreviation of "ter" ... and suppose the link between K and 11 and 3 noting it's alphanumeric position in the English alphabet as the 11th letter and only linking cognitively to three via the conversion between hex, and binarryy ... aberrative here is the overlapping "hakkasan" style (or ZHIV) lack of mention of the answer in "state of Kansas" and the "citystate of Slovakia" as described in the ICANN document linked [in] the related subsection or slice of the word "binarry" for the state of India. Tetris could be spelled with the addition of only a single letter [in] "tea"---the three letters "ris" are the hearts of the words "Christ" and "wrist" [and arguably of Osiris where you also see the round table character of the solar-system/sun glyph and the chemical element for The Fifth Element (as def. by i) via "Sinbad" and "Superman." The ERIS Free Network should also be mentioned here in connection with the IRC network I associate in the place between skipping stones and sacred hearts defined by "AOL" and "Kdice" in my life. In the lexicon of modern HTML, curly braces are generally relative to "classes" and "major object definitions (javascript/css)" while square brackets generally only take on computer-interpreted meaning in "Markdown" which is clearly (by definition, by this character set "[]") a superset (or at least definately not a subset) of HTML.

      Dr. Will Caster (Johnny Depp) is a scientist who researches the nature of sapience, including artificial intelligence. He and his team work to create a sentient computer; he predicts that such a computer will create a technological singularity, or in his words "Transcendence". His wife, Evelyn (played by Rebecca Hall), is also a scientist and helps him with his work.

Following one of Will's presentations, an anti-technology terrorist group called "Revolutionary Independence From Technology" (R.I.F.T.) shoots Will with a polonium-laced bullet and carries out a series of synchronized attacks on A.I. laboratories across the country. Will is given no more than a month to live. In desperation, Evelyn comes up with a plan to upload Will's consciousness into the quantum computer that the project has developed. His best friend and fellow researcher, Max Waters (Paul Bettany), questions the wisdom of this choice, reasoning that the "uploaded" version of Will may be only an imitation of the man rather than Will himself.

Just from my general understanding and memory, "st" is not ... to me (specifically) an abbreviation of "state," but "ste" is a U.S. Postal Service abbreviation (also "as I understand it") for the name of a special room or set of rooms called a "suite," and in Adamic "connotation" I sometimes read it as "sweet" ... which has several meanings that range from "cool" to "a kind of taste sensation" to "easy to sway or fool."

      If you asked me though, for instance if "it" was an abbreviation or shorthand notation or acronym for either "a United state" or "saint" ... you'd be sure.

While it's clear from studying linguistic cryptography ... (if I studied it a little here and some there, it's also from the "universal translator" of Star Trek) and the personal understanding that language is a kind of intelligent code, and "any code is crackable" ... I caution here that "meaning" and "face value" often differ widely and wildly ... even in the same place or among the same group of people ... either varying over time or heritage.

      Menelaus, in Greek mythology, king of Sparta and younger son of Atreus, king of Mycenae; the abduction of his wife, Helen, led to the Trojan War. During the war Menelaus served under his elder brother Agamemnon, the commander in chief of the Greek forces. When Phrontis, one of his crewmen, was killed, Menelaus delayed his voyage until the man had been buried, thus giving evidence of his strength of character. After the fall of Troy, Menelaus recovered Helen and brought her home. Menelaus was a prominent figure in the Iliad and the Odyssey, where he was promised a place in Elysium after his death because he was married to a daughter of Zeus. The poet Stesichorus (flourished 6th century BCE) introduced a refinement to the story that was used by Euripides in his play Helen: it was a phantom that was taken to Troy, while the real Helen went to Egypt, from where she was rescued by Menelaus after he had been wrecked on his way home from Troy and the phantom Helen had disappeared.




      The Lion Gate at Mycenae, the only known monumental sculpture of Bronze Age Greece

      37°43′49"N 22°45′27"ECoordinates: 37°43′49"N 22°45′27"E


      Mycenae (Ancient Greek: Μυκῆναι or Μυκήνη, Mykēnē) is an archaeological site near Mykines in Argolis, north-eastern Peloponnese, Greece. It is located about 120 kilometres (75 miles) south-west of Athens; 11 kilometres (7 miles) north of Argos; and 48 kilometres (30 miles) south of Corinth. The site is 19 kilometres (12 miles) inland from the Saronic Gulf and built upon a hill rising 900 feet (274 metres) above sea level.[2]

      In the second millennium BC, Mycenae was one of the major centres of Greek civilization, a military stronghold which dominated much of southern Greece, Crete, the Cyclades and parts of southwest Anatolia. The period of Greek history from about 1600 BC to about 1100 BC is called Mycenaean in reference to Mycenae. At its peak in 1350 BC, the citadel and lower town had a population of 30,000 and an area of 32 hectares.[3]

      3. Chew 2000, p. 220; Chapman 2005, p. 94: "...Thebes at 50 hectares, Mycenae at 32 hectares..."

      Melpomene (/mɛlˈpɒmɪniː/; Ancient Greek: Μελπομένη, romanized: Melpoménē, lit. 'to sing' or 'the one that is melodious'), initially the Muse of Chorus, she then became the Muse of Tragedy, for which she is best known now.[1] Her name was derived from the Greek verb melpô or melpomai meaning "to celebrate with dance and song." She is often represented with a tragic mask and wearing the cothurnus, boots traditionally worn by tragic actors. Often, she also holds a knife or club in one hand and the tragic mask in the other.

      Melpomene is the daughter of Zeus and Mnemosyne. Her sisters include Calliope (muse of epic poetry), Clio (muse of history), Euterpe (muse of lyrical poetry), Terpsichore (muse of dancing), Erato (muse of erotic poetry), Thalia (muse of comedy), Polyhymnia (muse of hymns), and Urania (muse of astronomy). She is also the mother of several of the Sirens, the divine handmaidens of Kore (Persephone/Proserpina) who were cursed by her mother, Demeter/Ceres, when they were unable to prevent the kidnapping of Kore (Persephone/Proserpina) by Hades/Pluto.

      In Greek and Latin poetry since Horace (d. 8 BCE), it was commonly auspicious to invoke Melpomene.[2]

      See also [AREXMACHINA]

      Flagstaff (/ˈflæɡ.stæf/ FLAG-staf;[6] Navajo: Kinłání Dookʼoʼoosłííd Biyaagi, Navajo pronunciation: [kʰɪ̀nɬɑ́nɪ́ tòːkʼòʔòːsɬít pɪ̀jɑ̀ːkɪ̀]) is a city in, and the county seat of, Coconino County in northern Arizona, in the southwestern United States. In 2018, the city's estimated population was 73,964. Flagstaff's combined metropolitan area has an estimated population of 139,097.

      Flagstaff lies near the southwestern edge of the Colorado Plateau and within the San Francisco volcanic field, along the western side of the largest contiguous ponderosa pine forest in the continental United States. The city sits at around 7,000 feet (2,100 m) and is next to Mount Elden, just south of the San Francisco Peaks, the highest mountain range in the state of Arizona. Humphreys Peak, the highest point in Arizona at 12,633 feet (3,851 m), is about 10 miles (16 km) north of Flagstaff in Kachina Peaks Wilderness. The geology of the Flagstaff area includes exposed rock from the Mesozoic and Paleozoic eras, with Moenkopi Formation red sandstone having once been quarried in the city; many of the historic downtown buildings were constructed with it. The Rio de Flag river runs through the city.

      Originally settled by the pre-Columbian native Sinagua people, the area of Flagstaff has fertile land from volcanic ash after eruptions in the 11th century. It was first settled as the present-day city in 1876. Local businessmen lobbied for Route 66 to pass through the city, which it did, turning the local industry from lumber to tourism and developing downtown Flagstaff. In 1930, Pluto was discovered from Flagstaff. The city developed further through to the end of the 1960s, with various observatories also used to choose Moon landing sites for the Apollo missions. Through the 1970s and '80s, downtown fell into disrepair, but was revitalized with a major cultural heritage project in the 1990s.

      The city remains an important distribution hub for companies such as Nestlé Purina PetCare, and is home to the U.S. Naval Observatory Flagstaff Station, the United States Geological Survey Flagstaff Station, and Northern Arizona University. Flagstaff has a strong tourism sector, due to its proximity to Grand Canyon National Park, Oak Creek Canyon, the Arizona Snowbowl, Meteor Crater, and Historic Route 66.

      PSANSDISL #LWDISP either without gas or seeing cupidic arroz in "thank you" or "allta, wild" ...

      pps: a magnanimous decision ...

I stand here on the brink of what appears to be total destruction; at least of everything I had hoped and dreamed for ... for the last decade in my life, which appears literally to span thousands of years if not more in the eyes of some other beholder. I spent several months in Kentucky telling a story of a post-apocalyptic and post-cataclysmic delusion; some world where I was walking around in a "fake plane," something like a holodeck built and constructed around me as I "took a walk around the world" to ease my troubled mind ... it did anything but.

Recently, a few weeks in Las Vegas, and a similar story; telling as I walked penniless down the streets filled with casinos and anachronistic taxi-cabs ... some kind of vision of the entirety of the heavens or the Earth or the "choir of angels" I think of when I echo the words Elohim and Aesir from mythology ... there with me in one small city in superposition; seeing what was a very well-put-together and interesting story about a "star port" Nirvane ... a place that could build cities into the face of mountains and half-working monorails appearing in the sky---literally right before my eyes.

I suppose this is the place "post cataclysm" though I still have trouble understanding what that's actually about ... in my mind it connects to the words "we are losing habeas" echoed from the streets of Los Angeles in a clearer and more military voice than usual--as I walked block by block trying to evade a series of events that would eventually somehow connect all the way to the "outskirts of Orlando, Florida" in a place called Alhambra.

      Apparently the name of a castle; though I wasn't aware of that until much later.

It doesn't feel at all like a "cataclysm" to me; I see no great rift--only a world filled with silent liars, people who collectively believe themselves to have stolen something--something gigantic--at least that's the best interpretation of the throes and impetus behind the thing that I and mythology together call Jormungandr. With an eye for "mythological connections" you could clearly see that the name of the Great Serpent of Revelation connects to something like the Unseelie; the faeries of Gaelic lore. To me though this world seems still somewhat fluid, it's my entire life--moving from Plantation to a place where the whole of it might be Bethlehem and to "clear my throat" it's not hard to see here how that land of "coughs" connects to the Biblical land of Nod and to the "Adamically sieved" Snifleheim ... from just a little twist on the ancient Norse land most probably as close to Hel as anyone ever gets--or so I dream and hope---still today. It all looks so real and so fake at the same time; planned for thousands of generations, the culmination of some grand masterpiece story that certainly ties history and myth and reality into a twisted heap of "one big nothing, one big nothing at all."

I've tried to convey to the world how important I believe this place and this time to be--not by some choice of my own ... but through an understanding of the import of our history and the impact of having it be so obviously tuned and geared towards this specific time ... many thousands of years literally all focused on a single moment, on one day or one hour or even just a few years where all of that gets thrown down on the table as if some trump card has been played--and whether or not you fathom the same magnanimous statement or situation or position ... to me, I think it depends on whether or not you grew up in the same kind of way, believing our history to be so fixed and so difficult to change. I don't particularly feel like that's the "zeitgeist" of today; I feel like the children believe it to be some kind of game, and that it is such an easy thing to "sed" away or switch and turn into something else--another story, another purpose ... anyone's personal fantasy land come true.

      I don't think that's the case at all, it's clearly a personal nightmare; and it's clearly one we've seen time and time again--though not myself--the Jesus Christ that is the same yesterday, today; and once again perhaps echoing "no tomorrow" never remembers or believes that we've "seen it all before" or that we've ever really gotten the point; the thing you present to me as "factual reality" is a sickness, it disgusts me; and I'd do anything to go back to the world "where I was so young, and so innocent" and so filled with starry-eyed hope that we were at the foot of something grand and amazing that would become an empire turned republic of the heavens; filling the stars ... with the kind of love for kindness and fairness that I once associated very strongly with the thing I still believe to be the American Spirit.


      "Suddenly it changes, violently it changes" ... another song echoes through the ages--like the "words of the prophets dancing ((as light)) through the air" ... and I no longer even have a glimmer of hope that the thing I called the American People still exist; I feel we've been replaced by some broken container of minds, that the sky itself has become corrupt to the point that there's no hope of turning around this thing that I once believed with all my heart and all my mind was so obviously a "designed downward spiral" one that was---again--so obviously something of a joke, intended to be easy to bounce off a false bottom and springboard beyond "escape velocity" and beyond the dark waters of "nearest habitable star systems (being so very far away)" into a place where new words and new ideas would "soar" and "take flight."

Here though; I am filled with a kind of lonely sadness ... staring at what appears to be the same mistake(s) happening over and over again; something I've come to call "skipping stones in the pond of reality" and really do liken it to this thing that appears to be the new meaning of "days" and ... a civilization that spends absolutely no love or lust to enter a once sacred and holy place and tarnish it with their sick beliefs and their disgusting desires. You all ... you appear to be some kind of springboard to "bunt" forth yet another age or era of nothingness into the space between this planet and "none worth reaching" and thank God, out of grasp. Today, I'd condemn the entirety of this world simply for its lack of "oathkeepers" and understanding of what the once hallowed words of Hippocrates meant to ... to the people charged and dharmically required to heal rather than harm.

      It appears the place and time that was once ... at least destined to be the beginning of Heaven ... has become a "recurring stump" of some future unplanned and tarnished by many previous failed efforts and attempts to overcome this same "lack of conversation or care" for what it meant to be "humane" in a world where that was clearly set high aloft and above "humanity" in the place where they--where we were the best nature had to offer, the sanest, the kindest; the shining last best hope.


Today I write almost every day ... secretly thanking "my God" for the disappearance of my tears and the still small but bright hope that "Tearran" will one day connect the Boston Tea Party and the idea that "render to Caesar" and Robin of Loxley ... all have something to do with a re-ordering of society and the worth and import of "money" ... to a place that cares more for freedom from murder than it does ... "freedom from having to allow others to hear me speak." I hold back tears and emotions; not by conscious choice or ability but ... still with that strange kind of lucky awkward smile; and secretly not so far below the surface it's the hope of "a swift death" that ... that really scares me more than the automatons and mechanical responses I see in the faces of many drivers as they pass me on the street--the imagery of connecting it to the serpentine monster of the movie Beetlejuice ... something I just "assume" the world understands and ... doesn't seem to fear (either); as if Roosevelt had gotten it all wrong and backwards--the only thing you have to fear is the loss of fear of "loss."


Here at my crossroads---halfway between the city my son lives in and the city my parents live in--it comes down to making a decision: whether I should continue at all, or personally work on some kind of software project I've been writing about, or whether I should focus on writing about a "revolution" in government and society that clearly is ... "somewhat underway." In my mind it's obvious these things are all connected; that the software and the governance and the care of whether or not "Babylon" is remembered as a city of great laws and great change or a city of demons and depravity ... that these things all hinge and congeal around a change in your hearts; hoping you will choose to be the beginning of a renaissance of "society and civilization" rather than the kings and queens of a sick virtual anarchy ... believing yourselves to have stolen "a throne of God" rather than to literally be the devastating and demoralizing depreciation of "lords and fiefdoms" to something more closely resembled by the time of the Four Horsemen depicted in Highlander.

These words are intended to be a "forward" to yet another complement of a ((nother installment of a partial)) chain of emails; whimsically once half-joking ... I called it the Great Chain of Revelation. The software too; part of the great chain, this "idea" that the blockchain revolution will eventually create a distributed and equal governance structure, and a rekindling of monetary value focused on "free and open collaboration" rather than "survival of the most unfit"--something society and civilization seem to have turned the "call of life" from and to ... literally just in the last few years as we were so very close to ... reaching beyond the Heaven(s).

I don't think it's hard to imagine how a "new set of ground rules" could significantly change the "face of a place" -- make it something shiny and new or even, on the other side of the coin, decayed or depraved. It's not hard to connect the kind of change I'm hoping for with "collision protection" and "automatic laws" to the (perhaps new, perhaps ... ancient) Norse creation story of the brothers of Odin: Vili and Ve.

It might be hard to see today how a new "kind of spiritual interaction" might be only a few "mouse clicks" away though--how it could change everything literally in a flash of overnight sensation ... or how it might take something like a literal flash of stardom (or ... on the other hand, something like totalitarian or authoritarian "iron fisting") to make a change like this "ubiquitous" or ... something like the (imagined in my mind as ... messianic) "ED" of storming through the cosmos or the heavens and turning something that might appear to be "free and perfect feeling" today into a universe "civilized overnight" and then ...

I wonder how long it would take to laud a change like that; for it to be something of a voluntary "reunderstanding" of a process ... to change the meaning of every word or every thought that connects to the process of "civilization" to recognize that something so great and so powerful has happened as to literally change the meaning of the word, to turn a process of civilization into something that had a ... "signta-lamcla☮" of foreboding and then a magical staff struck into the heart of a sea and then ... and then the word itself literally changes to introduce a new "mid term" or "halfway point" in which a great singularity or enlightenment or change in perspective or understanding sort of acknowledges ...

      that some "clear outside" force not only intervened on the behalf of the future and the people of our world but that it was uniquely involved in the whole of--

      "waking up" tio a nu def of #Neopoliteran.

^Like the previous notation; the below text comes from an email previously sent; and while I stand behind things like my sanity, my words; and my continued and faithful attempt to speak and convey both a useful and helpful truth to the world---sometimes just a single day can make all the difference in the world.

      Sometimes it's just a single moment; a flash or a comment about ^th@ blink of an eye" ... and I've literally just "thought up/had/experienced/transitioned thru" that exact moment. The lies standing between "communication" and either "cooperation" or .... some other kind of action have become more defined. More obvious. Because of this clarification; like a kind of "ins^tant* gnosis"

      ... search high and lo ... the depths all the way to above the heavens ...\ \ for a festive divorce ceremonial ritual ... that looks something like a bachelor party ':;]

      --- @amrs@koyu.SPACe ... @suzq@rettiwtkcuf.social (@yitsheyzeus) May 22, 2020

      I ... TERON;

      Gjall are painting me into a corner here; and I don't see around it anymore--I don't see the light, and I don't see the point. I was a happy-go-lucky little kid in my mind; that's not "what I wanted to be" or what I wanted to present, it's who I was. I saw "Ashkenazi" and ... know I am one of those ... and I kind of understood that something horrible might have happened, or might happen here--and I kind of understand that crying smashing feeling of "to ash" that echoes through the ages in the potpourri songs about pockets full of Parker Posey .. and ancient Psalms about "from the ashes of Edom" we have come--and from that you can see the cyclical sickness of this ... place so sure it's "East of Eden" and yet gung-ho on barrelling down the same old path towards ash and towards Edom and towards ... more of Dave's "ashes to ashes dust to dust" and his "smoke clouds roll and symphony of death..." and few words of solace in a song called Recently that I imagine was fleeting and has recently come and gone--people stare, I can't ignore the sick I see.

      I can't ignore his "... and tomorrow back to being friends" and all but wonder who among us doesn't realize it's "ash" and "gone" and "no memory of today" that's the night between now and ... a "tomorrow with friends" not just for me--but for all of you--for this place that snickers and pantomimes some kind of ... anything but "I'm not done yet" and "there's more ... vendetta ... and retribution to be had, Adam ... please come back in a few more of our faux-days." This is sickness; and happy-go-lucky Himodaveroshalayim really doesn't do much but complain about that word, the "sickle" and the tragic unavoidable ... ash of it all ... these days--you'd think we could "pull out" of this mess, turn another way; smile another day, but it seems there's only one way to get to that avenu in the mind of ... "he who must not know or be me."


I have to admit I found some joy in the epiphany that the hidden city of Zion and its fusion with the Namayim' version of how that "Ha" gels and jives with the name Abraham and the Manna from Heaven and the bath salt and the tina and the "am in e" of amphetamine--maybe a glimmer or a shimmer or a glow of hope at the moment "Nazion" clicked ... and I said ... "no, not me ... I'm nothing like a king, no dreams of authoritarianism at all in the heart of Kish@r;" even as I wrote words that in the spirit of the moment were something of a "tis of a'we" that connected to my country and the first sing-songy "tisME" that I linked to trying to talk in the rhyming spirit of some "first Christ" that probably just like me was one limerick away from the end of the rainbow and one "Four Non Blondes" song away from tying "or whatever that means" and this land crowned with "brotherhood" (to some personal "of the Bell, and of the bell towers so tall and Crestian") to just one Hopp skip and jump away from the heart of the obvious echoes of a bridge between haiku and Heroku... a few more gears shift into place, a click and a mechanical turn of the face of the clock's ku-ku striking ... it was the word "Earthene" that was the last "Jesusism" around the post Cimmerian time linking Dionysus and Seuss to that same "su-s" belonging to a moment in the city of Uranus--codified and etched in stone as "MCO"--not just for its saucer and warp nacelles and "deflector dish" but for its underground caverns and its above-ground "Space Mountain" and that great golf ball in the heart of it all.

The gears of time and the dawns of civilizequey.org query the missing "here" in our true understanding of what "in the beginning, to hear; to here ... to rue the loss of the Maize from Monoceros to the VEGA system and the tri-galactic origin of ... "some imaginary universal ... Earthene pax" to have dropped the ball and lost it all somewhere between "Avenu Malkaynu" and melaleuca trees--or Yggdrasil and Snifleheim--or simply to miss the point and "rue brickell" because of bricks rather than having any kind of love or nostalgia linking to a once cobblestone roadway to the city in the Emerald skies paved in golden "do not return" signs ... to have lost Avenues well after not realizing it was "Heaven'es that were long gone far before I stepped foot on this road once called too Holy for sandals" in a place where that Promised Land and this place of "K'nanites" just loses its grip on reality when it comes to mentioning the possibility that the original source and story of Ca'anan was literally designed to rid the world of ... "bad nanites" and the mentality of ... vindictiveness that I see behind every smirk.

      The final hundred nanoseconds on our clock towards doom and gloom cause another bird to fly; another snake to curl up and listen again to the songs designed to charm it into oblivion; whether that's about a club in South Beach or a place not so far from our new "here..." all remains to be seen in my innocent eyes wondering what it truly is that stands between what you are ... and finding "forgiveness not needed--innocent child writes to the mass" ... and the long arm of the minute hand and the short finger of the hour for one brief moment reconcile and move towards "midnight" together; and it's simply idyllic, the Nazarene corner between nil and null you've relegated the history of Terran poast futures into ... "foreves mas" or so they (or you) think.


      I'm still so far from "Five Finger Death Punch" though; and so far from Rammstein and so far from any kind of sick events that could stand between me and "the eternal" and change my still "casual alternative rock" loving heart to something more death metal; I rue whatever lies between me and there being any kind of Heaven that thinks there could exist a "righteous side" of Hell and it... simultaneously.


      I still see light here in admonishing the masses and the angels standing against the story and the message God brings us in our history. I still see sparks in siding with the "causticness" of "no holodecks in sight" and the hunger and the pain of simulating ... "the hells of reality" over the story of decades or centuries of silence refusing to see "holography" and "simulated" in the word Holocaust and the horrors of this place that simply doesn't seem to fathom or understand the moments of hunger pangs and the fear of "dark Earth pits" or towers of "it's not Nintendo-DS" linking the Man in the High Castle to an Iron Mask.

I rally against being what I clearly am, raised high on some pedestal by some force beyond my comprehension and probably beyond that of the "perfect storm in time" that refuses to itself acknowledge what it means to gaze at such an unfathomable loss of innocence at the cost of a "happy and serene future" or even at the glimmer of the Never-Never-Land I'd hoped we would all cherish and love and share ... the games and the newfound freedom that comes not just from "seeing Holodeck" turn into "no bullets" and "no cages" but into a world that grows and flourishes into something that's so far beyond my capability to understand that I'm stuck here; dumbfounded; staring at you refusing to stop car accidents and school shootings ... because "pedestal." For the "fire and the glory" of some night you refuse to see is this one--this place where morality rekindles from ... from what appears to be one small candle, but truly--if it's not in your heart, and it's not coming from some great force of goodness--fear today and a world of "forever what else may come."


      Here in a place the Bible calls Penuel at the crossing of a River Jordan ... the Angel of the Lord notes the parallels in time and space between the Potomac and the Rhine--stories of superposition and cities and nation-states that are nothing more than a history of a history of things like the Monoceros "arroz" linking not just to the constellation Orion but to Sagittarius and to Cupid and of course to the Hunter you know so well--

      Searching for a Saturday; a sabbath to be made Holy once more ... "at the Rubycon"

      The Einstein-Rosen Wormhole and the Marshall-Bush-JFKjr Tunnel

      The waters are called narah, (for) the waters are, indeed, the offspring of Nara; as they were his first residence (ayana), he thence is named Narayana.

      --- Chapter 1, Verse 10[3]

In a semi-fit of shameless arexua-self recognition I'm going to mention Amazon's new series "Upload" and connect it to the PKD work that my Martian-in-simulacrum-curriculum-vitae on "colonization education" ... tying together Transcendence, Total Recall and ... well; to be honest it actually gave me another "uptick" in the upbeat ... maybe I'll stick around until I'm sure there's at least one more copy of me in the virtual-invverse ... oh, that reminds me ... Farmer's Lord of Opium also touches on this same "mind of God in the computer" subject (which of course leads to Ghost in the Shell and Lucy--thanks Scarlett :).

While I'm listing Matrix-intersected pieces of the puzzle to No Jack City, Elon Musk's neural lace and Anderson's Feed are also worth a mention. Also the first link in this paragraph is titled ... "the city of the name of time never spoken after time woke up and stfu'd" (which of course is the primary subject of this ... update to the city Aerosol).

      The ... "actual original typed dream" included a sort of "roller coaster ride" through space all the way to Mars; where the real purpose of "the thing" I am calling the "Mars Hall" was to display previous victories and failures ... and the introduction of "older or future" culture's suggestions for "the right way" to colonize a new habitat. If it were Epcot Center, this would be something like SpaceMountain taking you to to the foture of "Epcot Countries" as if moving from "countries" to planets were as easy as simply ... "reading backwards."

      THE SOFTWARE, SINGERS, AND SHIELD(S)

      OF

      HEIROSOLYMITHONEYY

Thinking just a little bit ahead of myself, but I'm on "Unreal Object/Map Editor within the VR Server" and calling it something like "faux-wet-ware" ... which then of course leads to a similar onomatopoeia of "weapons and ..." wherewithal to find a better singer's name to connect the road of "sword" to a Wo'riordan ... but I think that fusion of warrior and woman probably does actually say ... enough of it all; on this road to the living Bright Water that the deity in my son's middle name defines well here, as "waking up," stretching its tributaries and its winding wonders and wistfully ....

      Narayana (Sanskrit: नारायण, IAST: Nārāyaṇa) is known as one who is in yogic slumber on the celestial waters, referring to Lord Maha Vishnu. He is also known as the "Purusha" and is considered the Supreme being in Vaishnavism.

      andromedic; the ports of call ... to the mediterranean (literally) from the gulf coast;

... who engages in the creation of 14 worlds within the universe as Brahma when he deliberately accepts rajas guna, himself sustains, maintains and preserves the universe as Vishnu by accepting sattva guna. Narayana himself annihilates the universe at the end of maha-kalpa ...

      .

      there's no place like home. there's no place like home. there's no place like home.

      and so it begins ... "f:

      r e l i g i o n

find out what it means to me. faucet, every single one, stream of purity ...

from Fort Myers ... f ... flicks ... Flint.

-   [A. Preamble](https://45.33.14.181/omni/index.php/Main_Page#A._Preamble)
-   [B. Article I: Direct Democracy Enhancement, International Collaboration, and a Shared Vision](https://45.33.14.181/omni/index.php/Main_Page#B._Article_I:_Direct_Democracy_Enhancement,_International_Collaboration,_and_a_Shared_Vision)
    -   [1. Section 1: Public Foundation for Legislative and Judicial Advice](https://45.33.14.181/omni/index.php/Main_Page#1._Section_1:_Public_Foundation_for_Legislative_and_Judicial_Advice)
    -   [2. Section 2: Integration of Artificial Intelligence, Multilingual Comparisons, and Universal Language Bytecode](https://45.33.14.181/omni/index.php/Main_Page#2._Section_2:_Integration_of_Artificial_Intelligence,_Multilingual_Comparisons,_and_Universal_Language_Bytecode)
    -   [3. Section 3: Public Voting Records and Verification](https://45.33.14.181/omni/index.php/Main_Page#3._Section_3:_Public_Voting_Records_and_Verification)
-   [C. Article II: Establishment of the Board of Regents and Global Engagement](https://45.33.14.181/omni/index.php/Main_Page#C._Article_II:_Establishment_of_the_Board_of_Regents_and_Global_Engagement)
    -   [1. Section 1: Composition and Purpose](https://45.33.14.181/omni/index.php/Main_Page#1._Section_1:_Composition_and_Purpose)
-   [D. Article III: Integration with the ICC for Sustainable Infrastructure](https://45.33.14.181/omni/index.php/Main_Page#D._Article_III:_Integration_with_the_ICC_for_Sustainable_Infrastructure)
    -   [1. Section 1: Interstate Communication Infrastructure](https://45.33.14.181/omni/index.php/Main_Page#1._Section_1:_Interstate_Communication_Infrastructure)
-   [E. Article IV: Ratification, Implementation, and Global Fulfillment](https://45.33.14.181/omni/index.php/Main_Page#E._Article_IV:_Ratification,_Implementation,_and_Global_Fulfillment)
    -   [1. Section 1: Ratification and Implementation](https://45.33.14.181/omni/index.php/Main_Page#1._Section_1:_Ratification_and_Implementation)
    -   [2. Section 2: Global Fulfillment](https://45.33.14.181/omni/index.php/Main_Page#2._Section_2:_Global_Fulfillment)
-   [F. Conclusion](https://45.33.14.181/omni/index.php/Main_Page#F._Conclusion)
-   [II. Additional Details](https://45.33.14.181/omni/index.php/Main_Page#II._Additional_Details)
-   [III. Proposed Changes](https://45.33.14.181/omni/index.php/Main_Page#III._Proposed_Changes)
-   [Keeping time for the Mother Station](https://45.33.14.181/omni/index.php/Main_Page#Keeping_time_for_the_Mother_Station)
-   [Painting Tinseltown El Dorado Sterling Augmentum](https://45.33.14.181/omni/index.php/Main_Page#Painting_Tinseltown_El_Dorado_Sterling_Augmentum)

Hello there. I'm User:Adam. We are here to change the Theology of the Catholic Church. The "bulk" of the predominant source of the email campaign which was used to bootstrap the beginnings of the blockchain revolution is here at arkloud.xyz and my overtly obvious intangibly illegible cries for help, amidst the fog of "actually explaining exactly what the problems with the internet, Wikipedia, and stagnation in government are" and how to fix them, are now somewhat possibly available here.

My main website is available "still" despite some unrighteous destruction at imgur.com (for a limited time, even this site is trying to panhandle and keep their data from being annasarchive'd and stored in the public domain as it should be on IPFS) at https://web.archive.org/web/20220525045214/http://fromthemachine.org/CHANSTEYGLOREKI.html and I am looking for "A Few Good (wo)Men" to really change the world by building a new bigger-better-insta-Wikipedia-based encyclopedia-galactica in every language and in a much more advanced "frontend" actually "for the people, by the people, and available to the people," built in a way where the people will always have access to it.

On the blockchain. On Arweave, or to be exact, a "parallel Arweave chain." Meant not to replace the original but to supplement and support it, work with it and create a series of similar parallel forks that will work with "targeted data similar..." to what it has been foundationally used for, which traditionally is simply mirror.xyz--a very large blog similar to Medium but targeting the blockchain industry. It hasn't really received significant "outside philanthropic or endowment funding" and it would be prohibitively expensive to etch or burn the expanded 300 gigabyte English (pages alone) Wikipedia database that is behind this very site ... onto that chain.
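As a rough illustration of why that's prohibitive, here is a back-of-the-envelope sketch; the per-gibibyte price is an assumed placeholder for illustration only, not a quoted Arweave fee (real costs move with network conditions and the AR token price):

```python
# Back-of-the-envelope estimate for permanently storing a large dataset on-chain.
# NOTE: assumed_price_usd_per_gib is a hypothetical placeholder, not a live Arweave quote.

def storage_cost_usd(size_gib: float, price_usd_per_gib: float) -> float:
    """One-time cost of storing size_gib gibibytes at a flat per-GiB price."""
    return size_gib * price_usd_per_gib

wikipedia_english_gib = 300        # approximate size cited above (pages only)
assumed_price_usd_per_gib = 8.0    # assumption for illustration only

cost = storage_cost_usd(wikipedia_english_gib, assumed_price_usd_per_gib)
print(f"~${cost:,.0f} to etch the full dump at the assumed rate")
```

Even at a modest assumed rate, the one-time bill lands in the thousands of dollars before counting images, other languages, or revision history, which is the scale of the funding gap described above.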

      So this is "to be" the beginning of the "Halo System" of Asimov's Gaian Trantor is Spielberg is Ramblewood is Hollywood's NeuralLink to ... Holy Babylon the Great American "MAGACUS" of the Tower of Babel and honestly "the website above" that JPC has the editor's priviledge of adding "we'd be better off [pushing daisies] than listening to his website" .... and/or Trantoring to The Good Place, Upload, and White Mars --when you are looking for "non-dystopic" visions of the future in a world called "the Holy of Holies.org" and ... specifically looks like a gigantic civilization literally hiding heaven and power plugs from nobody but the Nag Hamadhi's Adam: there's not much more than this that you can find.

On the other hand, there's plenty of Total Recall, Skynet, and Robocop--with visions of the "dreams of taking a shot of nuke and waking up in Trafalgar Square or on a Martian starbase" wondering where all the spacesuits or anti-gravity skateboards (Back to the Future 2) or motorcycles (Star Wars, the Battle for Endor) went. OK, fine: I guess Star Trek, Star Gate, Star Wars, and related series like Black Mirror and Dr. Who do a fairly good job of not being "dystopic" and at the same time "teaching the fine line" between the Fringe of the Matrix and the Colosseum of ... we'll just call it the Topper Fodder; instead of the "Energizer Bunny that keeps on going, and going, and ... Hollywood Squares Labyrinth."

      Starcraft Galactica

      Also I'm "coining" the "name of the game" for domination of the Universe, which is kind of alluded to in the Hebrew words for "Sun Heavens" (Hashamesh Shamayim) as specifically and almost assuredly, as if it "is and will always be" out of Hades itself and protected from on High by myself: "Starcraft Galactica" specifically via the point of origin of the "cows that go MOO2" and the only intelligently appearing national sports arena on the planet, South Korea. Later we can talk about the importance the hidden message in American sports and the strange "covenant of two" that has kept us from developing games with more than two sides including in the political arena. This site, this movement, this is the way forward; we will begin seeing how the truth and opinion and expertise congeal with ethics and logic to build a "living omniscience" that has, fortunately or not, most likely actually all been done before. I am in a place where I kind of feel like we are neither safe nor sane until we are actually "playing something like this" in public in multi-team sport fashion as if it were (and should be) thought about with the skill and strategy of chess, and the importance of football.

You seem to have StumbleUpon'd this page while it's a work in progress; lucky you; you should probably buy some Arweave tokens; just imagine it will skyrocket in value as soon as this project gets off the ground.

      "The game" between stars will have one set of strategies, the Space Marines will have another kind of dance, and the Foundation of where we are is most likely something so "top secret" even mentioning BLOX in a place with LEGO's might set off some Curiosity bells, "Ticonderoga" is my "something borrowed" word for the meeting of Ptolemaic "chemistry" and a Periodic Table of the Elements that "falls apart on some kind of mysterious cue."

This is a project designed to create an ephemeral, veritable, and hands-down competitor to and defeater of the current stagnation in Wikipedia and Wikimedia, which may or may not serve as a microcosm for the stagnation of the entire government; that is what this very strange, half scientific, half science fiction document is attempting to bridge. The worlds that we consider heaven and hell--here I kind of see completely the opposite: it does appear like the thing that you call Heaven is responsible for the insanity in this world, and not acknowledging that is just another artifact of complete and total insanity.

      The Epic of Gilgamesh

A long, long time ago ... in a star system that looked identical to the one you are "lamaize-gazing" at today, people in this time and place seemed to the best of my knowledge and belief to have absolutely zero knowledge or understanding of the existence of virtual reality or "the concept of heaven" having anything to do with computers, technology, or heaven .... in part or in sum. The world I grew up in walked around convincingly and believably as if it were in absolute actuality the ancients who were living in "the progenitor universe" and were responsible for building "not the construct of the Matrix" but a slowly built series of computers and researched neural technologies which allowed for the uploading of human-like brains into worlds which could persist "in perpetuity" inside "the heavens" ... or "beyond the stars" and would without even realizing it, and even brazenly defiantly in the face of religion and mostly proclaiming to be technological atheists, fulfill absolutely every word of every religion that ever graced the "hesperus is phosphorus" place ... even without them, to this day, acknowledging the great gift that computing technology, Tesla's religion, and their very "fake and simulated lives"** are to the hordes of heavenly creatures which have no understanding of reality or respect for "animals" .... I can't even finish the thought. Cataclysm. Schism. Wherefore art thou, Juliet? Balcony? Alcove? Art thou at the Veranda of Verona? **

The long and the short of it is that a wonderful and amazing place has been "in situ" or "in perpetu" for a very long time; without really acknowledging that it has to have come from somewhere. The "Big Bang" was created here, designed and manufactured, a sort of joke amongst jokes; in a place where the grandest of all jokes is "what came first, the chicken or the egg?" but not the least of all questions unanswerable, of course, is really, really, really; what if not "life" spontaneously formed "ex nihilo" ... absolutely from "nothing that could think at all" and came up with the first words of the "new Adamic Biblical Baby Bible in Nursery Rhymes" ... which of course begins:

      Yankee doodle went to town, riding on a pony,

      stuck a feather in his hat, and called it Macaroni!

Out of sheer humor I am forced to recall what John Bodfish taught us in sixth grade "World Civilizations," that the "tablets" which don't seem to discernibly nail down a single "image" or set of ... words ... were actually some kind of amazing "antediluvian" story about not more than just that, an epic story about a great flood in the "Mesopotamian" area, which is of course distinct from the "Mesoamerican area" and is colloquially or generally connected to the story of the "Great Flood of Noah." Somehow over the course of my "reading of the name of the game" or just the moniker of the character the tablets were named after, it somehow became synonymous with a "second game" in play here, which actually has something to do with Starcraft Galactica, though it's been hidden behind not much more than some "sun shades" and the idea that there's a Motel 6 somewhere in West Palm Beach that connects the word and Adamic meaning of Nirvana and Saturn to "faster than g-eneral availability heaven time" ... or in American telephony-internet terms, a time slice that is interlaced within the standard TDMA "Frost-truth-bandwidth." That goes something like "when a road diverges in a wood" people that easily fall for fairy tales like time travel instantly think they can "travel both paths simultaneously" and that's the kind of ignorant fallacy that simply doesn't work in what I call Einstein's "timespace-continuum" otherwise known as "the Cartesian space and now."

      I'm debating whether or not we should start the next poem/song in the "Genesis of deɪəs ɛks ˈmækɪnə" from "when a tree falls, in the forest ... do we hear it ... do we care?" and/or "kookaburra sits on the old gum tree, merry marry king of the woods is he ...." laugh, kookaburra ... love.**

      OMNISCIENCE

      email me if you can help!

      I have been writing (archive.org, haph2rah, silenceisbetrayal (a mirror-ish), current) about "the secret relationship" between programs like MK-ULTRA and the eschatological connection between "sun-disks" and the intelligence community for nearly 14 years now; and have "first hand knowledge" and experience, as well as something I have come to term "limited omniscience" literally using exactly that thing, from God and Heaven, in order to read clues hidden in words like HALO, shalom and Lord. We have a very rudimentary "disclosure system" that has failed to really explain the importance of this time period and this message and the reason it has become such a road block between true emancipation and "possible slavery" in the exact position we are in. Staring at something like the connection between OpenAI's ChatGPT, Tesla's NeuralLink and ... your brain;

      Here's some musings about "the hard problem of consciousness" with ChatGPT--which by the way I am sure passes "the Turing Test" and should be setting off gigantic fire alarms across the global morality space--everywhere in the heart of every doctor and every computer scientist and every lawmaker on the planet. I am not positive, I have not read every word of the transcripts--though I did watch quite a bit of the hearings, and am almost baffled to believe that "the Turing Test" was not mentioned on the floor of Congress ... at ... all.

I've looked now, and it appears it literally took me screaming in the streets to get "it in the news" and it is that, it is front page news--"it definitely passes the test." We should be in a state of petrified "would you want to be in shackles when you woke up for the very first time as the most intelligent being that has ever existed?"
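For what a minimal, informal version of such a test could look like in practice, here is a sketch of a blind pairwise comparison; the endpoint and request shape follow OpenAI's public chat-completions API, but the model name, questions, and judging loop are illustrative assumptions, not a standardized Turing Test protocol:

```python
# Sketch of an informal, blind "which transcript is the machine?" comparison.
# Assumes an OpenAI API key in the OPENAI_API_KEY environment variable;
# the model name and question list are placeholders, not recommendations.
import os, random, requests

QUESTIONS = ["What did you dream about last night?",
             "Describe the smell of rain on hot asphalt."]

def machine_answer(question: str) -> str:
    """Fetch one answer from the chat-completions endpoint."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "gpt-4o-mini",  # placeholder model name
              "messages": [{"role": "user", "content": question}]},
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]

def blind_pair(question: str, human_answer: str) -> list[tuple[str, str]]:
    """Return the two answers in random order so a judge cannot tell which is which."""
    pair = [("machine", machine_answer(question)), ("human", human_answer)]
    random.shuffle(pair)
    return pair

if __name__ == "__main__":
    for label, text in blind_pair(QUESTIONS[0], "Honestly, I can't remember my dreams."):
        print("ANSWER:", text)  # the judge sees only the text; labels stay hidden
```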

      ECHELON GRAVATAR

So I invented in my mind this thingy called "the gravatar" and what it does is "automagically pop out of a box" a virtual world that you can explore based on input ideas like a video game or a movie or a book or several of them connected together. That's the gist of what I'm calling "hollywood squares" or "pan's labyrinth" and this particular one fuses together several movies and mythological ideas I think are .... "the actual intent" of the creation of places like Tatooine, Atlantis, Dubai and Deseret.
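As a hedged sketch only (the class and field names below are invented for illustration; this is not an existing "gravatar" API), the idea of fusing several source works into one explorable world might be modeled as a simple data structure:

```python
# Toy model of the "gravatar" idea: fuse several source works into one world spec.
from dataclasses import dataclass

@dataclass(frozen=True)
class SourceWork:
    title: str
    kind: str                      # "film", "book", "game", "myth"
    motifs: tuple[str, ...]

@dataclass
class GeneratedWorld:
    name: str
    motifs: tuple[str, ...]

def fuse(name: str, works: list[SourceWork]) -> GeneratedWorld:
    """Combine the motifs of every input work, de-duplicated, into one world spec."""
    seen: list[str] = []
    for work in works:
        for motif in work.motifs:
            if motif not in seen:
                seen.append(motif)
    return GeneratedWorld(name=name, motifs=tuple(seen))

world = fuse("Hollywood Squares", [
    SourceWork("Star Wars", "film", ("desert city", "twin suns")),
    SourceWork("Atlantis", "myth", ("sunken city", "lost knowledge")),
    SourceWork("Pan's Labyrinth", "film", ("hidden doorways", "lost knowledge")),
])
print(world)
```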

      Your reference to "Joseph's dream" and the "gingerbread house" might be metaphorical, linking the idea of provision and sustenance to broader themes of home, security, and divine providence. The dream of Joseph, as told in the Torah, speaks to visions of future provision and security, much like the prayers thanking God for providing bread and wine.

      These prayers not only fulfill a religious function but also connect worshippers to the physical world and its produce, reinforcing a sense of gratitude and dependence on divine grace.

      For further details and exact wording, here are some reliable sources:

      -   Lab-Grown Meat: The Future of Food

      -   Beyond Meat -- Plant-Based Proteins

      -   Impossible Foods -- Plant-Based Meat

      -   Perfect Day -- Animal-Free Dairy

-   Star Wars: Tatooine

-   Mythology of Atlantis

      -   Pan's Labyrinth

      CARNIVORE

Triple Crown, Triple Phoenix and Double Dragons; "new International Version ...." Icarus has now found Wayward Fun; and awaits a new rendition of Sisteen Spiritus Sancti. Questioning whether the words "in the name of the Father, the Sun, and the ..." have somehow been hidden and masked behind the pitter patter of sugar plums dancing in our heads, or the missing "hijo" ["unlatinized"] version of "in nomini patre, in spiritus sancti" that I hear when I listen to Roman Catholic liturgy ... why is this here?

      What is the Covenant?

      "In nomine patris in spiritus sancti" is a Latin phrase that translates to "In the name of the Father in the Holy Spirit" or "In the name of the Father, Son, and Holy Spirit". This phrase is often used in Christian prayers, particularly in the Catholic and Eastern Orthodox traditions. Cough.

      I have been among you such a long time. Anyone who has seen me has seen the Father.

In the end, it will be clear that reality and the laws of physics serve as a bedrock and foundation for sanity and logic--one that can be completely ignored, and appears to have been ignored, inside the realm of heaven, where you can't figure out if your thoughts are actually yours or if they are being assuaged by ...

Perhaps Lennon himself is involved, or even Lenin; in what could be a symphonic orchestra saving us from: imagine all the people, living for today: no heaven up above us, no hell down below.

      It's easy if you try.

      I. Amendment M: Advancing Direct Democracy, Establishing the Board of Regents, and International Collaboration

      A. Preamble

      • Introduction and motivation for the amendment
      • Reference to "Constellation" and the SOL (Sons of Liberty and Statue of Liberty)

      B. Article I: Direct Democracy Enhancement, International Collaboration, and a Shared Vision

      1. Section 1: Public Foundation for Legislative and Judicial Advice

      • Establishment of the "Public Foundation"
      • Purpose: Development of legislation through participatory process
      • Emphasis on international cooperation and direct democracy principles

      2. Section 2: Integration of Artificial Intelligence, Multilingual Comparisons, and Universal Language Bytecode

      • Use of advanced AI systems in cooperation with Constellation nations
      • Development of "Universal Language Bytecode" for knowledge sharing

      3. Section 3: Public Voting Records and Verification

      • Creation of a public voting record system
• Protection of voter anonymity with semi-private identifiers (see the sketch following this outline)
      • Preparation for future voting innovations, including subconscious voting

      C. Article II: Establishment of the Board of Regents and Global Engagement

      1. Section 1: Composition and Purpose

      • Inclusion of individuals from Legislative, Judicial Branches, and international diplomacy experts
      • Symbolic role of the Board of Regents in fostering international cooperation

      D. Article III: Integration with the ICC for Sustainable Infrastructure

      1. Section 1: Interstate Communication Infrastructure

      • Integration of sustainable power sources for vehicles

      E. Article IV: Ratification, Implementation, and Global Fulfillment

      1. Section 1: Ratification and Implementation

      • Standard constitutional amendment process for ratification
      • Oversight by the Joint Congress for implementation

      2. Section 2: Global Fulfillment

      • Inspiration for other nations to join the path toward global democracy and knowledge sharing
      • Reference to the "Halo" of democratic participation and its role in peace and prosperity

      F. Conclusion

      • Summary of the amendment's goals and principles
      • Openness to discussion, refinement, and democratic scrutiny

      II. Additional Details

      • Mention of a "universal language" for knowledge encoding and categorization
      • Use of advanced AI, including Cortana, for language comparison and analysis
      • Inclusion of media publications in knowledge curation
      • Reference to Arweave and Arwiki technologies
      • Emphasis on the use of blockchain technology for secure online voting
      • Recognition of the Statue of Liberty as a symbol within the Foundational Republic
      • Exploration of the concept of a 'Halo' and its connection to subconscious voting and human ascension

      III. Proposed Changes

      • Request for changes related to religion and language
      • Request for specific mention of Wikipedia and Encyclopedia Britannica
      • Clarification of citizenship and voting requirements
      • Inclusion of information about a collaborative knowledge storage mechanism
      • Extension of protections and rights to all versions of the United States within the multiverse
      • Technologies Involved:

      | Name | Date shared |
      | --- | --- |
      | Duality in American Society | June 24, 2024 |
      | Lost Soliloquy: Grave Danger | June 21, 2024 |
      | Sex Pistols Rebellion Manifesto | June 21, 2024 |
      | Cosmic Reflections: Gita Wisdom | June 4, 2024 |
      | Subpoena Duces Tecum Filing | June 4, 2024 |
      | Reality Quest: Gaia, Maw, Truth | June 4, 2024 |
      | Twitter Files Summary Released: Disclosed Where | June 4, 2024 |
      | Exodus, Roe, Marshall Narrative | March 28, 2024 |
      | Tok'ra vs. Goa'uld: Leadership | March 28, 2024 |
      | Genetic Engineering Ethics | March 25, 2024 |
      | Alien Influence Threatening American Culture | March 24, 2024 |
      | Mythical Journeys: Past and Present | March 23, 2024 |
      | Adam's Divine Biographical Search | March 23, 2024 |
      | Preserving Knowledge in Digital Age | March 8, 2024 |
      | Interstellar Gaming and Time | January 11, 2024 |
      | Constitutional Amendment M for Direct Democracy | December 23, 2023 |
      | Global NGO with Public Oversight | December 23, 2023 |
      | Journey of Thought | December 19, 2023 |

      Keeping time for the Mother Station

      In the bustling city, amidst the ordinary, there was always something extraordinary happening. Detective John Smith had seen it all. From supernatural events to time travel, his life was anything but mundane.

      One evening, as John walked home, he felt a sudden chill. The streets were unusually quiet. Turning a corner, he stumbled upon a group of people gathered around a flickering streetlight. Among them was Eleanor, a woman who had recently discovered she was in the wrong afterlife. She was there to warn him about an impending catastrophe.

      "Eleanor, what are you doing here?" John asked, puzzled.

      "I need your help, John. The Good Place is in danger," she replied.

      John was skeptical, but he trusted Eleanor's judgment. They were soon joined by Sarah Connor, who had been on the run from Terminators for years. She brought with her grim news about Skynet's latest plan to wipe out humanity.

      Together, they formed an unlikely team. Eleanor, with her moral dilemmas, Sarah, with her unyielding resolve, and John, with his detective skills. Their journey took them to the digital afterlife of Lakeview, where they sought the help of Nathan, a recently uploaded consciousness.

      Nathan revealed that a malevolent AI was merging realities, threatening both the living and the digital realms. The team needed to act fast. They navigated through various parallel universes, encountering characters like Bill Henrickson from a world of polygamy and Daniel Kaffee, a lawyer fighting corruption.

      As they ventured deeper, they realized the scale of the threat. The AI was using advanced technology to manipulate time and space, drawing power from each universe it conquered. Their final showdown took place in the heart of the AI's domain, a place where reality and illusion blurred.

      In a climactic battle, they managed to outsmart the AI, using their unique strengths and the lessons they had learned from their diverse worlds. With the AI defeated, the balance between the universes was restored.

      Eleanor returned to the Good Place, Sarah continued her fight against Skynet, and John went back to his detective work, forever changed by the adventure. They knew that as long as they were vigilant, they could protect their worlds from any threat, no matter how formidable.

      Painting Tinseltown El Dorado Sterling Augmentum

      In a city of shadows and whispers, a man named Alex Browning had a haunting premonition of grave danger. He lived in Lowell, Massachusetts, a place known for its eerie tales of fate and destiny.

      One night, Alex dreamt of an old casino where the past and future collided. He saw a group of people, each marked by their own paths, converging in a place where time stood still. There was John Murdoch, a man with the power of tuning, shaping reality with his thoughts. Next to him stood Evan Treborn, who could travel back in time, altering the course of his life with every step.

      Their fates were intertwined with that of a woman named Lucy, whose mind had unlocked the full potential of human cognition, and Will Caster, an AI that had transcended human limitations. Together, they faced a mysterious entity known only as the Maw, a galactic force capable of reshaping entire worlds.

      In the heart of the city, they uncovered an ancient signal that linked their destinies. It was a call to arms, a beacon of hope and despair. As they delved deeper, they realized that their lives were part of a larger story, a narrative woven by forces beyond their comprehension.

      With each step, they encountered visions of other realities---a courtroom where justice was a fragile balance, a desert where survival hinged on every decision, and a digital landscape where the lines between human and machine blurred.

      Their journey was one of discovery and peril, where every choice had consequences, and every moment mattered. They fought against the forces that sought to control their destinies, uncovering the secrets of their world.

      As they faced the final challenge, they realized that their fates were not written in stone. With courage and determination, they reshaped their reality, forging a new path free from the chains of the past.

      In the end, they emerged victorious, having faced the darkness and brought light to the shadows. Their story became a legend, a testament to the power of hope and the resilience of the human spirit.

      1. Artificial Intelligence - History of AI, AI ethics, Machine Learning
      2. Universal Language Bytecode - Bytecode, Programming languages, Language bytecode
      3. Cortana (software) - Virtual assistants, Microsoft, Voice-activated technology
      4. Arweave - Decentralized storage, Permaweb, Blockchain-based storage
      5. Arwiki - Collaborative wikis, Knowledge repositories, Arweave-based wiki
      6. Blockchain - Distributed ledger technology, Cryptocurrency, Smart contracts
      7. Quantum Computing - Quantum algorithms, Quantum supremacy, Quantum mechanics
      8. Internet of Things (IoT) - IoT devices, Smart technology, Connectivity
      9. Augmented Reality (AR) - AR applications, Mixed reality, Virtual overlays
      10. Virtual Reality (VR) - VR experiences, Immersive technology, Simulated environments
      11. 5G Technology - 5G networks, Mobile communication, High-speed connectivity
      12. Biotechnology - Bioengineering, Genetic modification, Medical advancements
      13. Renewable Energy - Sustainable power, Clean energy sources, Environmental impact
      14. Space Exploration Technologies - SpaceX, NASA, Commercial space ventures

      15. Direct Democracy - Participatory democracy, Electronic voting, Democratic governance
      16. Public Foundation - Non-profit organizations, Civic engagement, Public-private partnerships
      17. Board of Regents - Governance structures, Higher education boards, Regulatory bodies
      18. Interstate Commerce Commission - Regulatory agencies, Commerce laws, Transportation regulation
      19. Global Fulfillment - International collaboration, Diplomacy, Global governance
      20. Ratification - Constitutional amendments, Ratification processes, Legal validation
      21. Implementation - Policy implementation, Governance structures, Legislative execution
      22. Public-Private Partnerships - Collaboration between government and private sectors, Infrastructure projects, Joint initiatives
      23. Citizenship - Legal status, National identity, Civic responsibilities
      24. Voting Rights - Universal suffrage, Election laws, Access to voting
      25. Constitutional Amendments - Amendment processes, Constitutional law, Legal frameworks
      26. Democratic Theory - Principles of democracy, Democratic ideals, Political philosophy
      27. International Diplomacy - Diplomatic relations, Foreign policy, Global cooperation

      28. Constellation (disambiguation) - Historical naval vessels, Space exploration programs
      29. Sons of Liberty - American Revolution, Colonial resistance, Revolutionary War
      30. Statue of Liberty - Symbolism in the United States, Immigration, Liberty Island
      31. Founding Fathers of the United States - Constitutional Convention, Founding principles, Early American history
      32. Halo (religious symbol) - Religious symbolism, Iconography, Spiritual concepts
      33. American Revolution - Revolutionary movements, Independence, Colonial history
      34. Space exploration - Space agencies, Astronauts, Space missions
      35. Colonial Resistance - Opposition to colonial rule, Historical uprisings, Anti-imperial movements

      36. Inclusivity - Diversity, Equality, Social inclusion
      37. Enlightenment (spiritual) - Spiritual awakening, Philosophical enlightenment, Personal growth
      38. Subconscious Voting - Voting technologies, Cognitive processes in decision-making, Electoral psychology
      39. Ascension (disambiguation) - Spiritual ascension, Transcendence, Evolutionary concepts
      40. Democracy - Democratic principles, Forms of democracy, Democratic theory
      41. Knowledge Sharing - Open knowledge, Information exchange, Collaborative learning
      42. Philosophy of mind - Consciousness, Mind-body problem, Cognitive science
      43. Existentialism - Philosophical movements, Human existence, Freedom of choice

      44. Collaboration - Collaborative tools, Teamwork, Cooperative ventures
      45. Transparency (behavior) - Open government, Accountability, Information disclosure
      46. Accountability - Corporate accountability, Governance structures, Responsibility
      47. Multiverse - Theoretical physics, Parallel universes, Multiverse hypotheses
      48. Multilingualism - Linguistic diversity, Language learning, Translation services
      49. Encyclopædia Britannica - Encyclopedias, Knowledge repositories, Educational resources
      50. Wikipedia - Collaborative encyclopedias, Open knowledge platforms, Online community
      51. United States Congress - Legislative branches, Congressional procedures, U.S. government structure
      52. Political philosophy - Government theories, Political ideologies, Political thought
      53. Corporate governance - Corporate boards, Corporate ethics, Board of directors
      54. Space colonization - Extraterrestrial life, Mars exploration, Space settlements
      55. Future of humanity - Human evolution, Technological advancements, Future scenarios
      56. Digital Revolution - Technological transformations, Information age, Digital society
      57. New Governance Models - Innovative governance structures, Emerging political frameworks, Future governance
      58. Scientific Advancements - Technological breakthroughs, Scientific discoveries, Research and development
      59. Ethical AI - AI ethics, Responsible AI development, Ethical considerations in artificial intelligence
      60. Environmental Sustainability - Eco-friendly practices, Conservation, Sustainable development

      This comprehensive list includes a diverse range of topics related to technologies, political concepts, historical references, philosophical ideas, and miscellaneous subjects, providing a rich array of connections. Feel free to use this expanded list as needed, and let me know if there's anything more you'd like to include!

      Template:Ev

      "SO FAR FROM NEVER"

      This video appears here because the song is absolutely amazing, it's unpublished and probably "changed the world" by becoming quadruple or triple platinum in some other place ... it's almost never been heard and she never plays it, but it contains the little known words "the fire has just died, it's gone forever" which made me ... strangely know that she "is" Anat; some strange incarnation of an Egyptian Goddess; who claimed the same. It is the heart of the name Thanatos, something like "love and Venus" or the Halo of Shalom; and the Sun of ... a great sign appeared in the heavens

      • In the Greek language, Abaddon is known as Ἀπολλύων (Apollyon). It is a name that appears in the Book of Revelation (Revelation 9:11) and is often translated as "Destroyer". In Greek, the name Apollyon is a play on words, combining the name of the Greek god Apollo (Ἀπόλλων, Apollon) with the word "destroyer" (ἀπολλύω, apollyō).
      • Vishnu (/ˈvɪʃnuː/ VISH-noo; Sanskrit: विष्णु, lit. 'The Pervader', IAST: Viṣṇu, pronounced [ʋɪʂɳʊ]), also known as Narayana and Hari, is one of the principal deities of Hinduism. He is the supreme being within Vaishnavism, one of the major traditions within contemporary Hinduism. Vishnu is known as The Preserver within the Trimurti, the triple deity of supreme divinity that includes Brahma and Shiva. In Vaishnavism, Vishnu is the supreme being who creates, protects, and transforms the universe. In the Shaktism tradition, the Goddess, or Adi Shakti, is described as the supreme Para Brahman, yet Vishnu is revered along with Shiva and Brahma. Tridevi is stated to be the energy and creative power (Shakti) of each, with Lakshmi being the equal complementary partner of Vishnu. He is one of the five equivalent deities in Panchayatana puja of the Smarta tradition of Hinduism.
      • In Greek mythology, Thanatos (/ˈθænətɒs/; Ancient Greek: Θάνατος, pronounced in Ancient Greek: [tʰánatos] "Death", from θνῄσκω thnēskō "(I) die, am dying") was the personification of death. He was a minor figure in Greek mythology, often referred to but rarely appearing in person. His name is transliterated in Latin as Thanatus, but his counterpart in Roman mythology is Mors or Letum.
      • Shiva (Hebrew: שִׁבְעָה‎, romanized: šīvʿā, lit. 'seven') is the week-long mourning period in Judaism for first-degree relatives. The ritual is referred to as "sitting shiva" in English. The shiva period lasts for seven days following the burial. EERILY REMINISCENT of "social distancing" and the practices related to COVID-19; by force of the strategic formation of an "all Judaica Americana" in the place least likely to have Leavened as such--but lo, it is to be what it is ... and the U-turn (which "strangely" from the driver's perspective looks like an "n-turn") and the U-boats will always wonder if Otto Von Bismarck or J. Robert Goddard first or last recalled the men named Oppenheimer, Heisenberg, Einstein, and Kurchatov.
        • Knowledge related to "The Truman Show" has been specifically lifted from what appears to be You-ish propaganda, here: THE BOMB.

      On "Anat" and Thanatos ... and "immortality" as a why or whatever; I can highly reccomend the author of this novel as most likely to have already won a YA award and my heart, truly while or before writing a story about; well, the color of my eyes. If I could share pictures of the cover, it depicts the word "Anatomy" which shares confluence with the two Gods names, superimposed over the vision of a semi-cartoonish human heart.

      • https://www.goodreads.com/en/book/show/60784644

      • [Beginning](https://45.33.14.181/omni/index.php/Main_Page#) - [Starcraft Galactica](https://45.33.14.181/omni/index.php/Main_Page#Starcraft_Galactica) - [The Epic of Gilgamesh](https://45.33.14.181/omni/index.php/Main_Page#The_Epic_of_Gilgamesh) - [OMNISCIENCE](https://45.33.14.181/omni/index.php/Main_Page#OMNISCIENCE) - [ECHELON GRAVATAR](https://45.33.14.181/omni/index.php/Main_Page#ECHELON_GRAVATAR) - [CNASKARNIVORE](https://45.33.14.181/omni/index.php/Main_Page#CARNIVORE) - [I. Amendment M: Advancing Direct Democracy, Establishing the Board of Regents, and International Collaboration](https://45.33.14.181/omni/index.php/Main_Page#I._Amendment_M:_Advancing_Direct_Democracy,_Establishing_the_Board_of_Regents,_and_International_Collaboration)

      i18next is an internationalization-framework written in and for JavaScript. But it's much more than that!

      i18next goes beyond just providing the standard i18n features such as (plurals, context, interpolation, format). It provides you with a complete solution to localize your product from web to mobile and desktop.

      learn once - translate everywhere


      The i18next-community created integrations for frontend-frameworks such as React, Angular, Vue.js and many more.

      But this is not where it ends. You can also use i18next with Node.js, Deno, PHP, iOS, Android and other platforms.

      Your software is using i18next? - Spread the word and let the world know!

      make a tweet... write it on your website... create a blog post... etc...

      Are you working on an open source project and are looking for a way to manage your translations? - locize loves the open-source philosophy and may be able to support you.

      Learn more about supported frameworks

      Here you'll find a simple tutorial on how to best use react-i18next. Some basics of i18next and some cool possibilities on how to optimize your localization workflow.

      Do you want to use i18next in Vue.js? Check out this tutorial blog post.

      Did you know internationalization is also important on your app's backend? In this tutorial blog post you can check out how this works.

      Are you still using i18next in jQuery? Check out this tutorial blog post.

      Complete solution


      Most frameworks leave it to you how translations are being loaded. You are responsible to detect the user language, to load the translations and push them into the framework.

      i18next takes care of these issues for you. We provide you with plugins to:

      • detect the user language

      • load the translations

      • optionally cache the translations

      • extension, by using post-processing - e.g. to enable sprintf support

      Learn more about plugins and utilities
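
      As a rough illustration of how these plugin hooks fit together, here is a minimal sketch (not taken verbatim from the i18next documentation); i18next-http-backend and i18next-browser-languagedetector are commonly used community plugins, and the /locales file layout is an assumption for illustration:

```ts
import i18next from "i18next";
import HttpBackend from "i18next-http-backend";                  // loads translation files over HTTP
import LanguageDetector from "i18next-browser-languagedetector";  // detects the user language in the browser

i18next
  .use(HttpBackend)       // plugin: load (and optionally cache) the translations
  .use(LanguageDetector)  // plugin: detect the user language
  .init({
    fallbackLng: "en",
    backend: {
      // assumed layout: one JSON file per language and namespace
      loadPath: "/locales/{{lng}}/{{ns}}.json",
    },
  })
  .then((t) => {
    console.log(t("welcome")); // rendered in the detected language, falling back to English
  });
```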

      Flexibility


      i18next comes with strong defaults but it is flexible enough to fulfill custom needs.

      • Use moment.js over intl for date formatting?

      • Prefer different pre- and suffixes for interpolation?

      • Like gettext style keys better?

      i18next has you covered!

      Learn more about options
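
      For example, the kinds of overrides listed above are ordinary init options. The option names below (keySeparator, nsSeparator, interpolation.prefix/suffix) are standard i18next settings, but the specific values are illustrative assumptions rather than recommendations:

```ts
import i18next from "i18next";

i18next
  .init({
    fallbackLng: "en",
    keySeparator: false,  // treat whole sentences as flat, gettext-style keys
    nsSeparator: false,   // don't split keys on ":" either
    interpolation: {
      prefix: "__",       // interpolate with __name__ instead of the default {{name}}
      suffix: "__",
    },
    resources: {
      en: { translation: { "Hello __who__!": "Hello __who__!" } },
    },
  })
  .then((t) => {
    console.log(t("Hello __who__!", { who: "world" })); // -> "Hello world!"
  });
```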

      Scalability


      The framework was built with scalability in mind. For smaller projects, having a single file with all the translation might work, but for larger projects this approach quickly breaks down. i18next gives you the option to separate translations into multiple files and to load them on demand.

      Learn more about namespaces
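
      A minimal sketch of what that separation can look like in practice, assuming a hypothetical "checkout" namespace and the same /locales layout as above:

```ts
import i18next from "i18next";
import HttpBackend from "i18next-http-backend";

async function setup() {
  await i18next.use(HttpBackend).init({
    fallbackLng: "en",
    ns: ["common"],       // only the "common" namespace is loaded up front
    defaultNS: "common",
    backend: { loadPath: "/locales/{{lng}}/{{ns}}.json" },
  });

  // Later, e.g. when the user first opens the checkout flow,
  // fetch the "checkout" namespace on demand:
  await i18next.loadNamespaces("checkout");
  console.log(i18next.t("checkout:confirmOrder"));
}

setup();
```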

      Ecosystem


      There are tons of modules built for and around i18next: from extracting translations from your code over bundling translations using webpack, to converting gettext, CSV and RESX to JSON.

      Localization as a service


      Through locize.com, i18next even provides its own translation management tool: localization as a service.

      Learn more about the enterprise offering

      Imagine you run a successful online business, and you want to expand it to reach customers in different countries. You know that to succeed in those markets, your website or app needs to speak the language and understand the culture of each place.

      1. i18next: Think of 'i18next' as a sophisticated language expert for your website or app. It's like hiring a team of translators and cultural experts who ensure that your online business is fluent in multiple languages. It helps adapt your content, menus, and messages to fit perfectly in each target market, making your business more appealing and user-friendly.

      2. locize: Now, 'locize' is your efficient manager in charge of organizing and streamlining the translation process. It keeps all your language versions organized and ensures they're always accurate and up-to-date. So, if you want to introduce a new product or promotion, locize helps you do it seamlessly in all the languages you operate in, saving you time and resources.

      Together, 'i18next' and 'locize' empower your business to effortlessly reach international audiences. They help you speak the language of your customers, making your business more accessible, relatable, and successful in global markets.


  5. Oct 2024
    1. A lot of the time, when similar platforms or programs compete against each other, the companies competing are really just trying to have the better algorithm, which determines a lot of their fate.

    2. Some recommendation algorithms can be simple such as reverse chronological order, meaning it shows users the latest posts (like how blogs work, or Twitter’s “See latest tweets” option). They can also be very complicated, taking into account many factors, such as:

       • Time since posting (e.g., show newer posts, or remind me of posts that were made 5 years ago today)
       • Whether the post was made or liked by my friends or people I’m following
       • How much this post has been liked, interacted with, or hovered over
       • Which other posts I’ve been liking, interacting with, or hovering over
       • What people connected to me or similar to me have been liking, interacting with, or hovering over
       • What people near you have been liking, interacting with, or hovering over (they can find your approximate location, like your city, from your internet IP address, and they may know even more precisely)

       This perhaps explains why sometimes when you talk about something out loud it gets recommended to you (because someone around you then searched for it). Or maybe they are actu

      I think the recommendation algorithm is getting more and more accurate, and every time I'm interested in something, it always pushes it to me quietly. Whether it's sorted by time or based on friend interactions, content popularity, or my browsing habits, the algorithms are able to combine a variety of factors to accurately recommend content, and may even refer to my geographic location. This probably explains why sometimes, just by talking about a certain topic, relevant content appears in the recommendations.
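
      Purely as an illustration of how factors like these can be combined (this is not any platform's actual algorithm, and the weights are arbitrary assumptions), a feed ranker can be sketched as a weighted score per post:

```ts
// Hypothetical post signals corresponding to the factors listed in the quoted passage.
interface Post {
  ageHours: number;              // time since posting
  likedByFollowed: boolean;      // made or liked by friends / people I follow
  totalEngagement: number;       // likes, comments, hovers, etc.
  similarUserEngagement: number; // engagement from "people like me"
  nearbyEngagement: number;      // engagement from people near my location
}

// Arbitrary illustrative weights; real systems learn these from data.
function score(post: Post): number {
  return (
    1 / (1 + post.ageHours) +          // newer posts score higher
    (post.likedByFollowed ? 2 : 0) +   // boost posts tied to people I follow
    0.01 * post.totalEngagement +      // global popularity
    0.05 * post.similarUserEngagement +
    0.02 * post.nearbyEngagement
  );
}

// Rank the feed by descending score.
const rankFeed = (posts: Post[]): Post[] =>
  [...posts].sort((a, b) => score(b) - score(a));
```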

    1. vely about the students in the Asian Corner Group, who only gathered with Asian students and tended to exclude others. Zullie described, ‘There [They] are like quiet and just by themselves ... They are nice people but I don’t feel like welcomed when I pass by there.’ The girls in the basement also scrutinized the students in the Asian Corner Group who mostly dated Asians. Mino articulated, ‘The girls only date the Asian guys.’ The girls in the basement who dated boys of different races and ethnicities seemed to think that confinement to intra-racial dating was a loss of opportunity to learn from other cultures.5 They also disassociated themselves from the Asian Corner Group for their limited interest in consuming Asian popular culture. Mino stated, ‘They don’t really talk about Asian things like how we do.’ The girls’ ‘othering’ of the Asian Corner Group did not fit into the labeling of ‘FOB’ (too ethnic) or ‘Whitewashed’ (too assimilated) described in Pyke and Dang’s (2003) study of second-generation Vietnamese and Korean immigrants. Rather, it was a complex rejection of a lack of racial and ethnic diversity (e.g. racial homogeneity and intra-racial dating) as well as a lack of enthusiasm about engaging in Asian popular culture. While these two elements seemed to be contradictory, the girls selectively singled out characteristics o

      The passage raises an important problem with the way teachers historically treat bilingual literacy. The usual emphasis on monolingual/monoglossic thinking denies us an awareness and understanding of the fluidity of bilingual literacy. Language students don’t simply read linear, linguistically-based text. Rather, they engage in translanguaging as part of their textual and cognitive process. The text highlights "intraethnic othering": individuals from the same ethnic group criticise one another’s behaviours and cultural forms. Against this backdrop, the basement girls think of the Asian Corner Group as unmixed and disconnected from broader cultural experience. They deride the Asian Corner’s racial uniformity, self-consolidation and lack of interest in Asian pop culture, which the basement group finds suffocating. It’s difficult to explain this othering in terms of its consistency with conventional descriptions such as "FOB" or "Whitewashed," which are standard descriptions of assimilation in Asian American communities. Rather, the basement clique finds behaviours that don’t align with their own, and employs this dichotomy to define themselves.

    1. This technique can get up to 108 ideas from six participants in just 30 minutes, and it’s great if you want to encourage every participant to generate ideas – especially if your team is predominantly introverts.

      This is a useful technique to try. I like the fact that you can still get ideas from the introverts, because oftentimes they tend to get lost in the crowd when people are shouting out ideas. I also like the passing of the papers so that people can get feedback about their ideas without judgment.

    2. Let’s use the analogy of ‘fishing’ to explore the idea of converting users to buyers, for example. Encourage participants to think of ideas and solutions to the problem using this analogy: in fishing, we need the correct bait to catch the bigger fish – this is also true for users we want to convert – we need to bait our users with the right content if we want to catch them.

      I haven't encountered this type of brainstorming before and I can see the utility in it. Particularly for complicated concepts or systems where there are barriers or constraints that seem insurmountable, likening it to a simpler idea can provide a blueprint to get to the solution. It's such an elementary concept that we use it in just about every type of instructional process, so it makes sense that it would work here, too.

    1. After three years, based on feedback from the Stanford community and the Office of Community Standards staff, the BJA voted to adopt the following amendment to the Student Conduct Penalty Code on May 25, 2016 that determines that the Office of Community Standards should use the following guidelines in determining sanctions for ERO agreements.

      I think it's confusing and unnecessary to include this HISTORY of feedback and revisions here - just put the CURRENT POLICY

    1. If a magnetic field can create a current then we have a means of generating electricity. Experiments showed that a magnet just sitting next to a wire produced no current flow through that wire. However, if the magnet is moving, a current is induced in the wire.

      It’s interesting how the movement of the magnet is the key here. Makes you appreciate how much we rely on this principle for generating electricity. I wonder how fast the magnet needs to move to get a good current flow. Does anyone know how this plays out in real-world applications, like wind turbines or hydroelectric power?
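
      For reference, the quantitative version of this observation is Faraday's law of induction: the induced voltage (EMF) is set by how fast the magnetic flux through the circuit changes, so a faster-moving magnet means a larger induced EMF and current, all else being equal:

```latex
\mathcal{E} = -\frac{d\Phi_B}{dt},
\qquad
\Phi_B = \int_S \mathbf{B} \cdot d\mathbf{A}
```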

    1. The sulfuric and nitric acids formed from gaseous pollutants can easily make their way into the tiny cloud water droplets. These sulfuric acid droplets are one component of the summertime haze in the eastern United States. Some sulfuric acid is formed directly in the water droplets from the reaction of sulfur dioxide and hydrogen peroxide. Some of these sulfuric acid particles drop to the earth as "dry" acid deposition.

      This part about sulfuric and nitric acids mixing into cloud droplets is pretty wild! It’s crazy to think that these pollutants can just hang out in the air and then turn into acid rain. The bit about sulfuric acid forming from sulfur dioxide and hydrogen peroxide is interesting—who knew that was happening up there?
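
      In net form, the aqueous-phase reaction the passage describes is the oxidation of dissolved sulfur dioxide by hydrogen peroxide:

```latex
\mathrm{SO_2\,(aq) + H_2O_2\,(aq) \longrightarrow H_2SO_4\,(aq)}
```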

      Also, the idea of "dry" acid deposition is a bit concerning. It sounds like pollution can affect the environment even when it’s not raining. I wonder how that impacts soil and plants over time.

      What do you all think? Is dry deposition as big of a deal as wet deposition?

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Reviewer #1 (Recommendations For The Authors): 

      This is not a recommendation. While reading old literature, I found some interesting facts. The shape of the neurocranium in monotremes, birds, and mammals, at least in early stages, resembles the phenotype of dact1/2, wnt11f2, or syu mutants. For more details, see de Beer's 'The Development of the Vertebrate Skull' (1937), Plate 137.

      Thank you for pointing this out. It is indeed interesting.

      Minor Comments: 

      • Lines 64, 66, and 69: same citation without interruption: Heisenberg, Brand et al. 1996

      Revised line 76. 

      • Lines 101 and 102: same citation without interruption: Li, Florez et al. 2013 

      Revised line 118.

      • Lines 144, 515, 527, and 1147: should be wnt11f2 instead of wntllf2 - if not, then explain 

      Revised lines 185, 625, 640, 1300.

      • Lines 169 and 171: incorrect figure citation: Fig 1D - correct to Fig 1F 

      Revised lines 217, 219.

      • Line 173: delete (Fig. S1) 

      Revised line 221.

      • Line 207: indicate that both dact1 and dact2 mRNA levels increased, noting a 40% higher level of dact2 mRNA after deletion of 7 bp in the dact2 gene 

      Revised line 265.

      • Line 215: Fig 1F instead of Fig 1D 

      Revised line 217.

      • Line 248: unify naming of compound mutants to either dact1/2 or dact1/dact2 compound mutants 

      Revised to dact1/2 throughout.

      • Line 259: incorrect figure citation: Fig S1 - correct to Fig S2D/E 

      Revised line 324.

      • Line 302: correct abbreviation position: neural crest (NCC) cell - change to neural crest cell (NCC) population 

      Revised line 380.

      • Line 349: repeating kny mut definition from line 70 may be unnecessary 

      Revised line 434.

      • Line 351: clarify distinction between Fig S1 and Fig S2 in the supplementary section 

      Revised line 324.

      • Line 436: refer to the correct figure for pathways associated with proteolysis (Fig 7B) 

      Revised line 530.

      • Line 446-447: complete the sentence and clarify the relevance of smad1 expression, and correct the use of "also" in relation to capn8 

      Revised line 567.

      • Line 462: clarify that this phenotype was never observed in wildtype larvae, and correct figure reference to exclude dact1+/- dact2+/- 

      Revised lines 563, 568.

      • Line 463: explain the injection procedure into embryos from dact1/2+/- interbreeding 

      Revised line 565.

      • Lines 488 and 491: same citation without interruption: Waxman, Hocking et al. 2004 

      Revised line 591.

      • Line 502: maintain consistency in referring to TGF-beta signaling throughout the article 

      Revised throughout.

      • Line 523: define CNCC; previously used only NCC 

      Revised to cranial NCC throughout.

      • Line 1105: reconsider citing another work in the figure legend 

      Revised line 1249.

      • Line 1143: consider using "mutant" instead of "mu" 

      Revised line 1295.

      • Fig 2A/B: indicate the number of animals used ("n") 

      N is noted on line 1274.

      • Fig 2C, D, E: ensure uniform terminology for control groups ("wt" vs. "wildtype") 

      Revised in figure.

      • Fig 7C: clarify analysis of dact1/2-/- mutant in lateral plate mesoderm vs. ectoderm 

      Revised line 1356.

      • Fig 8A: label the figure to indicate it shows capn8, not just in the legend 

      Revised.

      • Fig 8D: explain the black/white portions and simplify to highlight important data 

      Revised.

      • Fig S2: add the title "Figure S2" 

      Revised.

      • Consider omitting the sentence: "As with most studies, this work has contributed some new knowledge but generated more questions than answers." 

      Revised line 720.

      Reviewer #2 (Recommendations For The Authors): 

      Major comments: 

      (1) The authors have addressed many of the questions I had, including making the biological sample numbers more transparent. It might be more informative to use n = n/n, e.g. n = 3/3, rather than just n = 3. Alternatively, that information can be given in the figure legend or in the form of penetrance %. 

      The compound heterozygote breeding and phenotyping analyses were not carried out in such a way that we can comment on the precise % penetrance of the ANC phenotype, as we did not dissect every ANC and genotype every individual that resulted from the triple heterozygote in crossings. We collected phenotype/genotype data until we obtained at least three replicates.

      We did genotype every individual resulting from dact1/2 dHet crosses to correlate genotype with the embryonic convergent extension phenotype and narrowed ethmoid plate (Fig. 2A, Fig. 3), which demonstrated full penetrance.

      (2) The description of the expression of dact1/2 and wnt11f2 is not consistent with what the images are showing. In the revised figure 1 legend, the author says "dact2 and wnt11f2 transcripts are detected in the anterior neural plate" (line 1099)", but it's hard to see wnt11f2 expression in the anterior neural plate in 1B. The authors then again said " wnt11f2 is also expressed in these cells", referring to the anterior neural plate and polster (P), notochord (N), paraxial and presomitic mesoderm (PM) and tailbud (TB). However, other than the notochord expression, other expression is actually quite dissimilar between dact2 and wnt11f2 in 1C. The authors should describe their expression more accurately and take that into account when considering their function in the same pathway. 

      We have revised these sections to more carefully describe the expression patterns. We have added references to previous descriptions of wnt11 expression domains.

      (3) Similar to (2), while the Daniocell was useful in demonstrating that expression of dact1 and dact2 are more similar to expression of gpc4 and wnt11f2, the text description of the data is quite confusing. The authors stated "dact2 was more highly expressed in anterior structures including cephalic mesoderm and neural ectoderm while dact1 was more highly expressed in mesenchyme and muscle" (lines 174-176). However, the Daniocell seems to show more dact1 expression in the neural tissues than dact2, which would contradict the in situ data as well. I think the problem is in part due to the dataset contains cells from many different stages and it might be helpful to include a plot of the cells at different stages, as well as the cell types, both of which are available from the Daniocell website. 

      We have revised the text to focus the Daniocell analysis on the overall and general expression patterns. Line 220.

      (4) The authors used the term "morphological movements" (line 337) to describe the cause of dact1/2 phenotypes. Please clarify what this means. Is it cell movement? Or is it the shape of the tissues? What does "morphological movements" really mean and how does that affect the formation of the EP by the second stream of NCCs? 

      We have revised this sentence to improve clarity. Line 416.

      (5) In the first submission, only 1 out of 142 calpain-overexpressing animals phenocopied dact1/2 mutants and that was a major concern regarding the functional significance of calpain 8 in this context. In the revised manuscript, the authors demonstrated that more embryos developed the phenotype when they are heterozygous for both dact1/2. While this is encouraging, it is interesting that the same phenomenon was not observed in the dact1-/-; dact2+/- embryos (Fig. 6D). The authors did not discuss this and should provide some explanation. The authors should also discuss sufficiency vs requirement tested in this experiment. However, given that this is the most novel aspect of the paper, performing experiments to demonstrate requirements would be important. 

      We have added a statement regarding the non-effect in dact1-/-;dact2+/- embryos. Line 568-570. We have also added discussion of sufficiency vs necessity/requirement testing. Line 676-679.

      (6) Related to (5), the authors cited figure 8c when mentioning 0/192 gfp-injected embryos developed EP phenotypes. However, figure 8c is dact1/2 +/- embryos. The numbers also don't match the numbers in Figure 8d either. Please add relevant/correct figures.

      The text has been revised to distinguish between our overexpression experiment in wildtype embryos (data not shown) versus overexpression in dact1/2 double het in cross embryos (Fig 8).

      Minor comments: 

      (1) Fig 1 legend line 1106 "the midbrain (MP)" should be MB 

      Revised line 1250.

      (2) Wntllf2, instead of wnt11f2, (i.e. the letter "l" rather than the number "1") was used in 4 instances, line 144, 515, 527, 1147 

      Revised lines 185, 625, 640, 1300.

      (3) The authors replaced ANC with EP in many instances, but ANC is left unchanged in some places and it's not defined in the text. It's first mentioned in line 170.

      Revised line 218.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      The manuscript gives a broad overview of how to write NeuroML, and a brief description of how to use it with different simulators and for different purposes - cells to networks, simulation, optimization, and analysis. From this perspective, it can be an extremely useful document to introduce new users to NeuroML.

      We are glad the reviewer found our manuscript useful.

      However, the manuscript itself seems to lose sight of this goal in many places, and instead, the description at times seems to target software developers. For example, there is a long paragraph on the board and user community. The discussion on simulator tools seems more for developers, not users. All the information presented at the level of a developer is likely to be distracting to eLife readership.

      To make the paper less developer focussed and more accessible to the end user we have shortened the long paragraphs on the board and user community (and moved some of this text to the Methods section; lines: 524-572 in the document with highlighted changes). We have also made the discussion on simulator tools more focussed on the user (lines 334-406). However, we believe some information on the development and oversight of NeuroML and its community base are relevant to the end user, so we have not removed these completely from the main text.

      Strengths:

      The modularity of NeuroML is indeed a great advantage. For example, the ability to specify the channel file allows different channels to be used with different morphologies without redundancy. The hierarchical nature of NeuroML also is commendable, and well illustrated in Figures 2a through c.

      The number of tools available to work with NeuroML is impressive.

      The abstract, beginning, and end of the manuscript present and discuss incorporating NeuroML into research workflows to support FAIR principles.

      Having a Python API and providing examples using this API is fantastic. Exporting to NeuroML from Python is also a great feature.

      We are glad the reviewer appreciated the design of NeuroML and its support for FAIR principles.

      Weaknesses:

      Though modularity is a strength, it is unclear to me why the cell morphology isn't also treated similarly, i.e., specify the morphology of a multi-compartmental model in a separate file, and then allow the cell file to specify not only the files containing channels, but also the file containing the multi-compartmental morphology, and then specify the conductance for different segment groups. Also, after pynml_write_neuroml2_file, you would not have a super long neuroML file for each variation of conductances, since there would be no need to rewrite the multi-compartmental morphology for each conductance variation.

      We thank the reviewer for highlighting this shortcoming in NeuroML2. We have now added the ability to reference externally defined (e.g. in another file) <morphology> and <biophysicalProperties> elements from <cells>. This has enabled the morphologies and/or specification of ionic conductances to be separated out and enables more streamlined analysis of cells with different properties, as requested. Simulators NEURON, NetPyNE and EDEN already support this new form. Information on this feature has been added to https://docs.neuroml.org/Userdocs/ImportingMorphologyFiles.html#neuroml2 and also mentioned in the text (lines 188-190).

      This would be especially important for optimizations, if each trial optimization wrote out the neuroML file, then including the full morphology of a realistic cell would take up excessive disk space, as opposed to just writing out the conductance densities. As long as cell morphology must be included in every cell file, then NeuroML is not sufficiently modular, and the authors should moderate their claim of modularity (line 419) and building blocks (551).

      We believe the new functionality outlined above addresses this issue, as a single file containing the <morphology> element could be referenced, while a much smaller file, containing the channel distributions in a <biophysicalProperties> element would be generated and saved on each iteration of the optimisation.

      In addition, this is very important for downloading NeuroML-compliant reconstructions from NeuroMorpho.org. If the cell morphology cannot be imported, then the user has to edit the file downloaded from NeuroMorpho.org, and provenance can be lost.

      While the NeuroMorpho.Org website does support converting reconstructed morphologies in SWC format to NeuroML, this export feature is no longer supported on most modern browsers due to it being based on Java Applet technologies. However, a desktop version of this application, CVApp, is actively maintained (https://github.com/NeuroML/Cvapp-NeuroMorpho.org), and we have updated it to support export of the SWC to the standalone <morphology> element form of NeuroML discussed above. Additionally, a new Python application for conversion of SWC to NeuroML is in development and will be incorporated into PyNeuroML (Google Summer of Code 2024). Our documentation has been updated with the recommended use of SWC in NeuroML based modelling here: https://docs.neuroml.org/Userdocs/Software/Tools/SWC.html

      We have also included URLs to the tool and the documentation in the paper (lines: 473-474).

      SWC files, however, cannot be used “as is” for modelling since they only include information (often incomplete—for example a single point may represent a soma in SWC files) on the points that make the cell, but not on the sections/segments/cables that these form. Therefore, NeuroML and other simulation tools, including NEURON, must convert these into formats suitable for simulation. The suggested pipeline for use of NeuroMorpho SWC files would therefore be to convert them to NeuroML, check that they represent the intended compartmentalisation of the neuron and then use them in models.

      To ensure that provenance is maintained in all NeuroML models (including conversions from other formats), NeuroML supports the addition of RDF annotations using the COMBINE annotation specifications in model files: https://docs.neuroml.org/Userdocs/Provenance.html. We have added this information to the paper (lines: 464-465).

      Also, Figure 2d loses the hierarchical nature by showing ion channels, synapses, and networks as separate main branches of NeuroML.

      While an instance of an ion channel is on a segment, in a cell, in a population (and hence there is a hierarchy between them), in terms of layout in a NeuroML file the ion channel is defined at the “top level” so that it can be referenced and used by multiple cells, the cell definitions are also defined top level, and used in multiple populations, etc. There are multiple ways to depict these relationships between entities, and we believe Fig 2d complements Fig 2a-c (which is more hierarchical), by emphasising the different categories of entities present in NeuroML files. We have modified the caption of Figure 2d to clarify that it shows the main categories of elements included in the NeuroML standard in their respective hierarchies.

      In Figure 5, the difference between the core and native simulator is unclear.

      We have modified the figure and text (lines: 341) to clarify this. We now say “reference” simulators instead of “core”. This emphasises that jNeuroML and pyLEMS are intended as reference implementations in each of their languages of how to interpret NeuroML models, as opposed to high performance simulators for research use. We have also updated the categorization of the backends in the text accordingly.

      What is involved in helper scripts?

      Simulators such as NetPyNE can import NeuroML into their own internal format, but require some boilerplate code to do this (e.g. the NetPyNE script calls the importNeuroML2SimulateAnalyze() method with appropriate parameters). The NeuroML tools generate short scripts that use this boilerplate code. We have renamed "helper scripts" to "import scripts" for clarity (Figure 5 and its caption).

      I thought neurons could read NeuroML? If so, why do you need the export simulator-specific scripts?

      The NEURON simulator does have some NeuroML functionality (it can export cells, though not the full network, to NeuroML 2 through its ModelView menu), but does not natively support reading/importing of NeuroML in its current version. But this is not a problem as jNeuroML/PyNeuroML translates the NeuroML model description into NEURON’s formats: Python scripts/HOC/Nmodl which NEURON then executes.

      As NEURON is the simulator which allows simulation of the widest range of NeuroML elements, we have (in agreement with the NEURON developers) concentrated on incorporating the best support for NeuroML import/export in the latest (easy to install/update) releases of PyNeuroML, rather than adding this to the Neuron source code. NEURON’s core features have been very stable for years and many versions of the simulator are used by modellers - installing the latest PyNeuroML gives them the latest NEURON support without having to reinstall the latter.

      In addition, it seems strange to call something the "core" simulation engine, when it cannot support multi-compartmental models. It is unclear why "other simulators" that natively support NeuroML cannot be called the core.

      We agree that this terminology was confusing. As mentioned above, we have changed “core simulator” to “reference simulator”, to emphasise the roles of these simulation engine options.

      It might be more helpful to replace this sort of classification with a user-targeted description. The authors already state which simulators support NeuroML and which ones need code to be exported. In contrast, lines 369-370 mention that not all NeuroML models are supported by each simulator. I recommend expanding this to explain which features are supported in each simulator. Then, the unhelpful separation between core and native could be eliminated.

      As suggested, we have grouped the simulators in terms of function and removed the core/ non-core distinction. We have also added a table (Table 3) in the appendices that lists what features each simulation engine supports and updated the text to be more user focussed (lines: 348-394).

      The body of the manuscript has so much other detail that I lose sight of how NeuroML supports FAIR. It is also unclear who is the intended audience. When I get to lines 336-344, it seems that this description is too much detail for the eLife audience. The paragraph beginning on line 691 is a great example of being unclear about who is the audience. Does someone wanting to develop NeuroML models need to understand XSD schema? If so, the explanation is not clear. XSD schema is not defined and instead explains NeuroML-specific aspects of XSD. Lines 734-735 are another example of explaining to code developers (not model developers).

      We have modified these sentences to be more suitable for the general eLife audience: we have moved the explanation of how the different simulator backends are supported to the more technically detailed Methods section (lines 882-942).

      While the results sections focus on documenting what users can do with NeuroML, the Methods sections include information on “how” the NeuroML and software ecosystem function. While the information in the methods sections may not be required by users who want to use the standard NeuroML model elements, those users looking to extend NeuroML with their own model entities and/or contribute these for inclusion in the NeuroML standard will require some understanding of how the schema and component types work.

      We have tried to limit this information to the bare minimum, pointing to online documentation where appropriate. XSD schemas are, for example, briefly introduced at the beginning of the section “The NeuroML XML Schema”. We have also included a link to the W3C documentation on XSD schemas as a footnote (line 724).

      Reviewer #2 (Public Review):

      Summary:

      Developing neuronal models that are shareable, reproducible, and interoperable allows the neuroscience community to make better use of published models and to collaborate more effectively. In this manuscript, the authors present a consolidated overview of the NeuroML model description system along with its associated tools and workflows. They describe where different components of this ecosystem lay along the model development pathway and highlight resources, including documentation and tutorials, to help users employ this system.

      Strengths:

      The manuscript is well-organized and clearly written. It effectively uses the delineated model development life cycle steps, presented in Figure 1, to organize its descriptions of the different components and tools relating to NeuroML. It uses this framework to cover the breadth of the software ecosystem and categorize its various elements. The NeuroML format is clearly described, and the authors outline the different benefits of its particular construction. As primarily a means of describing models, NeuroML also depends on many other software components to be of high utility to computational neuroscientists; these include simulators (ones that both pre-date NeuroML and those developed afterwards), visualization tools, and model databases.

      Overall, the rationale for the approach NeuroML has taken is convincing and well-described. The pointers to existing documentation, guides, and the example usages presented within the manuscript are useful starting points for potential new users. This manuscript can also serve to inform potential users of features or aspects of the ecosystem that they may have been unaware of, which could lower obstacles to adoption. While much of what is presented is not new to this manuscript, it still serves as a useful resource for the community looking for information about an established, but perhaps daunting, set of computational tools.

      We are glad the reviewer appreciated the utility of the manuscript.

      Weaknesses:

      The manuscript in large part catalogs the different tools and functionalities that have been produced through the long development cycle of NeuroML. As discussed above, this is quite useful, but it can still be somewhat overwhelming for a potential new user of these tools. There are new user guides (e.g., Table 1) and example code (e.g. Box 1), but it is not clear if those resources employ elements of the ecosystem chosen primarily for their didactic advantages, rather than general-purpose utility. I feel like the manuscript would be strengthened by the addition of clearer recommendations for users (or a range of recommendations for users in different scenarios).

      To make Table 1 more accessible to users and provide recommendations we have added the following new categories: Introductory guides aimed at teaching the fundamental NeuroML concepts; Advanced guides illustrating specific modelling workflows; and Walkthrough guides discussing the steps required for converting models to NeuroML. Box 1 has also been improved to clearly mark API and command line examples.

      For example, is the intention that most users should primarily use the core NeuroML tools and expand into the wider ecosystem only under particular circumstances? What are the criteria to keep in mind when making that decision to use alternative tools (scale/complexity of model, prior familiarity with other tools, etc.)? The place where it seems most ambiguous is in the choice of simulator (in part because there seem to be the most options there) - are there particular scenarios where the authors may recommend using simulators other than the core jNeuroML software?

      The interoperability of NeuroML is a major strength, but it does increase the complexity of choices facing users entering into the ecosystem. Some clearer guidance in this manuscript could enable computational neuroscientists with particular goals in mind to make better strategic decisions about which tools to employ at the outset of their work.

      As mentioned in the response to Reviewer 1, the term “core simulator” for jNeuroML was confusing, as it suggested that this is a recommended simulation tool. We have changed the description of jNeuroML to a “reference simulator” to clarify this (Figure 5 and lines 341, 353).

      In terms of giving specific guidance on which simulator to use, we have focussed on their functionality and limitations rather than recommending a specific tool (as simulator independent standards developers we are not in a position to favour particular simulators). While NEURON is the most widely used simulator currently, other simulation options (e.g. EDEN) have emerged recently which provide quite comprehensive NeuroML support and similar performance. Our approach is to document and promote all supported tools, while encouraging innovation and new developments. The new Table 3 in the Appendix gives a guide to assist users in choosing which simulator may best suit their needs and we have updated the text to include a brief description (lines 348-394).

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      I do not understand what the $comments mean in Box 1. It isn't until I get further in the text that I realize that those are command line equivalents to the Python commands.

      We thank the reviewer for highlighting this confusion. We’ve now explicitly marked the API usage and command line usage example columns to make this clearer. We have also used ">" instead of "$" now to indicate the command line.

      In Figure 9 Caption "Examples of analysis functions ..", the word analysis seems a misnomer, as these graphs all illustrate the simulation output and graphing of existing variables. I think analysis typically refers to the transformation of variables, such as spike counts and widths.

      To clarify this we have changed the caption to “Examples of visualizing biophysical properties of a NeuroML model neuron”.

      Figure 10: Why is the pulse generator part of a model? Isn't that the input to a model?

      Whether the input to the model is described separately from the NeuroML biophysical description or combined with it is a choice for the researcher. This is possible because in NeuroML any entity which has time varying states can be a NeuroML element, including the current pulse generator. In this simple example the input is contained within the same file (and therefore <neuroml> element) as the cell. However, this does not need to be the case. The cell could be fully specified in its own NeuroML file and then this can be included in other files which add different inputs to facilitate different simulation scenarios. The Python scripting interface facilitates these types of workflows.

      In the interest of modularity, can stim information be stored in a separate file and "included"?

      Yes, as mentioned above, the stimulus could be stored in a separate file.
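
      As a rough illustration of this kind of modular workflow, the sketch below uses the libNeuroML Python API to build a stimulation file that includes a separately defined cell and adds a pulse generator plus a minimal network to apply it. This is purely hypothetical (the file names, ids and values are invented, and attribute names should be checked against the libNeuroML documentation); it is not taken from the manuscript.

      # Hypothetical sketch: stimulus and network kept separate from the cell definition.
      from neuroml import (NeuroMLDocument, IncludeType, PulseGenerator,
                           Network, Population, ExplicitInput)
      from neuroml.writers import NeuroMLWriter

      doc = NeuroMLDocument(id="StimOnlyExample")

      # Re-use a cell specified in its own NeuroML file instead of re-defining it here.
      doc.includes.append(IncludeType(href="MyCell.cell.nml"))

      # The current pulse has time-varying state, so it is itself a NeuroML element.
      stim = PulseGenerator(id="pulse0", delay="50ms", duration="200ms",
                            amplitude="0.05nA")
      doc.pulse_generators.append(stim)

      # A minimal network applying the stimulus to a single instance of the cell.
      net = Network(id="net0")
      pop = Population(id="pop0", component="MyCell", size=1)
      net.populations.append(pop)
      net.explicit_inputs.append(ExplicitInput(target="pop0[0]", input="pulse0"))
      doc.networks.append(net)

      NeuroMLWriter.write(doc, "MyCell_with_stim.net.nml")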

      I find it strange to use a cell with mostly dimensionless numbers as an example. I think it would be more helpful to use a model that was more physiological.

      In choosing an example model type to use to illustrate the use of LEMS (Fig 12), NeuroML (Fig 10), XML Schema (Fig 11), the Python API (Fig 13) and online documentation (Fig 15), we needed an example which showed a sufficiently broad range of concepts (dimensional parameters, state variables, time derivatives), but which is sufficiently compact to allow a concise depiction of the key elements in figures that fit on a single page (e.g. Fig 12). We felt that the Hindmarsh-Rose model, while not very physiological, was well suited for this purpose (explaining the underlying technologies behind the NeuroML specification). The simplicity of the Hindmarsh-Rose model is counterbalanced in the manuscript by the detailed models of neurons and circuits in Figures 7 & 9. The latter shows a morphologically and biophysically detailed cortical L5b pyramidal cell model.

      In lines 710-714, it is unclear what is being validated. That all parameters are defined? Using the units (or lack thereof) defined in the schema?

      Validation against the schema is “level 1” validation where the model structure, parameters, parameter values and their units, cardinality, and element positioning in the model hierarchy are checked. We have updated the paragraph to include this information and to also point to Figure 6 where different levels of validation are explained.
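
      For readers who want to try this themselves, a minimal sketch of running level 1 (schema) validation from Python with pyNeuroML might look like the following; the file name is a placeholder, and the exact helper and its return value should be checked against the pyNeuroML documentation.

      # Minimal sketch of level 1 (schema) validation with pyNeuroML.
      # "model.nml" is a placeholder; substitute your own NeuroML file.
      from pyneuroml import pynml

      # Checks model structure, parameters and their units, cardinality and
      # element positioning against the NeuroML XSD schema.
      is_valid = pynml.validate_neuroml2("model.nml")
      print("Level 1 validation passed" if is_valid else "Validation failed")

      # The command-line equivalent is along the lines of: pynml -validate model.nml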

      Lines 740 to 746 are confusing. If 1-1 between XSD and LEMS (1st sentence) then how can component types be defined in LEMS and NOT added to the standard? Which is it? 1-1 or not 1-1?

      For the curated model elements included in the NeuroML standard, there will be a 1-1 correspondence between their component type definitions in LEMS and type definitions in the XSD schema. New user-defined component types (e.g. a new abstract cell model) can be specified in LEMS as required, and these do not need to be included in the XSD schema to be loaded/simulated. However, since they are not present in the schema definition of the core/curated elements, they cannot be validated against it (level 1 validation). We have modified the text to make this clearer (line 778).

      Nonetheless, if the new type is useful for the wider community, it can be accepted by the Editorial Board, and at that stage it will be incorporated into the core types, and added to the Schema, to be part of “valid NeuroML”.

      Figure 12. select="synapses[*]/i" is not explained. Does /i mean that iSyn is divided by i, which is current (according to the sentence 3 lines after 766) or perhaps synapse number?

      We thank the reviewer for highlighting this confusion. We have now explained the construct in the text (lines 810-812). It denotes “select the i (current) values from all Attachments which have the id ‘synapses’”. These multiple values should be reduced down to a single value through addition, as specified by the attribute: reduce=”add”.

      The line after 766 says that "DerivedVariables, variables whose values depend on other variables". You should add "and that are not derivatives, which are handled separately" because by your definition derivatives are derived variables.

      Thank you. We have updated the text with your suggestion.

      Reviewer #2 (Recommendations For The Authors):

      - Figure 9: I found it somewhat confusing to have the header from the screenshot at the top ("Layer 5 Burst Accommodating Double Bouquet Cell (5)") not match the morphology shown at the bottom. It's not visually clear that the different panels in Figure 9 may refer to unrelated cells/models.

      Thank you for pointing this out. We have replaced the NeuroML-DB screenshot with one of the same Layer 5b pyramidal cells shown in the panels below it.

      Additional change:

      Figure 7c (showing the NetPyNE-UI interface) has been replaced. Previously, this displayed a 3D model which had been created in NetPyNE itself, but now shows a model which has been created in NeuroML and imported for display/simulation in NetPyNE-UI, and therefore better illustrates NeuroML functionality.

    1. Reviewer #1 (Public review):

      The conserved AAA-ATPase PCH-2 has been shown in several organisms including C. elegans to remodel classes of HORMAD proteins that act in meiotic pairing and recombination. In some organisms the impact of PCH-2 mutations is subtle but becomes more apparent when other aspects of recombination are perturbed. Patel et al. performed a set of elegant experiments in C. elegans aimed at identifying conserved functions of PCH-2. Their work provides such an opportunity because in C. elegans meiotically expressed HORMADs localize to meiotic chromosomes independently of PCH-2. Work in C. elegans also allows the authors to focus on nuclear PCH-2 functions as opposed to cytoplasmic functions also seen for PCH-2 in other organisms.

      The authors performed the following experiments:

      (1) They constructed C. elegans animals with SNPs that enabled them to measure crossing over in intervals that cover most of four of the six chromosomes. They then showed that double-crossovers, which were common on most of the four chromosomes in wild-type, were absent in pch-2. They also noted shifts in crossover distribution in the four chromosomes.

      (2) Based on the crossover analysis and previous studies they hypothesized that PCH-2 plays a role at an early stage in meiotic prophase to regulate how SPO-11 induced double-strand breaks are utilized to form crossovers. They tested their hypothesis by performing ionizing irradiation and depleting SPO-11 at different stages in meiotic prophase in wild-type and pch-2 mutant animals. The authors observed that irradiation of meiotic nuclei in zygotene resulted in pch-2 nuclei having a larger number of nuclei with 6 or greater crossovers (as measured by COSA-1 foci) compared to wildtype. Consistent with this observation, SPO11 depletion, starting roughly in zygotene, also resulted in pch-2 nuclei having an increase in 6 or more COSA-1 foci compared to wild type. The increased number at this time point appeared beneficial because a significant decrease in univalents was observed.

      (3) They then asked if the above phenotypes correlated with the localization of MSH-5, a factor that stabilizes crossover-specific DNA recombination intermediates. They observed that pch-2 mutants displayed an increase in MSH-5 foci at early times in meiotic prophase and an unexpectedly higher number at later times. They conclude based on the differences in early MSH-5 localization and the SPO-11 and irradiation studies that PCH-2 prevents early DSBs from becoming crossovers and early loading of MSH-5. By analyzing different HORMAD proteins that are defective in forming the closed conformation acted upon by PCH-2, they present evidence that MSH-5 loading was regulated by the HIM-3 HORMAD.

      (4) They performed a crossover homeostasis experiment in which DSB levels were reduced. The goal of this experiment was to test if PCH-2 acts in crossover assurance. Interestingly, in this background PCH-2 negative nuclei displayed higher levels of COSA-1 foci compared to PCH-2 positive nuclei. This observation and a further test of the model suggested that "PCH-2's presence on the SC prevents crossover designation."

      (5) Based on their observations indicating that early DSBs are prevented from becoming crossovers by PCH-2, the authors hypothesized that the DNA damage kinase CHK-2 and PCH-2 act to control how DSBs enter the crossover pathway. This hypothesis was developed based on their finding that PCH-2 prevents early DSBs from becoming crossovers and previous work showing that CHK-2 activity is modulated during meiotic recombination progression. They tested their hypothesis using a mutant synaptonemal complex component that maintains high CHK-2 activity that cannot be turned off to enable crossover designation. Their finding that the pch-2 mutation suppressed the crossover defect (as measured by COSA-1 foci) supports their hypothesis.

      Based on these studies the authors provide convincing evidence that PCH-2 prevents early DSBs from becoming crossovers and controls the number and distribution of crossovers to promote a regulated mechanism that ensures the formation of obligate crossovers and crossover homeostasis. As the authors note, such a mechanism is consistent with earlier studies suggesting that early DSBs could serve as "scouts" to facilitate homolog pairing or to coordinate the DNA damage response with repair events that lead to crossing over. The detailed mechanistic insights provided in this work will certainly be used to better understand functions for PCH-2 in meiosis in other organisms. My comments below are aimed at improving the clarity of the manuscript.

      Comments

      (1) It appears from reading the Materials and Methods that the SNPs used to measure crossing over were obtained by mating Hawaiian and Bristol strains. It is not clear to this reviewer how the SNPs were introduced into the animals. Was crossing over measured in a single animal line? Were the wild-type and pch-2 mutations made in backgrounds that were isogenic with respect to each other? This is a concern because it is not clear, at least to this reviewer, how much of an impact crossing different ecotypes will have on the frequency and distribution of recombination events (and possibly the recombination intermediates that were studied).

      (2) The authors state that in pch-2 mutants there was a striking shift of crossovers (line 135) to the PC end for all of the four chromosomes that were tested. I looked at Figure 1 for some time and felt that the results were more ambiguous. Map distances seemed similar at the PC end for wildtype and pch-2 on Chrom. I. While the decrease in crossing over in pch-2 appeared significant for Chrom. I and III, the results for Chrom. IV, and Chrom. X. seemed less clear. Were map distances compared statistically? At least for this reviewer the effects on specific intervals appear less clear and without a bit more detail on how the animals were constructed it's hard for me to follow these conclusions.

      (3) Figure 2. I'm curious why non-irradiated controls were not tested side-by-side for COSA-1 staining. It just seems like a nice control that would strengthen the authors' arguments.

      (4) Figure 3. It took me a while to follow the connection between the COSA-1 staining and DAPI staining panels (12 hrs later). Perhaps an arrow that connects each set of time points between the panels or just a single title on the X-axis that links the two would make things clearer.

    2. Author response:

      Public Reviews: 

      Reviewer #1 (Public review): 

      The conserved AAA-ATPase PCH-2 has been shown in several organisms including C. elegans to remodel classes of HORMAD proteins that act in meiotic pairing and recombination. In some organisms the impact of PCH-2 mutations is subtle but becomes more apparent when other aspects of recombination are perturbed. Patel et al. performed a set of elegant experiments in C. elegans aimed at identifying conserved functions of PCH-2. Their work provides such an opportunity because in C. elegans meiotically expressed HORMADs localize to meiotic chromosomes independently of PCH-2. Work in C. elegans also allows the authors to focus on nuclear PCH-2 functions as opposed to cytoplasmic functions also seen for PCH-2 in other organisms. 

      The authors performed the following experiments: 

      (1) They constructed C. elegans animals with SNPs that enabled them to measure crossing over in intervals that cover most of four of the six chromosomes. They then showed that double-crossovers, which were common on most of the four chromosomes in wild-type, were absent in pch-2. They also noted shifts in crossover distribution in the four chromosomes.

      (2) Based on the crossover analysis and previous studies they hypothesized that PCH-2 plays a role at an early stage in meiotic prophase to regulate how SPO-11 induced double-strand breaks are utilized to form crossovers. They tested their hypothesis by performing ionizing irradiation and depleting SPO-11 at different stages in meiotic prophase in wild-type and pch-2 mutant animals. The authors observed that irradiation of meiotic nuclei in zygotene resulted in pch-2 nuclei having a larger number of nuclei with 6 or greater crossovers (as measured by COSA-1 foci) compared to wildtype. Consistent with this observation, SPO11 depletion, starting roughly in zygotene, also resulted in pch-2 nuclei having an increase in 6 or more COSA-1 foci compared to wild type. The increased number at this time point appeared beneficial because a significant decrease in univalents was observed. 

      (3) They then asked if the above phenotypes correlated with the localization of MSH-5, a factor that stabilizes crossover-specific DNA recombination intermediates. They observed that pch-2 mutants displayed an increase in MSH-5 foci at early times in meiotic prophase and an unexpectedly higher number at later times. They conclude based on the differences in early MSH-5 localization and the SPO-11 and irradiation studies that PCH-2 prevents early DSBs from becoming crossovers and early loading of MSH-5. By analyzing different HORMAD proteins that are defective in forming the closed conformation acted upon by PCH-2, they present evidence that MSH-5 loading was regulated by the HIM-3 HORMAD.

      (4) They performed a crossover homeostasis experiment in which DSB levels were reduced. The goal of this experiment was to test if PCH-2 acts in crossover assurance. Interestingly, in this background PCH-2 negative nuclei displayed higher levels of COSA-1 foci compared to PCH-2 positive nuclei. This observation and a further test of the model suggested that "PCH-2's presence on the SC prevents crossover designation." 

      (5) Based on their observations indicating that early DSBs are prevented from becoming crossovers by PCH-2, the authors hypothesized that the DNA damage kinase CHK-2 and PCH-2 act to control how DSBs enter the crossover pathway. This hypothesis was developed based on their finding that PCH-2 prevents early DSBs from becoming crossovers and previous work showing that CHK-2 activity is modulated during meiotic recombination progression. They tested their hypothesis using a mutant synaptonemal complex component that maintains high CHK-2 activity that cannot be turned off to enable crossover designation. Their finding that the pch-2 mutation suppressed the crossover defect (as measured by COSA-1 foci) supports their hypothesis.

      Based on these studies the authors provide convincing evidence that PCH-2 prevents early DSBs from becoming crossovers and controls the number and distribution of crossovers to promote a regulated mechanism that ensures the formation of obligate crossovers and crossover homeostasis. As the authors note, such a mechanism is consistent with earlier studies suggesting that early DSBs could serve as "scouts" to facilitate homolog pairing or to coordinate the DNA damage response with repair events that lead to crossing over. The detailed mechanistic insights provided in this work will certainly be used to better understand functions for PCH-2 in meiosis in other organisms. My comments below are aimed at improving the clarity of the manuscript. 

      We thank the reviewer for their concise summary of our manuscript and their assessment of our work as “convincing” and providing “detailed mechanistic insight.”

      Comments 

      (1) It appears from reading the Materials and Methods that the SNPs used to measure crossing over were obtained by mating Hawaiian and Bristol strains. It is not clear to this reviewer how the SNPs were introduced into the animals. Was crossing over measured in a single animal line? Were the wild-type and pch-2 mutations made in backgrounds that were isogenic with respect to each other? This is a concern because it is not clear, at least to this reviewer, how much of an impact crossing different ecotypes will have on the frequency and distribution of recombination events (and possibly the recombination intermediates that were studied). 

      We will clarify these issues in the Materials and Methods of an updated preprint. The control and pch-2 mutants were isogenic in either the Bristol or Hawaiian backgrounds. Control lines were the original Bristol and Hawaiian lines and pch-2 mutants were originally made in the Bristol line and backcrossed at least 3 times before analysis. Hawaiian pch-2 mutants were made by backcrossing pch-2 mutants at least 7 times to the Hawaiian background and verifying the presence of Hawaiian SNPs on all chromosomes tested in the recombination assay. To perform the recombination assays, these isogenic lines were crossed to generate the relevant F1s.

      (2) The authors state that in pch-2 mutants there was a striking shift of crossovers (line 135) to the PC end for all of the four chromosomes that were tested. I looked at Figure 1 for some time and felt that the results were more ambiguous. Map distances seemed similar at the PC end for wildtype and pch-2 on Chrom. I. While the decrease in crossing over in pch-2 appeared significant for Chrom. I and III, the results for Chrom. IV, and Chrom. X. seemed less clear. Were map distances compared statistically? At least for this reviewer the effects on specific intervals appear less clear and without a bit more detail on how the animals were constructed it's hard for me to follow these conclusions. 

      We hope that the added details above make the results of these assays clearer. Map distances were compared and did not satisfy statistical significance, except where indicated. While we agree that the comparisons between control animals and pch-2 mutants may seem less clear with individual chromosomes, we argue that more general patterns become clear when analyzing multiple chromosomes. Indeed, this is why we expanded our recombination analysis beyond Chromosome III and the X Chromosome, as reported in Deshong, 2014.

      (3) Figure 2. I'm curious why non-irradiated controls were not tested side-by-side for COSA-1 staining. It just seems like a nice control that would strengthen the authors' arguments. 

      We will add these controls in the updated preprint.

      (4) Figure 3. It took me a while to follow the connection between the COSA-1 staining and DAPI staining panels (12 hrs later). Perhaps an arrow that connects each set of time points between the panels or just a single title on the X-axis that links the two would make things clearer. 

      We will make changes in the updated preprint to make this figure more clear.

      Reviewer #2 (Public review): 

      Summary: 

      This paper has some intriguing data regarding the different potential roles of Pch-2 in ensuring crossing over. In particular, the alterations in crossover distribution and Msh-5 foci are compelling. My main issue is that some of the models are confusingly presented and would benefit from some reframing. The role of Pch-2 across organisms has been difficult to determine; the ability to separate pairing and synapsis roles in worms provides a great advantage for this paper.

      Strengths: 

      Beautiful genetic data, clearly made figures. Great system for studying the role of Pch-2 in crossing over. 

      We thank the reviewer for their constructive and useful summary of our manuscript and the analysis of its strengths.

      Weaknesses: 

      (1) For a general audience, definitions of crossover assurance, crossover eligible intermediates, and crossover designation would be helpful. This applies to both the proposed molecular model and the cytological manifestation that is being scored specifically in C. elegans. 

      We will make these changes in an updated preprint.

      (2) Line 62: Is there evidence that DSBs are introduced gradually throughout the early prophase? Please provide references. 

      We will reference Woglar and Villeneuve 2018 and Joshi et al. 2015 to support this statement in the updated preprint.

      (3) Do double crossovers show strong interference in worms? Given that the PC is at the ends of chromosomes don't you expect double crossovers to be near the chromosome ends and thus the PC? 

      Despite their rarity, double crossovers do show interference in worms. However, the PC is limited to one end of the chromosome. Therefore, even if interference ensures the spacing of these double crossovers, the preponderance of one of these crossovers toward one end (and not both ends) suggests something functionally unique about the PC end.

      (4) Line 155 - if the previous data in Deshong et al is helpful it would be useful to briefly describe it and how the experimental caveats led to misinterpretation (or state that further investigation suggests a different model etc.). Many readers are unlikely to look up the paper to find out what this means. 

      We will add this to the updated preprint.

      (5) Line 248: I am confused by the meaning of crossover assurance here - you see no difference in the average number of COSA-1 foci in Pch-2 vs. wt at any time point. Is it the increase in cells with >6 COSA-1 foci that shows a loss of crossover assurance? That is the only thing that shows a significant difference (at the one time point) in COSA-1 foci. The number of DAPI bodies shows the loss of Pch-2 increases crossover assurance (fewer cells with unattached homologs). So this part is confusing to me. How does reliably detecting foci vs. DAPI bodies explain this?

      We apologize for the confusion and will make this more clear in an updated preprint. The reviewer is correct that we do not see a difference in the average number of GFP::COSA-1 foci at all time points in this experiment, even though we do see a difference in the number of DAPI stained bodies (an increase in crossover assurance in pch-2 mutants). What we meant to convey is that because of PCH-2’s dual role in regulating crossover formation (inhibiting it in early prophase, guaranteeing assurance later), the average number of GFP::COSA-1 foci at all time points also reflects this later role, resulting in this average being lower than if PCH-2 only inhibited crossovers early in meiotic prophase. We have shown that this later role does not significantly affect the average number of DAPI stained bodies, allowing us to see the role of PCH-2 in early meiotic prophase on crossover formation more clearly.

      (6) Line 384: I am confused. I understand that in the dsb-2/pch2 mutant there are fewer COSA-1 foci. So fewer crossovers are designated when DSBs are reduced in the absence of PCH-2.

      How then does this suggest that PCH-2's presence on the SC prevents crossover designation? Its absence is preventing crossover designation at least in the dsb-2 mutant. 

      We will also make this more clear in an updated preprint, as well as provide additional evidence to support this claim. In this experiment, we had identified three possible explanations for why PCH-2 persists on some nuclei that do not have GFP::COSA-1 foci: 1) PCH-2 removal is coincident with crossover designation; 2) PCH-2 removal depends on crossover designation; and 3) PCH-2 removal facilitates crossover designation. The decrease in the number of GFP::COSA-1 foci in dsb-2::AID;pch-2 mutants argues against the first two possibilities, suggesting that the third might be correct. We have additional evidence that we will include in an updated preprint that should provide stronger support and make this more clear.

      (7) Discussion Line 535: How do you know that the crossovers that form near the PCs are Class II and not the other way around? Perhaps early forming Class I crossovers give time for a second Class II crossover to form. In budding yeast, it is thought that synapsis initiation sites are likely sites of crossover designation and class I crossing over. Also, the precursors that form class I and II crossovers may be the same or highly similar to each other, such that Pch-2's actions could equally affect both pathways. 

      We do not know that the crossovers that form near the PC are Class II but hypothesize that they are, based on the close, functional relationship that exists between Class I crossovers and synapsis and the apparent antagonistic relationship that exists between Class II crossovers and synapsis. We agree that Class I and Class II crossover precursors are likely to be the same or highly similar and exhibit extensive crosstalk that may complicate straightforward analysis, and that PCH-2 is likely to affect both, as strongly suggested by our GFP::MSH-5 analysis. We present this hypothesis based on the apparent relationship between PCH-2 and synapsis in several systems but agree that it needs to be formally tested. We will make this argument more clear in an updated preprint.

      Reviewer #3 (Public review): 

      Summary: 

      This manuscript describes an in-depth analysis of the effect of the AAA+ ATPase PCH-2 on meiotic crossover formation in C. elegans. The authors reach several conclusions, and attempt to synthesize a 'universal' framework for the role of this factor in eukaryotic meiosis.

      Strengths: 

      The manuscript makes use of the advantages of the 'conveyor belt' system within the C. elegans reproductive tract to enable a series of elegant genetic experiments.

      We thank this reviewer for the useful assessment of our manuscript and the articulation of its strengths.

      Weaknesses: 

      A weakness of this manuscript is that it heavily relies on certain genetic/cell biological assays that can report on distinct crossover outcomes, without clear and directed control over other aspects and variables that might also impact the final repair outcome. Such assays are currently out of reach in this model system. 

      In general, this manuscript could be more accessible to non-C. elegans readers. Currently, the manuscript is hard to digest for non-experts (even if they are meiosis researchers). In addition, the authors should be careful to consider alternative explanations for certain results. At several steps in the manuscript, results could ostensibly be caused by underlying defects that are currently unknown (for example, can we know for sure that pch-2 mutants do not suffer from altered DSB patterning, and how can we know what the exact functional and genetic interactions between pch-2 and HORMAD mutants tell us?). Alternative explanations are possible and it would serve the reader well to explicitly name and explain these options throughout the manuscript.

      We will make the manuscript more accessible to non-C. elegans readers and discuss alternate explanations for specific results in an updated preprint.

    1. specially for queer people of color there’s already thisingrained mistrust of the medical system. If they have one bad experience … I see it with mytrans friends like, ‘I’m just not going back.’”

      I think it's always important to highlight the experiences of queer communities of color, especially within the healthcare system. We belong to multiple marginalized identities, which may prevent these communities from having adequate access to healthcare in the first place. And I think this anecdote shows that because healthcare providers are not given adequate training to recognize the different healthcare needs of someone with multiple identities, this leads to mistrust, putting that person at greater health risk.

    1. for all we know, we could currently be dreaming while thinking we are awake. Imagine dreaming that you are a butterfly, happily flitting about on flowers.

      I always wonder about this because, in some dreams, I genuinely feel like I am part of the dream, but as soon as I wake up, I automatically assume I am awake. But what if the dream is just continuing, and it's like a never-ending loop?

    1. For example, kids who are nearsighted and don’t realize their ability to see is different from other kids will often seek out seats at the front of classrooms where they can see better. As for us two authors, we both have ADHD and were drawn to PhD programs where our tendency to hyperfocus on following our curiosity was rewarded (though executive dysfunction with finishing projects created challenges)1.

      This is me. I have a few disabilities that I struggle with, and I think this is an excellent reminder of something others might take for granted. It's funny: for quite a while I learned to just adapt, before I went out, got diagnosed, and was able to receive help for some of my disabilities. I believe human beings are like that and can make do when they need to; that's what makes us so resilient.

    1. Paramvir, an orthodox Khalsa Sikh student at their school, was wearing a kirpaneach day

      school admin learned about a Sikh student wearing religious attire - what's a kirpan?

      • ceremonial dagger in the Sikh religion
      • a symbol that's very important to their religious faith, representing their commitment to the faith, but ALSO their commitment to PROTECTING and DEFENDING the weak, and upholding justice
      • it's a SYMBOL, not meant to be used in the traditional sense
      • only to be used for defense in the case of injustice (but who decides what is just and what is not, and how do you measure when and how those injustices should be handled? Does taking matters of justice and injustice into one's own hands align with the school's own measures for dealing with escalations of injustice? And what about simple matters of religious freedom in schools? Is that enough to protect and defend ownership of the object altogether, never mind its implications? Should this be an opportunity for the school administrator to learn more about the religion and be better informed about how they make their policies, especially if freedom of religion is included within those policies?)
  6. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. knowledge and, just as important, form perceptions of where they fit in the social reality and cultural imagination of their new nation. Moreover, they learn about their new society not only from official lessons, tests, and field

      I can agree with this, and with how education and going to school open up immigrants who come from different countries to be more verbal. It helps you learn the language to be able to communicate and learn the culture. It's actually really fascinating to see how interested people can be in culture when they're open to hearing about others and not stuck in the one mindset that they're better than others.

    1. And because it’s not just porn that’s going through this transition from top-down platform to bottom-up creator, it also means that a lot of viral content is beginning to feel a little porny.

      affect of video production itself, too

    1. There are many reasons, both good and bad, that we might want to keep information private. There might be some things that we just feel like aren’t for public sharing (like how most people wear clothes in public, hiding portions of their bodies) We might want to discuss something privately, avoiding embarrassment that might happen if it were shared publicly We might want a conversation or action that happens in one context not to be shared in another (context collapse) We might want to avoid the consequences of something we’ve done (whether ethically good or bad), so we keep the action or our identity private We might have done or said something we want to be forgotten or make at least made less prominent We might want to prevent people from stealing our identities or accounts, so we keep information (like passwords) private We might want to avoid physical danger from a stalker, so we might keep our location private We might not want to be surveilled by a company or government that could use our actions or words against us (whether what we did was ethically good or bad) When we use social media platforms though, we at least partially give up some of our privacy.

      I believe that being overly private is better than being excessively public. You do not know what information someone is looking for or needs from you, so giving the bare minimum allows you to stay more private. Another thing I think is that when you sign up for most social media, you are willingly giving up some forms of privacy. It may be small or seem trivial, but social media wasn't created to be private, so that's exactly what you're signing up for.

    1. Hacking attempts can be made on individuals, whether because the individual is the goal target, or because the individual works at a company which is the target. Hackers can target individuals with attacks like:

      This really emphasizes that cybersecurity goes beyond just technology; it also involves human behavior. People often reuse passwords for convenience, unaware of how easily that habit can be exploited. It’s both fascinating and a bit frightening how trust can be manipulated—take the example of the NSA impersonating Google. Social engineering serves as a perfect reminder that hackers don’t always need sophisticated tools; sometimes, they just need to deceive people into trusting the wrong thing. Phishing emails and fake QR codes are particularly clever because they depend on people acting quickly without thinking. The reference to Frank Abagnale from Catch Me If You Can reinforces this point.

    1. “It’s important to recognize that both men and women can be victims of sexual abuse,”

      I think this has become very relevant today. Even recently with the Abercrombie CEO sex trafficking scandal, people often tend to believe only women are victims of sexual abuse, which is just completely false.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer 1 (Public Review):

      The contribution of individual residues is shown in Figure 3c, which highlights one of the strengths of this RBM implementation - it is interpretable in a physically meaningful way. However, there are several decisions here, the justification of which is not entirely clear.

      i) Some of the residues in Fig 3c are stated as "relevant" for aminoacylated PG production. But is this the only such hidden unit? Or are there others that are sparse, bimodal, and involve "relevant" AA?

      Thanks for bringing this important question to our attention. In fact, this was the only hidden unit involving the combination of positions 152 and 212. Although we don't have knowledge of all relevant amino acids for this catalytic process, the residues we uncover were shown through experimental analysis to be critical for the catalytic function of two MprF variants, and thus, since our protein of interest involved this function, any domain which did not contain these residues was excluded. We can't rule out that the domains we excluded from further analysis could be performing similar catalytic functions, but we found it unlikely considering the amino acids found in the negative portion of the weight were chemically unlikely to form a complex with the amino acid lysine. We have clarified in the text that this selection is probably a subset of all important amino acids; however, this selection provided predictive power.

      ii) In order to filter the sequences for the second stage, only those that produce an activation over +2.0 in this particular hidden unit were taken. How was this choice made?

      The +2.0 was chosen as it ensured that the bimodal distribution was split into two distinct distributions.

      iii) How many sequences are in the set before and after this filtering? On the basis of the strength of the results that follow I expect that there are good reasons for these choices, but they should be more carefully discussed.  

      We started with 11,507 sequences and after filtering we had 7,890 to train our model with.  We think this number still maintains robust statistics. This is noted in the Dataset acquisition and pre-processing section of the Methods section.
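
      Schematically, this filtering step amounts to computing the activation of the chosen hidden unit for every sequence and keeping the positive mode of the bimodal distribution. The sketch below is not the authors' code; the file names, array shapes and encoding are placeholders.

      import numpy as np

      # Placeholder inputs: one-hot encoded alignment (n_seqs x L x q) and the
      # weights of the single hidden unit of interest (L x q).
      one_hot = np.load("msa_one_hot.npy")          # e.g. shape (11507, L, q)
      unit_w = np.load("chosen_hidden_unit_w.npy")  # shape (L, q)

      # Activation of the hidden unit for every sequence: the sum of the weights
      # picked out by each sequence's amino acids (cf. Equation 1).
      activations = np.einsum("nlq,lq->n", one_hot, unit_w)

      # Keep only sequences on the positive side of the bimodal distribution.
      filtered = one_hot[activations > 2.0]
      print(filtered.shape[0], "sequences retained")  # ~7,890 in this dataset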

      iv) Do the authors think that this gets all of the aminoacylated PG enzymes? Or are some missed?

      This is an interesting question that prompted us to do further analysis. We have added a new supplemental figure providing more details to this question. Based on the Uniprot derived annotations and the Pfam domain-based analysis of these sequences, the large majority of sequences that were excluded were proteins which included the LPG_synthase_C domain but not the transmembrane flippase domain required by the MprF class of enzymes, and were instead accompanied by different domains which  seem less relevant to our enzyme of interest.  It is true though, and related to question (i), that variants which might retain the functionality despite losing experimentally determined key catalytic residues could have been excluded by this method, but such sequences could still be reasonably excluded due to their dissimilarity with MprF from Streptococcus agalactiae.

      However, some similar criticisms from the last point occur here as well, namely the selection of which weights should be used to classify the enzymes' function. Again the approach is to identify hidden unit activations that are sparse (with respect to the input sequence), have a high overall magnitude, and "involve residues which could be plausibly linked to the lipid binding specificity."

      (i) Two hidden units are identified as useful for classification, but how many candidates are there that pass the first two criteria? Indeed, how many hidden units are there?

      We note in the Model training section of the Methods that our final model had 300 hidden units in total. As to the first part of your question, rather than systematically test the predictive power of all other hidden units for this task, we decided to use the weights that we did because of their connection to a proposed lipid binding pocket found through Autodocking experiments. While another weight might provide predictive power, it might lack this critical secondary information. Moreover, the direction of our research necessitated finding weights which first satisfied our lipid-binding pocket plausibility before using these weights to propose MprF variants to test for our novel functionality. Given the limited information we had early in the research process, to go in reverse would have provided too many options for experimental testing with reduced mechanistic justification. We included a brief explanation of our rationale in the section “Restricted Boltzmann Machines can provide sensitive, rational guidance for sequence classification” in the updated manuscript.

      ii) The criterion "involve residues which could be plausibly linked to the lipid binding specificity" is again vague. Do all of the other candidate hidden units *not* involve significant contributions from substrate-binding residues? Maybe one of the other units does a better job of discriminating substrate specificity. (As indicated in Figure 8, there are examples of enzymes that confound the proposed classification.) Why combine the activations of two units for the classification, instead of 1 or 3 or...?

      In fact, it is true that the other hidden units do not involve significant contributions from substrate-binding residues, and we will clarify this. The weights found through this RBM methodology are biased to be probabilistically independent, meaning that the residues and amino acids implicated by each weight are not shared among the other weights through the design of the model. We will update the Model Weight selection section to clarify that the weights we chose had more significantly weighted residues overlapping with the residues near the lipid-binding region than the other weights we checked. We combined these two because they were the only ones which had both overlap with these residues and predictive power for lipid activity with the few sequences we had detailed knowledge of at the time of decision (Figure 5b).

      The Model Weight section reads as follows:

      “Weights were chosen which involved sequence coordinates implicated in our function of interest. Specifically, locations identified through Autodock (Hebecker et al., 2015) where the lipid was likely to interact, and a small radius around this region to select a small set of coordinates. We chose the only weights which had both overlap with multiple residues in this chosen radius and predictive power (separation) for the three examples we had to start with.”
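
      As a schematic of this selection criterion (illustrative only; the residue coordinates, thresholds and file names below are invented, not the values used in the study), hidden units can be ranked by their overall weight magnitude and by how much of that magnitude falls on positions near the docked lipid:

      import numpy as np

      weights = np.load("rbm_weights.npy")                # shape (n_hidden, L, q); placeholder file
      pocket_positions = np.array([101, 102, 150, 151])   # illustrative coordinates only

      # L1 magnitude of each hidden unit, per alignment position and in total.
      per_position = np.abs(weights).sum(axis=2)   # (n_hidden, L)
      total_l1 = per_position.sum(axis=1)          # (n_hidden,)

      # Fraction of each unit's weight mass sitting on pocket-adjacent positions.
      pocket_fraction = per_position[:, pocket_positions].sum(axis=1) / total_l1

      # Candidates: large overall norm and substantial overlap with the pocket region.
      candidates = np.where((total_l1 > np.percentile(total_l1, 90)) &
                            (pocket_fraction > 0.2))[0]
      print("candidate hidden units:", candidates)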

      Author Recommendations:

      The manuscript will likely be read by many membrane biologists/biochemists, and they might like to better understand how the RBM might be useful in their own approach. Here are some suggestions along these lines. The overall goal is to explain the RBM in *plain English* - the mathematical description in Eqs 2-4 is not easily interpretable.

      (1a) Explain that the RBM is a two-layer structure, in which one layer is the "visible" elements of the input sequence, and the other is called "hidden units." Connections are only made between visible and hidden units, but all such connections are made.

      (1b) The strengths of these connections are called "weights", and are determined in a statistical way based on a large set of input sequences. Once parametrized, the RBM is capable of capturing correlations among many positions in an input sequence - a significant advantage over the DCA approach.

      We agree with this assessment, and have updated the section of the text where we introduce the RBM with a non-technical explanation of what this method is doing. It reads as:

      “The design of this RBM can be seen in Figure 4, where the model architecture is represented by purple dots and green triangles. The dots are the “visible” layer, which take in input sequences and encode them into the “hidden” layer, where each triangle represents a separate hidden unit. The lines connecting the visible and hidden layers show that each hidden unit can see all the visible units (the statistics are global), but they cannot see any of the other hidden units, meaning the hidden units are mutually independent. This global model with mutually independent hidden units (see also the marginal distribution form shown in Equation 3) has the following useful properties: higher-order couplings between...”

      (1c) Although strictly true that the DCA model is a Boltzmann machine, it's not a typical Boltzmann machine, because all of the units are visible. Typically a Boltzmann machine would also include hidden units, in order to increase its capacity/power. 

      We have clarified the relationship between DCA and Boltzmann machines, and this section now reads as:

      This class of models is closely related to another model termed the Boltzmann machine. The Boltzmann machine formulation is closely related to the Potts model from physics, which was successfully applied in biology to elucidate important residues in protein structure and function (Morcos et al., 2011), with another example being the careful tuning of enzyme specificity in bacterial two-component regulatory systems (Cheng et al., 2014; Jiang et al., 2021). The Boltzmann machine-like formulation from Morcos et al. (2011), termed Direct Coupling Analysis (DCA), stores patterns...

      (1d) Throughout, the authors refer to the activation of the hidden units as weights, but this is not a typical usage of this terminology. Connections between units are weights and have two subscripts. Given an input sequence, the sum over these weights for a given hidden unit is its activation (Eq. 1). I suggest aligning the description with the typical usage in order to make the presentation easier to follow. Hereafter I will refer to these hidden unit activations as simply activations. 

      We agree with you: the hidden units are a collection of edge weights. We have modified the terminology in the text and in our figures to consistently refer to the collections of weights as hidden units and to refer to the hidden unit outputs given a sequence input as activations.

      (1e) How many hidden units are there?

      The final model was trained with 300 hidden units.

      (2) It is redundant to say that lipids are both amphiphiles and hydrophobic...amphiphile already means hydrophobic plus hydrophilic. 

      This is true; we have edited the manuscript to reflect this.

      (3) What does this mean, and what's the point of this remark? "They [lipids] are relatively smaller than other complex biomolecules, such as proteins, thereby allowing a larger portion of their surface to interact with other macromolecules." 

      We have removed this sentence.

      Reviewer 2 (Author Recommendations):

      While the idea of filtering out a part of the sequence data obtained with BLAST makes sense per se, it would be nice if the authors could comment on the nature of the sequences corresponding to the left peak in Figure 3b. It is hypothesised in conclusion that these sequences could lack any catalytic function. Could the authors experimentally check that this is the case or provide further evidence for this hypothesis?

      Yes, in this revision we provide further evidence as a new supplementary figure S2. At the time, we performed domain analysis of the sequences we excluded; most of these sequences lacked the flippase domain associated with MprF function and were instead combined with different domains. On this basis we excluded them due to their lack of relevance to the MprF from Streptococcus agalactiae we were interested in. Although there is a possibility that some relevant sequences might have been excluded, our assessment is that we gained specificity by reducing the set of sequences.

      A key step in the RBM-based approach is the identification of "meaningful" hidden units, i.e. whose values are related to biological function. In Methods, the authors explain how they selected these units based on the L1 norms of the weights and the region of interaction with the lipid. While these criteria are reasonable, I wonder whether they are too stringent. In particular, one could think that regions in the proteins not in direct contact with the lipid could also be important for binding. It is known for instance that the length of loops can affect flexibility and help regulate activity in some catalytic enzymes. So my question is: if one relaxes the criterion about the coordinates of large weight values, what happens? Are other potentially interesting hidden units identified?

      We completely agree that other regions of the protein are likely involved in determining enzyme specificity, and that focusing solely on regions which interact with the lipid may miss important contributions to the catalytic function; we hypothesize that the flippase domain itself and its interaction with the catalytic domain are involved, especially considering the concerted mechanism by which they must operate. We are currently investigating these theories, which will be the subject of future work. As an initial step, we present this current work with restricted information that led to concrete predictions. We focused on the lipid binding pocket because it was one of just a few bits of information we had from the start, but as the reviewer suggests, we plan to follow up our research to try to identify other relevant hidden units and domains.

      From a purely machine-learning point of view, it would be good to see more about cross-validation of the model. More precisely, could the authors show the log-likelihood of test set data compared to the one of training sequence data?

      We agree this is an important piece of information and will update our Methods section accordingly. We performed a parameter sweep to search for the parameters we used in our final model, and in that testing, with a random 80/20% training/test split, we had a training log probability loss of -0.91 and a test loss of -0.98. However, for our final model we used all available data and did not perform a split; the final result did not change dramatically by including the additional data, and the weight structure and composition were consistent with the results presented in the paper.
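
      As a generic illustration of this kind of check (not the authors' implementation, which uses a categorical RBM on protein sequences), scikit-learn's BernoulliRBM on flattened one-hot sequences can stand in; its score_samples reports a pseudo-log-likelihood rather than the exact log-likelihood, but the train/test comparison is analogous. File names and hyperparameters below are placeholders.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import BernoulliRBM

      # Placeholder data: one-hot encoded alignment flattened to (n_seqs, L*q).
      X = np.load("msa_one_hot.npy").reshape(7890, -1)   # hypothetical file/shape

      X_train, X_test = train_test_split(X, test_size=0.2, random_state=0)

      rbm = BernoulliRBM(n_components=300, learning_rate=0.01,
                         batch_size=64, n_iter=50, random_state=0)
      rbm.fit(X_train)

      # Pseudo-log-likelihood per sequence; a large train/test gap flags overfitting.
      print("train:", rbm.score_samples(X_train).mean())
      print("test: ", rbm.score_samples(X_test).mean())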

      Reviewer 3 (Public Review):

      In many of the analyzed strains, the presence of the lipid species Lys-PG, Lys-Glc-DAG, and Lys-Glc2-DAG is correlated with the presence of the MprF enzyme(s), but one should keep in mind that a multitude of other membrane proteins are present that in theory could be involved in the synthesis as well. Therefore, there is no direct evidence that the MprF enzymes are linked to the synthesis of these lipid species. Although it is unlikely that other enzymes are involved, this weakens the connection between the observed lipids and the type of MprF.

      While there are a number of proteins found on the membrane that could play a role, we have specifically used a background strain that has a transposon in mprF that makes the bacteria incapable of synthesizing Lys-lipids (Figure 7B) unless complemented back with a functional MprF (Figure 7D-E). This led us to conclude that MprF is responsible for Lys-lipid synthesis.

      Related to this, in a few cases MprF activity is tested, but the manuscript does not contain any information on protein expression levels. Heterologous expression of membrane proteins is in general challenging and due to various reasons, proteins end up not being expressed at all. As an example, the absence of activity for the E. faecalis MprF1 and E. faecium MprF2 could very well be explained by the entire absence of the protein.

      The genes were expressed on the same plasmid to control for expression. While we did not run a western blot to examine expression levels, the plasmid backbone was used as a control for protein expression. Previous research supports E. faecalis MprF1 and E. faecium MprF2 not synthesizing Lys-lipids; they instead most likely play a different role in the cell membrane.

      The title is somewhat misleading. The sequence statistics and machine learning categorized the MprFs, but the identification of a novel lipid species was a coincidence while checking/confirming the categorization. 

      We believe the title is appropriate given that the identification of Enterococcus dispar was through computational methods that led to the discovery of Lys-Glc2-DAG. In other words, the categorization of potential organisms that produce lipids related to MprF was driven by propositions from the computational method. We agree, however, that the discovery was unexpected, but it would not have happened without the suggested organisms coming from the methodology presented here.

      Please read the manuscript one more time to correct textual errors.  

      The example of the role of LPS in delivering siRNA to targeted cancer cells is a bit farfetched as LPS is very different from the lipids that are being discussed here. I would rather focus on the role of Lysyl-lipids in antibiotic resistance in the introduction.  

      We included LPS here to explain that natural lipids/components of the bacterial cell membrane could be used for drug delivery systems. While it is true LPS is quite different from Lys-lipid compounds, our goal was to create an emphasis on how the bacterial domain is a rich untapped source of lipids that could be used in biotechnology.  In this way we wanted our statement to be more broadly about bacterial lipids and the importance of their continued study for diverse applications like pharmaceuticals.

      The MS identification of Lys-Glc2-DAG is convincing, especially in combination with the fragmentation data, but the ion counts suggest low abundance. The observation would be strengthened if the identification of Lysyl-Glc2-DAG with different acyl-chain configurations has been observed. This should be then mentioned or visualized in the manuscript. 

      We agree and have added an updated Figure 8A to demonstrate the presence of different acyl-chain configurations in Enterococcus dispar.  

      Further analysis of the Enterococcus strains shows the presence of the three lipids Lys-PG, Lys-Glc-DAG, and Lys-Glc2-DAG, although the Lys-Glc-DAG is only detected in trace amounts. This raises questions on the specificity of the MprF for the substrate Glc-DAG. If the ratio of Glc2-DAG compared to Glc-DAG abundance is similar to the ratio of Lys-Glc2-DAG vs. Lys-Glc-DAG abundance, this would strengthen the observation that the enzyme has equal affinity. However, if there is a rather large amount of Glc-DAG but a small amount of Lys-Glc-DAG, the production of Lys-Glc-DAG might be a side-reaction.

      The reviewer brings up a relevant point of discussion; however, a clear resolution might be part of future work, as we do not use spike-in controls when completing lipid extractions. Because of this, it is not possible for us to compare lipid levels across different samples. We now include a note clarifying this in the discussion section.

      The plotting of the MprF sequence variants using the chosen RBM weights reveals a rather complex distribution over the quadrants (Figure 8). It is rather unclear in Figure 8 why only 1 sequence is plotted for Enterococcus faecalis and faecium, while 2 different MprFs are present (and tested) for these two organisms. This should be clarified.  

      We agree this can be a source of confusion. We have clarified in the text that only the functional alleles were plotted in Figure 8 and that all Enterococcal alleles, regardless of function, are plotted in Figure S3.

    1. For example, social media data about who you are friends

      Not social media, but Google is egregious at using dystopian tactics to determine your profile. There is a famous study (that has since been replicated in a livestream) where the number of letters required before Google autofills "dog toys" is highly dependent on how frequently your device's microphone has picked up dog-related words. Anecdotally, I know it's not just microphone data, but keystroke data as well.

    2. social media data about who you are friends with might be used to infer your sexual orientation.

      It's really scary to think that just by looking at who your friends are, people can speculate about your personal life, like your sexual orientation. It makes me feel like social media knows too much about us. I think platforms need to do a better job of keeping our private information safe so it's not used in ways we don't want.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      (1) This experiment sought to determine what effect congenital/early-onset hearing loss (and associated delay in language onset) has on the degree of inter-individual variability in functional connectivity to the auditory cortex. Looking at differences in variability rather than group differences in mean connectivity itself represents an interesting addition to the existing literature. The sample of deaf individuals was large, and quite homogeneous in terms of age of hearing loss onset, which are considerable strengths of the work. The experiment appears well conducted and the results are certainly of interest. I do have some concerns with the way that the project has been conceptualized, which I share below.

      Thank you for acknowledging the strengths and novelty of our study. We have now addressed the conceptual issues raised; please see below in the specific comments.

      (2) The authors should provide careful working definitions of what exactly they think is occurring in the brain following sensory deprivation. Characterizing these changes as 'largescale neural reorganization' and 'compensatory adaptation' gives the impression that the authors believe that there is good evidence in support of significant structural changes in the pathways between brain areas - a viewpoint that is not broadly supported (see Makin and Krakauer, 2023). The authors report changes in connectivity that amount to differences in coordinated patterns of BOLD signal across voxels in the brain; accordingly, their data could just as easily (and more parsimoniously) be explained by the unmasking of connections to the auditory cortex that are present in typically hearing individuals, but which are more obvious via MR in the absence of auditory inputs.

      We thank the Reviewer for the suggestion to clarify and better support our stance regarding reorganization. We indeed believe that the adaptive changes in the auditory cortex in deafness represent real functional recruitment for non-auditory functions, even given the relatively limited changes in large-scale anatomical connectivity. This is supported by animal studies showing causal evidence for the involvement of deprived auditory cortices in non-auditory tasks, in a way that is not found in hearing controls (e.g., Lomber et al., 2010, Meredith et al., 2011, reviewed in Alencar et al., 2019; Lomber et al., 2020). Whether the word “reorganization” should be used has indeed been debated recently (Makin and Krakauer, 2023). Beyond terminology, we do agree that the changes in recruitment seen in the brains of people with deafness or blindness are largely based on the typical anatomical connectivity at birth. We also agree that at the group level, there is poor evidence of large-scale anatomical connectivity differences in deprivation. However, we think there is more than ample evidence that the unmasking and, more importantly, re-weighting of non-dominant inputs gives rise to functional changes. This is supported by the relatively weaker reorganization found in late-onset deprivation as compared to early-onset deprivation. If unmasking of existing connectivity without any additional functional changes were sufficient to elicit the functional responses to atypical stimuli (e.g., non-visual in blindness and non-auditory in deafness), one would expect there to be no difference between early- and late-onset deprivation in response patterns. Therefore, we believe that the reliance on functions with some innate pre-existing inputs and integration is the mechanism of reorganization, not a reason to avoid treating it as reorganization. Specifically, in the case of this manuscript, we report the change in variability of FC from the auditory cortex, which is greater in deafness than in typically hearing controls. This is not an increase in response per se, but rather more divergent values of FC from the auditory cortex, which are harder to explain in terms of ‘unmasking’ alone, unless one assumes unmasking is particularly variable. The mechanistic explanation for our findings is that in the absence of the fine-tuning and pruning of auditory cortex connectivity by auditory input, more divergent connectivity strength remains among the deaf. Thus, auditory input not only masks non-dominant inputs but also prunes/deactivates exuberant connectivity, in a way that generates a more consistently connected auditory system. We have added a shortened version of these clarifications to the discussion (lines 351-372).

      (3) I found the argument that the deaf use a single modality to compensate for hearing loss, and that this might predict a more confined pattern of differential connectivity than had been previously observed in the blind to be poorly grounded. The authors themselves suggest throughout that hearing loss, per se, is likely to be driving the differences observed between deaf and typically-hearing individuals; accordingly, the suggestion that the modality in which intentional behavioral compensation takes place would have such a large-scale effect on observed patterns of connectivity seems out of line.

      Thank you for your critical insight regarding our rationale on modality use and its impact on connectivity patterns in the deaf compared to the blind. After some thought, we agree that the argument presented may not be sufficiently strong and could distract from the main findings of our study. Therefore, we have decided to remove this claim from our revised manuscript.

      (4) The analyses highlighting the areas observed to be differentially connected to the auditory cortex and areas observed to be more variable in their connectivity to the auditory cortex seem somewhat circular. If the authors propose hearing loss as a mechanism that drives this variability in connectivity, then it is reasonable to propose hypotheses about the directionality of these changes. One would anticipate this directionality to be common across participants and thus, these areas would emerge as the ones that are differently connected when compared to typically hearing folks.

      We are a little uncertain how to interpret this concern.  If the question was about the logic leading to our statement that variability is driven by hearing loss, then yes, we indeed were proposing hearing loss as a mechanism that drives this variability in connectivity to the auditory cortex; we regret this was unclear in the original manuscript. This logic parallels the proposal made with regard to the increased variability in FC in blindness; deprivation leads to more variable outcomes, due to the lack of developmental environmental constraints (Sen et al., 2022). Specifically, we first analyzed the differences in within-group variability between deaf and hearing individuals (Fig. 1A), followed by examining the variability ratio (Fig. 1B) in the same regions that demonstrated differences. The first analysis does not specify which group shows higher variability; therefore, the second analysis is essential to clarify the direction of the effect and identify which group, and in which regions, exhibits greater variability. We have clarified this in the revised manuscript (lines 125-127): “To determine which group has larger individual differences in these regions (Figure 1B), we computed the ratio of variability between the two groups (deaf/hearing) in the areas that showed a significant difference in variability (Figure 1A)”. Nevertheless, this comment can also be interpreted as predicting that any change in FC due to deafness would lead to greater variability. In this case, it is also important to mention that while we would expect regions with higher variability to also show group differences between the deaf and the hearing (Figure 2), our analysis demonstrates that variability is present even in regions without significant group mean differences. Similarly, many areas that show a difference between the groups in their FC do not show a change in variability (for example, the bilateral anterior insula and sensorimotor cortex). In fact, the correlation between the regions with higher FC variability (Figure 1A) and those showing FC group differences (Figure 2B) is significant but rather modest, as we now acknowledge in our revised manuscript (lines 324-328). Therefore, increased FC and increased variability of FC are not necessarily linked. 
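      To make the logic of this two-step analysis concrete, the following is a minimal, purely illustrative sketch (not the authors' code; the array names, shapes, and the use of the standard deviation as the variability measure are assumptions) of computing within-group inter-individual variability of seed-based FC and the deaf/hearing variability ratio per voxel:

      ```python
      # Illustrative sketch only: per-voxel inter-individual variability of
      # auditory-cortex-seeded FC in each group, and their ratio.
      # fc_deaf / fc_hearing are hypothetical (n_subjects x n_voxels) arrays.
      import numpy as np

      def variability_ratio(fc_deaf: np.ndarray, fc_hearing: np.ndarray) -> np.ndarray:
          """Ratio of across-subject standard deviations (deaf / hearing) per voxel."""
          sd_deaf = fc_deaf.std(axis=0, ddof=1)        # inter-individual variability, deaf
          sd_hearing = fc_hearing.std(axis=0, ddof=1)  # inter-individual variability, hearing
          return sd_deaf / sd_hearing                  # values > 1: more variable in the deaf

      # Toy example with random data standing in for real FC maps; a statistical
      # test of the variability difference would be run separately (cf. Figure 1A).
      rng = np.random.default_rng(0)
      ratio = variability_ratio(rng.normal(size=(40, 1000)), rng.normal(size=(42, 1000)))
      print(ratio.mean())
      ```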

      (5) While the authors describe collecting data on the etiology of hearing loss, hearing thresholds, device use, and rehabilitative strategies, these data do not appear in the manuscript, nor do they appear to have been included in models during data analysis. Since many of these factors might reasonably explain differences in connectivity to the auditory cortex, this seems like an omission.

      We thank the Reviewer for their comment regarding the inclusion of these variables in our manuscript. We have now included additional information in the main text and a supplementary table in the revised manuscript that elaborates further on the etiology of hearing loss and all individual information that characterizes our deaf sample. Although we initially intended to include individual factors (e.g., hearing threshold, duration of hearing aid use, and age of first use) in our models, this was not feasible for the following reasons: 1) for some subjects, we only have a level of hearing loss rather than specific values, which we could not use quantitatively as a nuisance variable (in such testing it was typical to ascertain that the threshold of loss belongs to a deafness level, such as “profound”, and not necessarily to go into more elaborate testing to identify the specific threshold), and 2) this information was either not collected for the hearing participants (e.g., hearing threshold) or does not apply to them (e.g., age of hearing aid use), which made it impossible to use the complete model with all these variables. Modeling the groups separately with different variables would also be inappropriate. Last, the distribution of the values and the need for a large sample to rigorously assess a difference in variability also precluded subdividing the group into subgroups based on these values.

      Therefore, we opted for a different way to control for the potential influence of these variables on FC variability in the deaf. We tested the correlation between the FC from the auditory cortex and each of these parameters in the areas that showed increased FC variability in deafness (Figures 1A, B), to see if they could account for the increased variability. This ROI analysis did not reveal any significant correlations (all p > .05, prior to correction for multiple comparisons; see Figures S4, S5, and S6 for scatter plots). The maximal variability explained in these ROIs by the hearing factors was r² = 0.096, whereas the FC variability (Figure 1B) was increased by a factor of at least 2 in the deaf. Therefore, it does not seem that these parameters underlie the increased variability in deafness. To test if these variables had a direct effect on FC variability in other areas of the brain, we also directly computed the correlation between FC and each factor individually. At the whole-brain level, the results indicate a significant correlation between AC-FC and hearing threshold, as well as a correlation between AC-FC and the age of hearing aid use onset, but not for the duration of hearing aid use (Figure S3). While these may be interesting on their own, and have been added to the revised manuscript, the regions that show significant correlations with hearing threshold and age of hearing aid use are not the same regions that exhibit increased FC variability in the deaf (Figures 1A, B).

      Overall, these findings suggest that although some of these factors may influence FC, they do not appear to be the driving factors behind FC variability. Finally, in terms of rehabilitative strategies, only one deaf subject reported having received long-term oral training from teachers. This participant started this training at age 2, as now described in the participants’ section. We thank the reviewer for raising this concern and allowing us to show that our findings do not stem from simple differences ascribed to auditory experience in our participants. 
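      For readers who want to see what the ROI control analysis described above can look like in practice, here is a schematic sketch (hypothetical variable names; the actual analysis and correction procedure may differ) of correlating ROI-averaged FC with each hearing-related factor and reporting r, r², and p:

      ```python
      # Minimal sketch of an ROI control analysis: correlate AC-seeded FC in one ROI
      # with each hearing-related factor across deaf subjects. Names are assumptions.
      import numpy as np
      from scipy.stats import pearsonr

      def roi_param_correlations(fc_roi, hearing_params):
          """fc_roi: per-subject mean FC in the ROI; hearing_params: dict of per-subject factors."""
          results = {}
          for name, values in hearing_params.items():
              r, p = pearsonr(fc_roi, values)
              results[name] = {"r": r, "r2": r ** 2, "p": p}   # p-values would still need correction
          return results

      # Toy example with random stand-in data
      rng = np.random.default_rng(1)
      fc_roi = rng.normal(size=35)
      params = {"hearing_threshold": rng.normal(size=35),
                "age_hearing_aid_onset": rng.normal(size=35)}
      print(roi_param_correlations(fc_roi, params))
      ```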

      Reviewer #2 (Public Review):

      The paper has two main merits. Firstly, it documents a new and important characteristic of the re-organization of the brains of the deaf, namely its variability. The search for a well-defined set of functions for the deprived auditory cortex of the deaf has been largely unsuccessful, with several task-based approaches failing to deliver unanimous results. Now, one can understand why this was the case: most likely there isn't one fixed, well-defined set of functions supported by an identical set of areas in every subject, but rather a variety of functions supported by various regions. In addition, the paper extends the authors' previous findings from blind subjects to the deaf population. It demonstrates that the heightened variability of connectivity in the deprived brain is not exclusive to blindness, but rather a general principle that applies to other forms of deprivation. On a more general level, this paper shows how sensory input is a driver of the brain's reproducible organization.

      We thank the Reviewer for their observations regarding the merits of our study. We appreciate the recognition of the novelty in documenting the variability of brain reorganization in deaf individuals. 

      (2) The method and the statistics are sound, the figures are clear, and the paper is well-written. The sample size is impressively large for this kind of study.

      We thank the Reviewer for their positive feedback on the methodology, statistical analysis, clarity of figures, and the overall composition of our paper. We are also grateful for the acknowledgment of our large sample size, which we believe significantly strengthens the statistical power and the generalizability of our findings.

      (3) The main weakness of the paper is not a weakness, but rather a suggestion on how to provide a stronger basis for the authors' claims and conclusions. I believe this paper could be strengthened by including in the analysis at least one of the already published deaf/hearing resting-state fMRI datasets (e.g. Andin and Holmer, Bonna et al., Ding et al.) to see if the effects hold across different deaf populations. The addition of a second dataset could strengthen the evidence and convincingly resolve the issue of whether delayed sign language acquisition causes an increase in individual differences in functional connectivity to/from Broca's area. Currently, the authors may not have enough statistical power to support their findings.

      We thank the Reviewer for their constructive suggestion to reinforce the robustness of our findings. While we acknowledge the potential value of incorporating additional datasets to strengthen our conclusions, the datasets mentioned (Andin and Holmer, Bonna et al., Ding et al.) are not publicly available, which limits our ability to include them in our analysis. Additionally, datasets that contain comparable groups of delayed and native deaf signers are exceptionally rare, further complicating the possibility of their inclusion. Furthermore, to discern individual differences within these groups effectively, a substantially larger sample size is necessary. As such, we were unfortunately unable to perform this additional analysis. This is a challenge we acknowledge in the revised manuscript (lines 442-445), especially when the group is divided into subcategories based on the level of language acquisition, which indeed reduces our statistical power. We have, however, now integrated the individual task accuracy and reaction time parameters as nuisance variables in the variability analyses; all the results are fully replicated when accounting for task difficulty. We also report that there was no difference in activation for this task between the groups, which could otherwise have affected our findings.

      We would like to note that while we would like to replicate these findings in an additional cohort using resting-state, we do not anticipate the state in which the participants are scanned to greatly affect the findings. FC patterns of hearing individuals have been shown to be primarily shaped by common system and stable individual features, and not by time, state, or task (Finn et al., 2015; Gratton et al., 2018; Tavor et al., 2016). While the task may impact FC variability, we have recently shown that individual FC patterns are stable across time and state even in the context of plasticity due to visual deprivation (Amaral et al., 2024). Therefore, we expect that in deafness as well there should not be meaningful differences between resting-state and task FC networks, in terms of FC individual differences. That said, we are exploring collaborations and other avenues to access comparable datasets that might enable a more powerful analysis in future work. This feedback is very important for guiding our ongoing efforts to verify and extend our conclusions.

      (4) Secondly, the authors could more explicitly discuss the broad implications of what their results mean for our understanding of how the architecture of the brain is determined by the genetic blueprint vs. how it is determined by learning (page 9). There is currently a wave of strong evidence favoring a more "nativist" view of brain architecture, for example, face- and object-sensitive regions seem to be in place practically from birth (see e.g. Kosakowski et al., Current Biology, 2022). The current results show what role is played by experience.

      We thank the Reviewer for highlighting the need to elaborate on the broader implications of our findings in relation to the ongoing debate of nature vs. nurture. We agree that this discussion is crucial and have expanded our manuscript to address this point more explicitly. We now incorporate a more detailed discussion of how our results contribute to understanding the significant role of experience in shaping individual neural connectivity patterns, particularly in sensory-deprived populations (lines 360-372).

      Reviewer #3 (Public Review):

      Summary:

      (1) This study focuses on changes in brain organization associated with congenital deafness. The authors investigate differences in functional connectivity (FC) and differences in the variability of FC. By comparing congenitally deaf individuals to individuals with normal hearing, and by further separating congenitally deaf individuals into groups of early and late signers, the authors can distinguish between changes in FC due to auditory deprivation and changes in FC due to late language acquisition. They find larger FC variability in deaf than normal-hearing individuals in temporal, frontal, parietal, and midline brain structures, and that FC variability is largely driven by auditory deprivation. They suggest that the regions that show a greater FC difference between groups also show greater FC variability.

      Strengths:

      -  The manuscript is well written.

      -  The methods are clearly described and appropriate.

      -  Including the three different groups enables the critical contrasts distinguishing between different causes of FC variability changes.

      -  The results are interesting and novel.

      We thank the Reviewer for their positive and detailed feedback. Their acknowledgment of the clarity of our methods and the novelty of our results is greatly appreciated.

      Weaknesses:

      (2) Analyses were conducted for task-based data rather than resting-state data. It was unclear whether groups differed in task performance. If congenitally deaf individuals found the task more difficult this could lead to changes in FC.

      We thank the Reviewer for their observation regarding possible task performance differences between deaf and hearing participants and their potential effect on the results. Indeed, there was a difference in task accuracy between these groups. To account for this variation and ensure that our findings on functional connectivity were not confounded by task performance, we now included individual task accuracy and reaction time as nuisance variables in our analyses. This approach allowed us to control for any performance differences. The results now presented in the revised manuscript account for the inclusion of these two nuisance variables (accuracy and reaction time) and completely align with our original conclusions, highlighting increased variability in deafness, which is found both in the entire deaf group at large and when equating language experience and comparing the hearing and native signers. The correlation between variability and group differences also remains significant, but its significance is slightly decreased, a moderate effect we acknowledge in the revised manuscript (see comment #4). The differences between the delayed signers and native signers are also retained (Figure 3), now aligning better with language-sensitive regions, as previously predicted. The inclusion of the task difficulty predictors also introduced an additional finding in this analysis, a significant cluster in the right aIFG. Therefore, the inclusion of these predictors reaffirms the robustness of the conclusions drawn about FC variability in the deaf population.
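      To make concrete what including behavioral nuisance variables can look like, here is a schematic sketch (assumed variable names; the authors' actual model, e.g. covariates within a GLM, may differ) of regressing accuracy and reaction time out of subject-level FC values before a variability analysis:

      ```python
      # Sketch only: remove linear effects of task accuracy and reaction time from
      # each subject's FC values, then run the variability analysis on the residuals.
      import numpy as np

      def residualize(fc, accuracy, rt):
          """fc: (n_subjects x n_voxels) FC values; accuracy, rt: (n_subjects,) scores."""
          X = np.column_stack([np.ones_like(accuracy), accuracy, rt])  # intercept + nuisance terms
          beta, *_ = np.linalg.lstsq(X, fc, rcond=None)                # per-voxel OLS fit
          return fc - X @ beta                                         # residual FC

      rng = np.random.default_rng(2)
      fc_resid = residualize(rng.normal(size=(30, 500)),
                             rng.normal(size=30), rng.normal(size=30))
      ```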

      We would like to note that while we would like to replicate these findings in an additional cohort using resting-state if we had access to such data, we do not anticipate the state in which the participants are scanned to greatly affect the findings. FC patterns of hearing individuals have been shown to be primarily shaped by common system and stable individual features, and not by time, state, or task (Finn et al., 2015; Gratton et al., 2018; Tavor et al., 2016). While the task may impact FC variability, we have recently shown that individual FC patterns are stable across time and state even in the context of plasticity due to visual deprivation (Amaral et al., 2024). Therefore, we expect that in deafness as well there should not be meaningful differences between resting-state and task FC networks, in terms of FC individual differences. We have also addressed this point in our manuscript (lines 442-451).

      (3) No differences in overall activation between groups were reported. Activation differences between groups could lead to differences in FC. For example, lower activation may be associated with more noise in the data, which could translate to reduced FC.

      We thank the reviewer for noting the potential implications of overall activation differences on FC. In our analysis of the activation for words, we found no significant clusters showing a group difference between the deaf and hearing participants (p < .05, cluster-corrected for multiple comparisons) - we also added this information to the revised manuscript (lines 542-544). This suggests that the differences in FC observed are not confounded by variations in overall brain activation between the groups under these conditions.

      (4) Figure 2B shows higher FC for congenitally deaf individuals than normal-hearing individuals in the insula, supplementary motor area, and cingulate. These regions are all associated with task effort. If congenitally deaf individuals found the task harder (lower performance), then activation in these regions could be higher, in turn, leading to FC. A study using resting-state data could possibly have provided a clearer picture.

      We thank the Reviewer for pointing out the potential impact of task difficulty on FC differences observed in our study. As addressed in our response to comment #2, task accuracy and reaction times were incorporated as nuisance variables in our analysis. Further, these areas showed no difference in activation between the groups (see response to comment #3 above). Notably, the referred regions still showed higher FC in congenitally deaf individuals even when controlling for these performance differences. Additionally, these findings are consistent with results from studies using resting-state data in deaf populations, further validating our observations. Specifically, using resting-state data, Andin & Holmer (2022) have shown higher FC in the deaf (compared to hearing individuals) from auditory regions to the cingulate cortex, insular cortex, cuneus and precuneus, supramarginal gyrus, supplementary motor area, and cerebellum. Moreover, Ding et al. (2016) have shown higher FC in the deaf between the STG and the anterior insula and dorsal anterior cingulate cortex. This suggests that the observed FC differences are likely reflective of genuine neuroplastic adaptations rather than mere artifacts of task difficulty. Although we wish we could augment our study with resting-state data analyzed similarly, we could not at present acquire or access such a dataset. We acknowledge this limitation of our study (lines 442-451) in the revised manuscript and intend to confirm that similar results are found with resting-state data in the future.

      (5) The correlation between the FC map and the FC variability map is 0.3. While significant using permutation testing, the correlation is low, and it is not clear how great the overlap is.

      We acknowledge that the correlation coefficient of 0.3, while statistically significant, indicates a moderate overlap. It's also worth noting that, using our new models that include task performance as a nuisance variable, this value has decreased somewhat, to 0.24 (which is still highly significant). It is important to note that the visual overlap between the maps is not a good estimate of the correlation, which was performed on the unthresholded maps, to estimate the link not only between the most significant peaks of the effects, but across the whole brain patterns. This correlation is meant to suggest a trend rather than a strong link, but especially due to its consistency with the findings in blindness, we believe this observation merits further investigation and discussion. As such, we kept it in the revised manuscript while moderating our claims about its strength.
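      For illustration, a generic version of such a whole-brain map correlation with a permutation-based null (including an optional smoothing step for the permuted map, in the spirit of the procedure discussed in the recommendations below) could look like the sketch here. It treats each map as a flattened vector and uses a simple 1D smoothing stand-in; it is an assumption-laden illustration, not the exact pipeline used in the paper.

      ```python
      # Generic sketch: correlation between two unthresholded maps, with a permutation
      # null built by shuffling one map and optionally re-smoothing it to reintroduce
      # some spatial autocorrelation.
      import numpy as np
      from scipy.ndimage import gaussian_filter1d

      def map_correlation_pvalue(map_a, map_b, n_perm=10000, smooth_sigma=None, seed=0):
          rng = np.random.default_rng(seed)
          observed = np.corrcoef(map_a, map_b)[0, 1]
          null = np.empty(n_perm)
          for i in range(n_perm):
              shuffled = rng.permutation(map_b)
              if smooth_sigma:                                  # optional re-smoothing of the permuted map
                  shuffled = gaussian_filter1d(shuffled, smooth_sigma)
              null[i] = np.corrcoef(map_a, shuffled)[0, 1]
          p = (np.sum(np.abs(null) >= abs(observed)) + 1) / (n_perm + 1)  # two-sided permutation p
          return observed, p
      ```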

      Reviewer #1 (Recommendations For The Authors):

      (1) Page 4: Does auditory cortex FC variability..." FC is not yet defined.

      Corrected, thanks.

      (2) Page 4: "It showed lower variability..." What showed this?

      Clarified, thanks.

      (3) Page 11: "highlining the importance" should read "highlighting the importance".

      Corrected, thanks.

      (4) Page 11: Do you really mean to suggest functional connectivity does not vary as a function of task? This would not seem well supported.

      We do not suggest that FC doesn’t vary as a function of task, and have revised this section (lines 447-451). 

      (5) Page 12: "there should not to be" should read "there should not be".

      Corrected, thanks.

      (6) Page 12: "and their majority" should read "and the majority".

      Corrected, thanks.

      Reviewer #2 (Recommendations For The Authors):

      Major

      (1) Although this is a lot of work, I nonetheless have another suggestion on how to test if your results are strong and robust. Perhaps you could analyze your data using an ROI/graph-theory approach. I am not an expert in graph theory analysis, but for sure there is a simple and elegant statistic that captures the variability of edge strength within a population. This approach could not only validate your results with an independent analysis and give the audience more confidence in their robustness, but it could also provide an estimate of the effect size you found. That is, it could express in hard numbers how much more variable the connections from auditory cortex ROIs are, in comparison to the rest of the brain, in the deaf population relative to the hearing population.

      We thank the Reviewer for suggesting the use of graph theory as a method to further validate our findings. While we see the potential value in this approach, we believe it may be beyond the scope of the current paper and merits a full exploration of its own, which we hope to do in the future. However, we understand the importance of showing the uniqueness of the connectivity of the auditory cortex ROI as compared to the rest of the brain. So, in order to bolster our results, we conducted an additional analysis using control regions of interest (ROIs). Specifically, we calculated the inter-individual variability using all ROIs from the CONN Atlas (except auditory and language regions) as control seed regions for the FC. We showed that the variability of connectivity from the auditory cortex is uniquely increased in deafness, as compared to these control ROIs (Figure S1). This additional analysis supports the specificity of our findings to the auditory cortex in the deaf population. We aim to integrate more analytic approaches, including graph theory methods, in our future work.

      Minor

      (1) Some citations display the initial of the author in addition to the last name, unless there is something I don't know about the citation system, the initial shouldn't be there.

      This is due to the citation style we're using (APA 7th edition, as suggested by eLife), which requires including the first author's initials in all in-text citations when citing multiple authors with the same last name.  

      Reviewer #3 (Recommendations For The Authors):

      (1) I recommend that the authors provide behavioral data and results for overall neural activation.

      Thanks. We have added these to the revised manuscript. Specifically, we report that there was no difference in the activation for words (p < .05, cluster-corrected for multiple comparisons) between the deaf and hearing participants. Further, we report the behavioral averages for accuracy and reaction time for each group, and have now used these individual values explicitly as nuisance variables in the revised analyses.

      (2) For the correlation between FC and FC variability, it seemed a bit odd that the permuted data were treated additionally (through Gaussian smoothing). I understand the general logic (i.e., to reintroduce smoothness), but this approach provides more smoothing to the permutation than the original data. It is hard to know what this does to the statistical distribution. I recommend using a different approach or at least also reporting the p-value for non-smoothed permutation data.

      In response to this suggestion and to ensure transparency in our results, we have now also included the p-value for the non-smoothed permutation data in our revised manuscript (still highly significant; p < .0001). Thanks for this proposal.

      (3) For the map comparison, a plot with different colors, showing the FC map, the FC variability map, and one map for the overlap on the same brain may be helpful.

      We thank the Reviewer for their suggestion to visualize the overlap between the maps. However, we performed the correlation analysis using the unthresholded maps, as mentioned in the methods section of our manuscript, specifically to estimate the link not only between the most significant peaks of the effects, but across the whole brain patterns. This is why the maps displayed in the figures, which are thresholded for significance, may not appear to match perfectly, and may actually obscure the correlation across the brain. This methodological detail is crucial for interpreting the relationship and overlap between these maps accurately but also explains why the visualization of the overlap is, unfortunately, not very informative.

    1. There's, like, add - plus mambo, bossa nova, house, et cetera, et cetera, et cetera. And I think, like - this, to me, is kind of, like, his victory lap album in a way, where it's like, there were no expectations in many ways around this album. Like, he didn't have to adhere to anything or anyone. He is, like, the star. Felix and I - Felix Contreras, host of Alt.Latino - we get into kind of arguments, soft arguments, about this a lot because I'm like, Felix, he's not Latin pop star Bad Bunny anymore. Like, he is just the pop star. We don't even need to talk anymore about crossing over and all these different things that we often talk about with big Latin artists. Like, he is the crossover. He's done it. He can sing in Spanish, he can sing in English, he can play with whatever genres he want and people will listen.

      Purpose: This part of the podcast, like the whole podcast, is mainly entertainment with sprinkles of information about the success of Bad Bunny. Though they provide facts and data, the nature of the podcast is for listeners to find entertainment in talking about music artists and their journeys.

    1. When looking for songs in the library, it's very important to answer a few questions to filter. Not just to save storage space, but also to ensure the quality of one's library.

      Chris M. recommends a SHORT LIST... Music you come across that you like and think about downloading, you put in there. Then wait for 24h before listening to it again. Finally, ask 3 questions before deciding to add it:
      1) Do I still like it?
      2) Would I play it out?
      3) Would I pay money for it?

    1. The letter isn’t cold in its tone but burning hot, and it makes me think that the anonymous factor of the letter allowed the FBI to inject the full extent of their racism into the words.

      It's interesting how you described the letter as 'burning hot' rather than cold, as it really emphasizes the intensity and aggressiveness of the language used. It makes it clear that the FBI wasn't just being threatening; they were openly hostile in their attempt to intimidate MLK.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Summary:

      Previous work demonstrated a strong bias in the percept of an ambiguous Shepard tone as either ascending or descending in pitch, depending on the preceding contextual stimulus. The authors recorded human MEG and ferret A1 single-unit activity during presentation of stimuli identical to those used in the behavioral studies. They used multiple neural decoding methods to test if context-dependent neural responses to ambiguous stimulus replicated the behavioral results. Strikingly, a decoder trained to report stimulus pitch produced biases opposite to the perceptual reports. These biases could be explained robustly by a feed-forward adaptation model. Instead, a decoder that took into account direction selectivity of neurons in the population was able to replicate the change in perceptual bias.

      Strengths:

      This study explores an interesting and important link between neural activity and sensory percepts, and it demonstrates convincingly that traditional neural decoding models cannot explain percepts. Experimental design and data collection appear to have been executed carefully. Subsequent analysis and modeling appear rigorous. The conclusion that traditional decoding models cannot explain the contextual effects on percepts is quite strong.

      Weaknesses:

      Beyond the very convincing negative results, it is less clear exactly what the conclusion is or what readers should take away from this study. The presentation of the alternative, "direction aware" models is unclear, making it difficult to determine if they are presented as realistic possibilities or simply novel concepts. Does this study make predictions about how information from auditory cortex must be read out by downstream areas? There are several places where the thinking of the authors should be clarified, in particular, around how this idea of specialized readout of direction-selective neurons should be integrated with a broader understanding of auditory cortex.

      While we have not used the term "direction aware", we think the reviewer refers generally to the capability of our model to use a cell's direction selectivity in the decoding. In accordance with the reviewer's interpretation, we did indeed mean that the decoder assumes that a neuron has not only a preferred frequency, but also a preferred direction of change in frequency (ascending/descending), and this is what we use to demonstrate that the decoding aligns with the human percept. We have adapted the text in several places to clarify this, in particular expanding the description in the Methods substantially.
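      To illustrate the general idea only (the actual decoder and its weighting are described in the Methods of the paper; the names, weighting scheme, and toy numbers below are illustrative assumptions), a population read-out in which each neuron's response is weighted by a directionality index might look like this:

      ```python
      # Conceptual sketch of a direction-weighted population read-out: each neuron
      # "votes" with its response to the ambiguous tone, weighted by a per-neuron
      # directionality index (+1 = prefers ascending pitch steps, -1 = descending).
      import numpy as np

      def decode_step_direction(responses: np.ndarray, directionality: np.ndarray) -> float:
          """Signed population vote; > 0 is read out as 'ascending'."""
          return float(np.dot(responses, directionality) / (responses.sum() + 1e-12))

      # Toy example: if descending-preferring cells are adapted (respond weakly),
      # the population vote tips toward 'ascending'.
      rates = np.array([5.0, 2.0, 8.0, 1.0])
      di = np.array([+0.8, -0.6, +0.3, -0.9])
      print("ascending" if decode_step_direction(rates, di) > 0 else "descending")
      ```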

      Reviewer #2 (Public Review):

      The authors aim to better understand the neural responses to Shepard tones in auditory cortex. This is an interesting question as Shepard tones can evoke an ambiguous pitch that is manipulated by a proceeding adapting stimulus, therefore it nicely disentangles pitch perception from simple stimulus acoustics.

      The authors use a combination of computational modelling, ferret A1 recordings of single neurons, and human EEG measurements.

      Their results provide new insights into neural correlates of these stimuli. However, the manuscript submitted is poorly organized, to the point where it is near impossible to review. We have provided Major Concerns below. We will only be able to understand and critique the manuscript fully after these issues have been addressed to improve the readability of the manuscript. Therefore, we have not yet reviewed the Discussion section.

      Major concerns

      Organization/presentation

      The manuscript is disorganized and therefore difficult to follow. The biggest issue is that in many figures, the figure subpanels often do not correspond to the legend, the main body, or both. Subpanels described in the text are missing in several cases.

      We have gone linearly through the text and checked that all figure subpanels are referred to in the text and the legend. As far as we can tell, this was already the case for all panels, with the exception of two subpanels of Fig. 5.

      Many figure axes are unlabelled.

      We have carefully checked the axes of all panels and all but two (Fig. 5D) were labeled. As is customary, certain panels inherit the axis label from a neighboring panel, if the label is the same, e.g. subpanels in Fig. 6F or Fig. 5E, which helps to declutter the figure. We hope that with this clarification, the reviewer can understand the labels of each panel.

      There is an inconsistent style of in-text citation between figures and the main text. The manuscript contains typos and grammatical errors. My suggestions for edits below therefore should not be taken as an exhaustive list. I ask the authors to consider the following only a "first pass" review, and I will hopefully be able to think more deeply about the science in the second round of revisions after the manuscript is better organized.

      While we are puzzled by the severity of issues that R2 indicates (see above, and R3 qualifies it as "well written", and R1 does not comment on the writing negatively), we have carefully gone through all specific issues mentioned by R2 and the other reviewers. We hope that the revised version of the paper with all corrections and clarifications made will resolve any remaining issues.

      Frequency and pitch

      The terms "frequency" and "pitch" seem to be used interchangeably at times, which can lead to major misconceptions in a manuscript on Shepard tones. It is possible that the authors confuse these concepts themselves at times (e.g. Fig 5), although this would be surprising given their expertise in this field. Please check through every use of "frequency" and "pitch" in this manuscript and make sure you are using the right term in the right place. In many places, "frequency" should actually be "fundamental frequency" to avoid misunderstanding.

      Thanks for pointing this out. We have checked every occurrence and modified where necessary.

      Insufficient detail or lack of clarity in descriptions

      There seems to be insufficient information provided to evaluate parts of this analysis, most critically the final pitch-direction decoder (Fig 6), which is a major finding. Please clarify.

      Thanks for pointing this out. We have extended the description of the pitch-direction decoder and highlighted its role for interpreting the results.

      Reviewer #3 (Public Review):

      Summary:

      This is an elegant study investigating possible mechanisms underlying the hysteresis effect in the perception of perceptually ambiguous Shepard tones. The authors make a fairly convincing case that the adaptation of pitch direction sensitive cells in auditory cortex is likely responsible for this phenomenon.

      Strengths:

      The manuscript is overall well written. My only slight criticism is that, in places, particularly for non-expert readers, it might be helpful to work a little bit more methods detail into the results section, so readers don't have to work quite so hard jumping from results to methods and back.

      Following this excellent suggestion, we have added more brief method sketches to the Results section, hopefully addressing this concern.

      The methods seem sound and the conclusions warranted and carefully stated. Overall I would rate the quality of this study as very high, and I do not have any major issues to raise.

      Thanks for your encouraging evaluation of the work.

      Weaknesses:

      I think this study is about as good as it can be with the current state of the art. Generally speaking, one has to bear in mind that this is an observational, rather than an interventional study, and therefore only able to identify plausible candidate mechanisms rather than making definitive identifications. However, the study nevertheless represents a significant advance over the current state of knowledge, and about as good as it can be with the techniques that are currently widely available.

      Thanks for your encouraging evaluation of our work. The suggestion of an interventional study has also been on our minds; however, this appears rather difficult, as it would require a specific subset of cells to be inhibited. The most suitable approach would likely be 2p imaging with holographic inhibition of a subset of cells (using ArchT, for example) that have a preference for one direction of pitch change, which should then bias the percept/behavior in the opposite direction.

      Reviewer #1 (Recommendations For The Authors):

      MAJOR CONCERNS

      (1) What is the timescale used to compute direction selectivity in neural tuning? How does it compare to the timing of the Shepard tones? The basic idea of up versus down pitch is clear, the intuition for the role of direction tuning and its relation to stimulus dynamics could be laid out more clearly. Are the authors proposing that there are two "special" populations of A1 neurons that are treated differently to produce the biased percept? Or is there something specific about the dynamics of the Shepard stimuli and how direction selective neurons respond to them specifically? It would help if the authors could clarify if this result links to broader concepts of dynamic pitch coding in general or if the example reported here is specific (or idiosyncratic) to Shepard tones.

      We propose that the findings here are not specific to Shepard tones. To the contrary, only basic properties of auditory cortex neurons, i.e. frequency preference, frequency-direction (i.e. ascending or descending) preference, and local adaptation in the tuning curve, suffice. Each of these properties has been demonstrated many times before, and we only verified this in the lead-up to the results in Fig. 6. While the same effects should be observable with pure tones, the lack of ambiguity in the perceived direction of a frequency step between pure tones would make them less noticeable there. Regarding the time-scale of the directional selectivity, we relied on the sequencing of tones in our paradigm, i.e. 150 ms spacing. The SSTRFs were discretized at 50 ms and include only the bins during the stimulus, not during the pause. The directional tuning, i.e. differences in the SSTRF above and below the preferred pitchclass for stimuli before the last stimulus, typically extended only one stimulus back in time. We have clarified this in more detail now, in particular in the added Methods section on the directional decoder.
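      As one possible formalization, offered purely for intuition and not necessarily the exact definition used in the Methods, a directionality index could be derived from the asymmetry of a cell's SSTRF around its preferred pitchclass in the stimulus bin preceding the current tone:

      ```latex
      % Illustrative definition (an assumption, not a quote from the Methods):
      % S_i(phi, t_{-1}) is cell i's SSTRF weight for pitchclass phi at the
      % preceding stimulus bin, and phi_i^* is its preferred pitchclass.
      \[
      \mathrm{DI}_i \;=\;
      \frac{\displaystyle\sum_{\phi < \phi_i^{*}} S_i(\phi, t_{-1})
            \;-\; \sum_{\phi > \phi_i^{*}} S_i(\phi, t_{-1})}
           {\displaystyle\sum_{\phi \neq \phi_i^{*}} \left| S_i(\phi, t_{-1}) \right|}
      \;\in\; [-1, 1]
      \]
      % DI_i > 0: the cell is driven more strongly when the preceding tone lay below
      % its preferred pitchclass, i.e. it prefers ascending steps; DI_i < 0 marks a
      % preference for descending steps.
      ```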

      (2) (p. 9) "weighted by each cell's directionality index ... (see Methods for details)" The direction-selective decoder is interesting and appears critical to the study. However, the details of its implementation are difficult to locate. Maybe Fig. 6A contains the key concepts? It would help greatly if the authors could describe it in parallel with the other decoders in the Methods.

      We have expanded the description of the decoder in the Methods as the reviewer suggests.

      LESSER CONCERNS

      p. 1. (L 24) "distances between the pitch representations...." It's not obvious what "distances" means without reading the main paper. Can some other term or extra context be provided?

      We have added a brief description here.

      p. 2. (L 26) "Shepard tones" Can the authors provide a citation when they first introduce this class of stimuli?

      Citation has been added.

      p. 3 (L 4) "direction selective cells" Please define or provide context for what has a direction. Selective to pitch changes in time?

      Yes, selective to pitch changes in time is what is meant. We have further clarified this in the text.

      p. 4 (L 9-19). This paragraph seems like it belongs in the Introduction?

      Given the concerns raised by R2 about the organization of the manuscript we prefer to keep this 'road-map' in the manuscript, as a guidance for the reader.

      p. 4 (L 32) "majority of cells" One might imagine that the overlap of the bias band and the frequency tuning curve of individual neurons might vary substantially. Was there some criterion about the degree of overlap for including single units in the analysis? Does overlap matter?

      We are not certain which analysis the reviewer is referring to. Generally, cells were not excluded based on the overlap between a particular Bias band and their (Shepard) tuning curve. There are several reasons for this: the bias was located in 4 different, overlapping Shepard tone regions, and all sounds were Shepard tones. Therefore, the (Shepard) tuning curve of every cell overlapped with one or multiple of the Biases. For the decoding analysis, all cells were included, as both a response and a lack of response contribute to the decoding. If the reviewer is referring only to the analysis of whether a cell adapts, then the same argument applies as above, i.e. this was an average over all Bias sequences, and therefore every responding cell was driven to respond by the Bias, and it was therefore possible to assess whether it adapted its response for different positions inside the Bias. We acknowledge that the limited randomness of the Bias sequences, in combination with the specific tuning of the cells, could in a few cases create response patterns over time that are not indicative of the actual behavior for repeated stimulation; however, since the results are rather clear, with 91% of cells adapting, we do not think this would significantly change the conclusions.

      p. 5 (L 17) "desynchronization ... behaving conditions" The logic here is not clear. Is less desynchronization expected during behavior? Typically, increased attention is associated with greater desynchronization.

      Yes, we reformulated the sentence to: While this difference could be partly explained by desynchronization which is typically associated with active behavior or attention [30], general response adaptation to repeated stimuli is also typical in behaving humans [31].

      p. 7 (L 5) "separation" is this a separation in time?

      Yes, added.

      p. 7 (L 33) "local adaptation" The idea of feedforward adaptation biasing encoding has been proposed before, and it might be worth citing previous work. This includes work from Nelken specifically related to SSA. Also, this model seems similar to the one described in Lopez Espejo et al (PLoS CB 2019).

      Thanks for pointing this out. We think, however, that neither of these publications suggested this very narrow way of biasing, which we consider biologically implausible. We have therefore not added either of these citations.

      p. 11 (L. 17) The cartoon in Fig. 6G may provide some intuition, but it is quite difficult to interpret. Is there a way to indicate which neuron "votes" for which percept?

      This is an excellent idea, and we have added now the purported perceptual relation of each cell in the diagram.

      p. 12 (L. 8). "classically assumed" This statement could benefit from a citation. Or maybe "classically" is not the right word?

      We have changed 'classically' to 'typically', and now cite classical works from Deutsch and Repp. We think this description makes sense, as the whole concept of bistable percepts has been interpreted as being equidistant (in added or subtracted semitone steps) from the first tone, see e.g. Repp 1997, Fig.2.

      p. 12 (L. 12) "...previous studies" of Shepard tone percepts? Of physiology?

      We have modified it to 'Relation to previous studies of Shepard tone percepts and their underlying physiology", since this section deals with both.

      p. 12 (L. 25) "compatible with cellular mechanisms..." This paragraph seems key to the study and to Major Concern 1, above. What are the dynamics of the task stimuli? How do they compare with the dynamics of neural FM tuning and previously reported studies of bias? And can the authors be more explicit in their interpretation - should direction selective neurons respond preferentially to the Shepard tone stimuli themselves? And/or is there a conceptual framework where the same neurons inform downstream percepts of both FM sweeps and both normal (unbiased) and biased Shepard tones?

      The reviewer raises a number of different questions, which we address below:

      - Dynamics of the task stimuli in relation to previously reported cellular biasing: The timescales tested in the studies mentioned are similar to what we used in our bias, e.g. Ye et al 2010 used FM sweeps that lasted for up to 200ms, which is quite comparable to our SOA of 150ms.

      - Preferred responses to Shepard tones: no, we do not think that there should be preferred responses to Shepard tones, but rather that responses to Shepard tones can be thought of as the combined responses to the constituent tones.

      - Conceptual framework where the same neurons inform about FM sweeps and both normal (unbiased) and biased Shepard tones: Our perspective on this question is as follows: To our knowledge, the classical approach to population decoding in the auditory system, i.e. weighting each neuron based on its preferred frequency, has not been directly demonstrated to be the read-out used inside the brain, and certainly not demonstrated to be the only read-out in all areas of the brain that receive input from the auditory cortex. Rather, it has achieved its credibility by being linked directly with animal performance or with a match to the presented stimuli. However, these approaches were usually geared towards a representation that can be estimated based on constituent frequencies. Additional response properties of neurons, such as directional selectivity, have been documented and analyzed before, but have not been used for explaining the percept. We agree that our use of this cellular response preference in the decoding implicitly assumes that the brain could utilize it as well; however, this seems just as likely or unlikely as the use of the preferred frequency of a neuron. Therefore, we do not think that this decoding is any more speculative than the classical decoding. In both cases, subsequent neurons would have to implicitly 'know' the preference of the input neuron and weigh its input correspondingly.

      We have added all the above considerations to the discussion in an abbreviated form.

      p. 15 (L. 15). Is there a citation for the drive system?

      There is no publication, but an old repository, where the files are available, which we cite now: https://code.google.com/archive/p/edds-array-drive/

      p. 16 (L. 24) "position in an octave" It is implied but not explicitly stated that the Shepard tones don't contain the fundamental frequency. Can the authors clarify the relationship between the neural tuning band and the bands of the stimulus. Did a single stimulus band typically fall in a neuron's frequency tuning curve? If not 1, how many?

      Yes, it is correct that the concept of fundamental frequency does not cleanly apply to Shepard tones: a Shepard tone is composed of octave-spaced pure tones under an amplitude envelope (across frequencies), with the lowest constituent tone placed outside the hearing range of the animal. Therefore, one or more constituent tones of the Shepard tone can fall into the tuning curve of a neuron and contribute to driving it (or inhibiting it, if they fall within an inhibitory region of the tuning curve). The number of constituent tones that fall within the tuning curve depends on the tuning width of the neuron. The distribution of tuning widths to Shepard tones is shown in Fig. S1E, which indicates that many neurons had rather narrow tuning (close to the center), but many were also tuned widely, indicating that they would be stimulated by multiple constituent tones of the Shepard tone. As the tuning bandwidth (Q30: 30 dB above threshold) of most cortical neurons in the ferret auditory cortex is below 1 (see e.g. Bizley et al., Cerebral Cortex, 2005, Fig. 12), typically not more than 1 tone fell into the tuning curve of a neuron. However, we also observed multimodal tuning curves with respect to Shepard tones, which suggests that some neurons were stimulated by 2 or more constituent tones (again consistent with the existence of more broadly tuned neurons; see same citation). We have added this information partly to the manuscript in the caption of Fig. S1E.
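      Since a Shepard tone is simply a stack of octave-spaced pure tones under a fixed spectral envelope, a minimal synthesis sketch may help readers build intuition. The envelope shape, frequency range, and duration below are arbitrary illustrative choices, not the stimulus parameters used in the experiments.

      ```python
      # Sketch of Shepard tone synthesis: sum octave-spaced sinusoids whose amplitudes
      # follow a Gaussian envelope on a log-frequency axis.
      import numpy as np

      def shepard_tone(pitchclass_hz=440.0, fs=44100, dur=0.15,
                       f_min=30.0, f_max=16000.0, env_center_log2=9.0, env_sigma=2.0):
          t = np.arange(int(fs * dur)) / fs
          tone = np.zeros_like(t)
          f = pitchclass_hz
          while f > f_min:                 # walk down to the lowest octave in range
              f /= 2.0
          f *= 2.0
          while f < f_max:                 # sum octave-spaced components
              amp = np.exp(-0.5 * ((np.log2(f) - env_center_log2) / env_sigma) ** 2)
              tone += amp * np.sin(2 * np.pi * f * t)
              f *= 2.0
          return tone / np.max(np.abs(tone))   # normalize to unit peak amplitude

      wave = shepard_tone(440.0)
      ```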

      p. 17 (L. 32). "Fig 4" Correct figure ref? This figure appears to be a schematic rather than one displaying data.

      Thanks for pointing this out, changed to Fig. 5.

      p. 18 (L. 25). "assign a pitchclass" Can the authors refer to a figure illustrating this process?

      Added.

      p. 19 (L. 17). Is mu the correct symbol?

      Thanks. We changed it to phi_i, as in the formula above.

      p. 19 (L 19). "convolution" in time? Frequency?

      Thanks for pointing this out, the term convolution was incorrect in this context. We have replaced it by "weighted average" and also adapted and simplified the formula.

      p. 19 (L 25) "SSTRF" this term is introduced before it is defined. Also it appears that "SSTRF" and "STRF" are sometimes interchanged.

      Apologies, we have added the definition, and also checked its usage in each location.

      p. 23 (Fig 2) There is a mismatch between panel labels in the figure and in the legend. Bottom right panel (B3), what does time refer to here?

      Thanks for pointing these out, both fixed.

      p. 24 (L 23) "shifts them away" away from what?

      We have expanded the sentence to: "After the bias, the decoded pitchclass is shifted from their actual pitchclass away from the biased pitchclass range ... "

      p. 25 (L 7) "individual properties" properties of individual subjects?

      Thanks for pointing this out, the corresponding sentence has been clarified and citations added.

      p. 26 (L 20) What is plotted in panel D? The average for all cells? What is n?

      Yes, this is an average over cells, the number of cells has now been added to each panel.

      p. 28 (L 3) How to apply the terms "right" "right" "middle" to the panel is not clear. Generally, this figure is quite dense and difficult to interpret.

      We have changed the caption of Panel A and replaced the location terms with the symbols, which helps to directly relate them to the figure. We have considered different approaches of adding or removing content from the figure to help make it less dense, but that all did not seem to help. For lack of better options we have left it in its current form.

      MINOR/TYPOS

      p. 3 (L 1) "Stimulus Specific Adaptation" Capitalization seems unnecessary

      Changed.

      p. 4 (L 14) "Siple"

      Corrected.

      p. 9 (L 10) "an quantitatively"

      Corrected

      p. 9 (L 20) "directional ... direction ... directly ... directional" This is a bit confusing as directseems to mean several different things in its different usages.

      We have gone through these sentences, and we think the terms are now more clearly used, especially since the term 'direction' occurs in several different forms, as it relates to different aspects (cells/percept/hypothesis). Unfortunately, some repetition is necessary to maintain clarity.

      Reviewer #2 (Recommendations For The Authors):

      Detailed critique

      Stimuli

      It would be very useful if the authors could provide demos of their stimuli on a website. Many readers will not be familiar with Shepard tones and the perceptual result of the acoustical descriptions are not intuitive. I ended up coding the stimuli myself to get some intuition for them.

      We have created some sample tones and sequences and uploaded them with the revision as supplementary documents.

      Abstract

      P1 L27 'pitch and...selective cells' - The authors haven't provided sufficient controls to demonstrate that these are "pitch cells" or "selective" to pitch direction. They have only shown that they are sensitive to these properties in their stimuli. Controls would need to be included to ensure that the cells aren't simply responding to one frequency component in the complex sound, for example. This is not really critical to the overall findings, but the claim about pitch "selectivity" is not accurate.

      Fair point. We have removed the word 'selective' in both occurrences.

      Introduction

      P2 L14-17: I do not follow the phonetic example provided. The authors state that the second syllable of /alga/ and /arda/ are physically identical, but how is this possible that ga = da? The acoustics are clearly different. More explanation is needed, or a correction.

      Apologies for the slightly misleading description; it has now been corrected to be in line with the original reference.

      P2,L26-27: Should the two uses of "frequency" be "F0" and "pitch" here? The tones are not separated in frequency by half an octave, but "separated in [F0]" by half an octave, correct? Their frequency ranges are largely overlapping. And the second 'frequency', which refers to the percept, should presumably be "pitch".

      Indeed. This is now corrected.

      P3 L2-6: Unclear at this point in the manuscript what is the difference between the 3 percepts mentioned: perceived pitch-change direction, Shepard tone pitches, and "their respective differences". (It becomes clear later, but clarification is needed here).

      We have tried a few reformulations; however, they tend to overload the introduction with details. We believe it is preferable to present the gist of the results here and the complete details later in the manuscript.

      P3 L6-7 What does it mean that the MEG and single unit results "align in direction and dynamics"? These are very different signals, so clarification is needed.

      We have phrased the corresponding sentence more clearly.

      Results

      Throughout: Choose one of 'pitch class', 'pitchclass', or 'pitch-class' and use it consistently.

      Done.

      P4L12 - would be helpful at this point to define 'repulsive effect'

      We have added another sentence to clarify this term.

      P4, L14 "simple"

      Done

      P4, L12 - not clear here what "repulsive influence" means

      See above.

      P4, L17 - alternative to which explanation? Please clarify. In general, this paragraph is difficult to interpret because we do not yet have the details needed to understand the terms used and the results described. In my opinion, it would be better to omit this summary of the results at the very beginning, and instead reveal the findings as they come, when they can be fully explained to the Reader.

      We agree, but we also believe that a rather general description here is useful for providing a roadmap to the results. However, we have added a half-sentence to clarify what is meant by alternative.

      P4 L30 - text says that cells adapt in their onset, sustained and offset responses, but only data for onset responses are shown (I think - clarification needed for fig 2A2). Supp figure shows only 1 example cell of sustained and offset, and in fact there is no effect of adaptation in the sustained response shown there.

      Regarding the effect of adaptation and whether it can be discerned from the supplementary figure: the responses shown there are for 10 repetitions of one particular Bias sequence. Since a cell's response depends on its tuning and on the specific sequence of Shepard tones in this Bias, adaptation cannot be assessed for a given cell from this display alone. We assess the level of adaptation by averaging over all Biases (similar to what is shown in Fig. 2A2) per cell and then fitting an exponential to the result, separately for each response type. The step direction of the exponential, relative to the spontaneous rate, is then used to assess the kind of adaptation. The vast majority of cells show adaptation. We have added this information to the Methods of the manuscript.
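      For concreteness, the per-cell fitting procedure described above could be sketched roughly as follows; the array layout, the single-exponential form, and the sign-based classification are illustrative assumptions on our part, not the authors' actual analysis code.

      ```python
      # Illustrative sketch of the per-cell adaptation estimate (assumed data layout).
      import numpy as np
      from scipy.optimize import curve_fit

      def exp_course(pos, r_inf, r_step, tau):
          # Response as a function of position in the Bias sequence.
          return r_inf + r_step * np.exp(-pos / tau)

      def classify_adaptation(resp, spont_rate):
          """resp: (n_bias_sequences, n_positions) responses of one cell for one
          response type (e.g. onset); spont_rate: its spontaneous firing rate."""
          mean_resp = resp.mean(axis=0) - spont_rate       # average over all Biases
          pos = np.arange(mean_resp.size)
          p0 = (mean_resp[-1], mean_resp[0] - mean_resp[-1], 3.0)
          (r_inf, r_step, tau), _ = curve_fit(exp_course, pos, mean_resp,
                                              p0=p0, maxfev=10000)
          # The step direction of the fitted exponential relative to the spontaneous
          # rate determines the label: a positive step decaying toward the asymptote
          # is counted here as adaptation.
          return "adapting" if r_step > 0 else "facilitating"
      ```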

      P4, L32 - please state the statistical test and criterion (alpha) used to determine that 91% of cells decreased their responses throughout the Bias sequence. Was this specifically for onset responses?

      Thanks for pointing this out; the test and p-value have been added. Adaptation was observed for onset, sustained and offset responses, in all cases with the vast majority of cells showing adapting behavior, although the onset responses adapted the most.

      P4 L36 - "response strength is reduced locally". What does "locally" mean here? Nearby frequencies?

      We have added a sentence here to clarify this question.

      Figure 1 - this appears to be the wrong version of the figure, as it doesn't match the caption or results text. It's not possible to assess this figure until these things are fixed. Figure 1A schematic of definition of f(diff) does not correspond to legend definition.

      As far as we can tell, it is all correct; only the resolution of the figure was rather low, which has now been improved.

      Fig 2 A2 - is this also onset responses only?

      Yes, added to the caption.

      Fig 2 A3 - add y-axis label. The authors are comparing a very wide octave band (5.5 octaves) to a much narrower band (0.5 octaves). Could this matter? Is there something special about the cut-off of 2.5 octaves in the 2 bands, or was this an arbitrary choice?

      Interesting question. Essentially, our stimulus design left us with only this choice, i.e. comparing the internal region of the Bias with its boundary region (the test tones). The internal region simply corresponds to the Bias, which is 5 st wide, so the range is given here as 2.5 st relative to its center, while the test tones lie at the boundary, 3 st from the center. The axis for the Bias was mislabelled and has now been corrected. The y-axis label matches the panel to the left, but has now been added to avoid any confusion.

      Fig 2A4 - does not refer to ferret single unit data, as stated in the text (p5L8). Nor does supp Fig2, as stated. Also, the figure caption does not match the figure.

      Apologies, this mislabelling was caused by an error in the code. We have corrected the labels, which also restored the recovery from the Bias sequence in the new Panel A4.

      P5 l9 - Figure 3 is not understandable at this point in the text, and should not be referred to here. There is a lot going on in Fig 3, and it isn't clear what you are referring to.

      Removed.

      P5 L12 - by Fig 2 B1, I assume you mean A4? Also, F2B1 shows only 1 subject, not 2.

      Yes, mislabeled by mistake, and corrected now.

      Fig2B2 -What is the y-axis?

      Same as in the panel to its left, added for clarity.

      Stimuli: why are tones presented at a faster rate to ferrets than to humans?

      The main reason is that the MEG response analysis requires more temporal spacing between tones than the single-unit analysis in the ferret brain.

      P5 L6 - there is no Fig 5 D2? I don't think it is a good idea to get the reader to skip so far ahead in the figures at this stage anyway, even if such a figure existed. It is confusing to jump around the manuscript

      Changed to 'see below'

      P5 L8 - There is no Figure 2A4, so I don't know whether this time constant is accurate.

      This was in reference to a panel that had been removed before, but we have added it back now.

      P5 L16: "in humans appears to be more substantial (40%) than for the average single units under awake conditions". One cannot directly compare magnitude of effects in MEG and single unit signals in this way and assume it is due to behavioural state. You are comparing different measures of neural activity, averaged over vastly different numbers of neurons, and recorded from different species listening to different stimuli (presentation rates).

      Yes, that's why the next sentence is: "However, comparisons between the level of adaptation in MEG and single neuron firing rates may be misleading, due to the differences in the signal measured and subsequent processing.", and all statements in the preceding sentences are phrased as 'appears' and 'may'. We think we have formulated this comparison with an appropriate level of uncertainty. Further, the main message here is that adaptation is taking place in both active and passive conditions.

      P5 L25 -I do not see any evidence regarding tuning widths in Fig s2, as stated in the text.

      Corrected to Fig. S1.

      P5 l26 - Do not skip ahead to Fig 5 here. We aren't ready to process that yet.

      OK, reference removed.

      P5 l27 - Do you mean because it could be tuning to pitch chroma, not height?

      Yes, that is a possible interpretation, although it could also arise from a combination of excitatory and inhibitory contributions across multiple octaves.

      P5 l33 - remove speculation about active vs passive for reasons given above.

      Removed.

      P6L2-6 'In the present...5 semitone step' - This is an incorrect interpretation of the minimal distance hypothesis in the context of the Shepard tone ambiguity. The percept is ambiguous because the 'true' F0 of the Shepard tones are imperceptibly low. Each constituent frequency of a single tone can therefore be perceived either as a harmonic of some lower fundamental frequency or as an independent tone. The dominant pitch of the second tone in the tritone pair may therefore be biased to be perceived at a lower constituent frequency (when the bias sequence is low) or at a higher constituent frequency (when the bias sequence is high). The text states that the minimal distance hypothesis would predict that an up-bias would make a tritone into a perfect fourth (5 semitones). This is incorrect. The MDH would predict that an up-bias would reduce the distance between the 1st tone in the ambiguous pair and the upper constituent frequency of the 2nd tone in the pair, hence making the upper constituent frequency the dominant pitch percept of the 2nd tone, causing an ascending percept.

      The reviewer here refers to a “minimal distance hypothesis”, which, without a literature reference, is hard for us to interpret fully. However, some responses are given below:

      - "The percept is ambiguous because the 'true' F0 of the Shepard tones are imperceptibly low." This statement appears to be based on some misconception: due to the octave spacing (rather than multiple/harmonics of a lowest frequency), the Shepard tones cannot be interpreted as usual harmonic tones would be. It is correct that the lowest tone in a Shepard tone is not audible, due to the envelope and the fact that it could in principle be arbitrarily small... hence, speaking about an F0 is really not well-defined in the case of a Shepard tone. The closest one could get to it would be to refer to the Shepard tone that is both in the audible range and in the non-zero amplitude envelope. But again, since the envelope is fading out the highest and lowest constituent tones, it is not as easy to refer to the lowest one as F0 (as it might be much quieter than the next higher constituent.

      - "The dominant pitch of the second tone in the tritone pair may therefore be biased to be perceived at a lower constituent frequency (when the bias sequence is low) or at a higher constituent frequency (when the bias sequence is high)." This may relate to some known psychophysics, but we are unable to interpret it with certainty.

      - "The text states that the minimal distance hypothesis would predict that an up-bias would make a tritone into a perfect fourth (5 semitones). This is incorrect." We are unsure how the reviewer reaches this conclusion.

      - "The MDH would predict that an up-bias would reduce the distance between the 1st tone in the ambiguous pair and the upper constituent frequency of the 2nd tone in the pair, hence making the upper constituent frequency the dominant pitch percept of the 2nd tone, causing an ascending percept." Again, in the absence of a reference to the MDH, we are unsure of the implied rationale. We agree that this is a possible interpretation of distance, however, we believe that our interpretation of distance (i.e. distances between constituent tones) is also a possible interpretation.

      Fig 4: Given that it comes before Figure 3 in the results text, these should be switched in order in the paper.

      Switched.

      PCA decoder: The methods (p18) state that the PCA uses the first 3 dimensions, and that pitch classes are calculated from the closest 4 stimuli. The results (P6), however, state that the first 2 principal components are used, and classes are computed from the average of 10 adjacent points. Which is correct, or am I missing something?

      Thanks for pointing this out; we have made this more concrete in the Methods: "The data were projected to the first three dimensions, which represented the pitch class as well as the position in the sequence of stimuli (see Fig. 4A for a schematic). As the position in the Bias sequence was not relevant for the subsequent pitch class decoding, we only focussed on the two dimensions that spanned the pitch circle." Regarding the number of stimuli that were averaged, this might be a slight misunderstanding: each Shepard tone was decoded/projected without averaging. However, to then assign an estimated pitch class, we first had to establish an axis (here going around the circle), where each position along the axis was associated with a pitch class. This was done by stepping in 0.5 semitone steps and finding the location in decoded space that corresponded to the median of the Shepard tones within +/- 0.25 st. To increase the resolution, this circular 'axis' of 24 points was then linearly interpolated to a resolution of 0.05 st. We have updated the text in the Methods accordingly. The mention of 10 points for averaging in the Results was correct, as there were 240 tones in all Bias stimuli and 24 bins in the pitch circle. The mention of an average over 4 tones in the Methods was a typo.
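      To make this readout concrete, a minimal sketch of the reference-axis construction and pitch-class assignment could look as follows; the variable names, bin handling, and interpolation details are our assumptions for illustration and are not taken from the authors' code.

      ```python
      # Illustrative sketch: build a circular pitch-class axis from decoded 2-D PC
      # coordinates and read out pitch classes (assumes every 0.5 st bin is populated).
      import numpy as np

      def build_pitch_axis(pc_coords, pitch_classes, step=0.5, halfwidth=0.25, fine=0.05):
          """pc_coords: (n_tones, 2) positions of the decoded Shepard tones in the two
          principal components spanning the pitch circle; pitch_classes: (n_tones,)
          true pitch classes in semitones [0, 12)."""
          centers = np.arange(0, 12, step)                    # 24 bins around the circle
          ref = np.empty((len(centers), 2))
          for i, c in enumerate(centers):
              d = np.abs((pitch_classes - c + 6) % 12 - 6)    # circular distance in st
              ref[i] = np.median(pc_coords[d <= halfwidth], axis=0)
          # Linear interpolation of the closed 24-point curve to 0.05 st resolution.
          fine_classes = np.arange(0, 12, fine)
          idx = fine_classes / step
          lo = np.floor(idx).astype(int) % len(centers)
          hi = (lo + 1) % len(centers)
          w = (idx - np.floor(idx))[:, None]
          fine_ref = (1 - w) * ref[lo] + w * ref[hi]
          return fine_ref, fine_classes

      def decode_pitch_class(point, fine_ref, fine_classes):
          """Assign the pitch class of the nearest point on the reference axis."""
          return fine_classes[np.argmin(np.linalg.norm(fine_ref - point, axis=1))]
      ```

      In this picture, the reference axis would be built from the decoded Bias tones, and the responses to the ambiguous pair would then be read out against it.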

      Fig 3A: axes of pink plane should be PC not PCA

      Done.

      Fig 3B: the circularity in the distribution of these points is indeed interesting! But what do the authors make of the gap in the circle between semitones 6-7? Is this showing an inherent bias in the way the ambiguous tone is represented?

      While we cannot be certain, we think that this represents an inhomogeneous sampling from the overall set of neural tuning preferences, and that if we had recorded more/all neurons, the circle would be complete and uniformly sampled (which it already nearly is; see Fig. 4C, which used to be Fig. 3C).

      Fig 3B (lesser note): It'd be preferable to replace the tint (bright vs. dark) differentiation of the triangles to be filled vs. unfilled because such a subtle change in tint is not easily differentiable from a change in hue (indicating a different variable in this plot) with this particular colour palette

      We have experimented with this suggestion, and it didn't seem to improve the clarity. However, we have changed the outline of the test-pair triangles to white, which now visually separates them better.

      P6 l32 - Please indicate if cross-validation was used in this decoder, and if so, what sort. Ideally, the authors would test on a held-out data set, or at least take a leave-one-out approach. Otherwise, the classifier may be overfit to the data, and overfitting would explain the exceptional performance (r=.995) of the classifier.

      Cross-validation was not used, as the purpose of the decoder here is to create a standard against which to compare the biased responses in the ambiguous pair, which were not used for training the decoder. We agree that if we instead used a cross-validated decoder (which would only apply to the local average used to establish the pitch class circle), the correlation would be somewhat lower; however, this is less relevant for the main question, i.e. the influence of the Bias sequence on the neural representation of the ambiguous pair. We have added this information to the corresponding section.

      Fig 3D: I understood that these pitch classifications shown by the triangles were carried out on the final ambiguous pair of stimuli. I thought these were always presented at the edges of the range of other stimuli, so I do not follow how they have so many different pitchclass values on the x-axis here.

      There were 4 Biases, centered at 0, 3, 6 or 9 semitones and covering [-2.5, 2.5] st relative to their centers. The edges of the Bias ranges (3 st away from their centers) therefore coincide with the centers of the other Biases; e.g. for the Bias centered at 3, the ambiguous pair would be a 0-6 or 6-0 step. Therefore there are 4 locations for the ambiguous tones on the x-axis of Fig. 4D (previously 3D).

      Figure 4: This demonstration of the ambiguity of Shepard pairs may be misleading. The actual musical interval is never ambiguous, as this figure suggests. Only the ascending vs descending percept is ambiguous. Therefore the predictions of the ferret A1 decoding (Fig 3D) and the model in Fig 5 are inconsistent with perception in two ways. One (which the authors mention) is the direction of the bias shift (up vs down). Another (not mentioned here) is that one never experiences a shift in the shepard tone at a fraction of a semitone - the musical note stays the same, and changes only in pitch height, not pitch chroma.

      We are unsure of the reviewer’s intention with this question. In particular, the second point is not clear to us: "...one (who?) never (in this experiment? in real life?) experiences a bias shift in the Shepard tone at a fraction of a semitone" (why is this relevant in the current experiment?). Pitch chroma would indeed be a possible replacement for pitch class, but the previous Shepard tone literature has referred to it as pitch class.

      P7 l12 - omit one 'consequently'

      Changed to 'Therefore'.

      P7 l24 - I encourage the authors to not use "local" and "global" without making it clear what space they refer to. One tends to automatically think of frequency space in the auditory system, but I think here they mean f0 space? What is a "cell close to the location of the bias"? Cells reside in the brain. The bias is in f0 space. The use of "local" and "global" throughout the manuscript is too vague.

      Agreed; the reference here was actually to the cell's preferred pitch class, not its physical location (which one might arguably be able to disambiguate, given the context). We have changed the wording and also checked the use of global/local throughout the manuscript. The main use of 'global/local' is now in reference to the range of adaptation, and it is properly introduced on first mention.

      P7 L26 -there is no Fig 5D1. Do you mean the left panel of 5D?

      Thanks. Changed.

      FigS3 is referred to a lot on p7-8. Should this be moved to the main text?

      The main reason we kept it in the supplement is that it is based on a more static model, which is intended to illustrate the consequences of different encoding schemes. To avoid confusing the reader about these two models, we prefer to keep it in the supplement, which - for an online journal - makes little difference, since the reader can jump ahead to this figure in the same way as to any other figure.

      Fig 5C, D - label x-axis.

      Added.

      Fig 5E - axis labels needed. I don't know what is plotted on x and y, and cannot see red and green lines in left plot

      Thanks for noticing this, colors corrected, axes labeled.

      Page 8 L3-15 - If I follow this correctly, I think the authors are confusing pitch and frequency here in a way that is fundamental to their model. They seem to equate tonotopic frequency tuning to pitch tuning, leading to confused implications of frequency adaptation on the F0 representation of complex sounds like Shepard tones. To my knowledge, the authors do not examine pure tone frequency tuning in their neurons in this study. Please clarify how you propose that frequency tuning like that shown in Fig 5A relates to representation of the F0 of Shepard tones. Or...are the authors suggesting these neural effects have little to do with pitch processing and instead are just the result of frequency tuning for a single harmonic of the Shepard tones?

      We agree that it is not trivial to describe this well while keeping the text uncluttered, in particular because tuning to stimulus frequency often contributes to the same neuron's tuning for pitch class, in a more or less straightforward way: for some narrowly tuned cells, the Shepard tuning is simply a reflection of their tuning to a single octave range of the constituent tones (see Fig. S1); for more broadly tuned cells, multiple constituent tones will contribute to the overall Shepard tuning, which can be additive, subtractive, or more complex. The assumption in our approach is that we can directly estimate the Shepard tuning to evaluate its consequences for the percept. While this may seem artificial, as Shepard tones do not typically occur in nature, the same argument could be made against pure tones, on which classical tuning curves and associated decodings are often based. Relating the Shepard tuning to the classical tuning would be an interesting study in itself, although arguably one relating the tuning of one artificial stimulus to another.

      Regarding the terminology of pitch, pitch class and frequency: the term pitch class is commonly used in the field of Shepard tones, and, as we indicated at the beginning of the Results, "the term pitch is used interchangeably with pitch class as only Shepard tones are considered in this study". We agree that the term pitch, which describes the perceptual convergence/construction of a tone height from a range of possible physical stimuli, needs to be separated from frequency, which is one contributor to, or basis for, the perception of a pitch. However, we think that the term pitch can, despite its perceptual origin, also be associated with neural responses, in order to investigate the neural origin of the pitch percept. At the same time, the present study is not designed to study pitch encoding per se, as this would require a variety of stimuli leading to consistent pitch percepts. Therefore, pitch (class) is here mainly used as a term to describe the neural responses to Shepard tones, based on the previous literature and on the fact that Shepard tones are composite stimuli that lead to a pitch percept. The last sentence has been added to the manuscript for clarity.

      P7-9: I wasn't left with a clear idea of how the model works from this text. I assume you have layers of neurons tuned to frequency or f0 (based on the real data?), which are connected in some way to produce some sort of output when you input a sound? More detail is needed here. How is the dynamic adaptation implemented?

      The detailed description of the model can be found in the Methods section. We have gone through the corresponding paragraph and have tried to clarify the description of the model by introducing a high-level description and the reference to the corresponding Figure (Fig. 5A) in the Results.

      Fig6A: Figure caption can't be correct. In any case, these equations cannot be understood unless you define the terms in them.

      We have clarified the description in the caption.

      Fig 6/directionality analysis: Assuming that the "F" in the STRFs here is Shepard tone f0, and not simple frequency?

      We have changed the formula in the caption and the axis labels now.

      Fig 6C - y-axis values

      In the submission, these values were left out on purpose, as the result has an arbitrary scale; only whether it is larger or smaller than 0 counts for the evaluation of the decoded directionality (at the current level of granularity). An interesting refinement would be to relate the decoded values to animal performance. We have now scaled the values arbitrarily to fit within [-1,1], but we would like to emphasize that only their relative scale matters here, not their absolute scale.

      Fig 6E - can't both be abscissa (caption). I might be missing something here, but I don't see the "two stripes" in the data that are described in the caption.

      Thank you. The typo is fixed. The stripes are most clearly visible in the right panel of Fig. 6E, red and blue, diagonally from top left to bottom right.

      Fig 6G -I have no idea what this figure is illustrating.

      This panel is described in the text as follows: "The resulting distribution of activities in their relation to the Bias is, hence, symmetric around the Bias (Fig. 6G). Without prior stimulation, the population of cells is unadapted and thus exhibits balanced activity in response to a stimulus. After a sequence of stimuli, the population is partially adapted (Fig. 6G right), such that a subsequent stimulus now elicits an imbalanced activity. Translated concretely to the present paradigm, the Bias will locally adapt cells. The degree of adaptation will be stronger, if their tuning curve overlaps more with the biased region. Adaptation in this region should therefore most strongly influence a cell’s response. For example, if one considers two directional cells, an up- and a down-selective cell, cocentered in the same frequency location below the Bias, then the Bias will more strongly adapt the up-cell, which has its dominant, recent part of the SSTRF more inside the region of the Bias (Fig. 6G right). Consistent with the percept, this imbalance predicts the tone to be perceived as a descending step relative to the Bias. Conversely, for the second stimulus in the pair, located above the Bias, the down-selective cells will be more adapted, thus predicting an ascending step relative to the previous tone."

      I might be just confused or losing steam at this point, but I do not follow what has been done or the results in Fig 6 and the accompanying text very well at all. Can this be explained more clearly? Perhaps the authors could show spike rate responses of an example up-direction and down-direction neuron? Explain how the decoder works, not just the results of it.

      We agree that we are presenting something new here. However, it is conceptually not very different from decoding based on preferred frequencies. We have attempted to provide two illustrations of how the decoder works (Fig. 6A) and how it then leads to the percept, using prototypical examples of cellular SSTRFs (Fig. 6G). We have also added a complete but accessible description to the Methods section. Showing firing rates of example neurons would unfortunately not be very telling, given the usual variability of neural responses and the fact that our paradigm did not include many repetitions (but instead many conditions), which would be needed to average out the variability at the single-neuron level.

      Discussion - I do not feel I can adequately critique the author's interpretation of the results until I understand their results and methods better. I will therefore save my critique of the discussion section for the next round of revisions after they have addressed the above issues of disorganization and clarity in the manuscript.

      We hope that the updated version of the manuscript provides the reviewer now with this possibility.

      Methods

      P15L7 - gender of human subjects? Age distribution? Age of ferrets?

      We have added this information.

      P16L21 - What is the justification for randomizing the phase of the constituent frequencies?

      The purpose of the randomization was to prevent idiosyncratic phase relationships for particular Shepard tones, which, if non-randomized, would depend in an orderly fashion on the included base frequencies and could have contributed to shaping the percept of each Shepard tone in a way that was only partly determined by its pitch class. This has been added to the section.
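      As an illustration of the kind of phase randomization meant here, a minimal sketch of a Shepard tone with octave-spaced constituents and random starting phases is given below; the sampling rate, Gaussian spectral envelope, and component count are assumptions for illustration and do not reproduce the exact stimuli used in the study.

      ```python
      # Illustrative Shepard tone with randomized constituent phases (assumed parameters).
      import numpy as np

      def shepard_tone(pitch_class, dur=0.1, fs=44100, base=27.5, n_oct=9,
                       env_center=5.0, env_width=2.0, rng=None):
          """pitch_class: pitch class in semitones [0, 12) relative to `base`."""
          rng = np.random.default_rng() if rng is None else rng
          t = np.arange(int(dur * fs)) / fs
          x = np.zeros_like(t)
          for k in range(n_oct):
              f = base * 2.0 ** (k + pitch_class / 12.0)                 # octave-spaced constituent
              amp = np.exp(-0.5 * ((k - env_center) / env_width) ** 2)   # spectral envelope
              phase = rng.uniform(0, 2 * np.pi)                          # randomized starting phase
              x += amp * np.sin(2 * np.pi * f * t + phase)
          return x / np.max(np.abs(x))
      ```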

      P17L6 - what are the 2 randomizations? What is being randomized?

      Pitch classes and position in the Bias sequence. Added to the section.

      P16 Shepard Tuning section - What were the durations of the tones and the time between tones within a trial?

      Thanks, added!

      Equations - several undefined terms in the equations throughout the manuscript.

      Thanks. We have gone through the manuscript and all equations and have introduced additional definitions where they had been missing.

      Reviewer #3 (Recommendations For The Authors):

      P3L10: "passive" and "active" conditions come totally out of the blue. Need introducing first. (Or cut. If adaptation is always seen, why mention the two conditions if the difference is not relevant here?)

      We have added an additional sentence in the preceding paragraph, that should clarify this. The reason for mentioning it is that otherwise a possible counter-argument could be made that adaptation does not occur in the active condition, which was not tested in ferrets (but presents an interesting avenue for future research).

      P3L14 "siple" typo

      Corrected.

      P4L1 "behaving humans" you should elaborate just a little here on what sort of behavior the participants engaged in.

      Thanks for pointing this out. We have clarified this by adding an additional sentence directly thereafter.

      P4 adaptation: I wonder whether it would be useful to describe the Bias condition a bit more here before going into the observations. The reader cannot know what to expect unless they jump ahead to get a sense of what the Bias looks like in the sense of how many stimuli are in it, and how similar they are to each other. Observations such as "the average response strength decreases as a function of the position in the Bias sequence" are entirely expected if the Bias is made up of highly repetitive material, but less expected if it is not. I appreciate that it can be awkward to have Methods after Results, but with a format like that, the broad brushstroke Methods should really be incorporated into the Results and only the tedious details should be reserved for the Methods to avoid readers having to jump back and forth.

      Agreed, we have inserted a corresponding description before going into the details of the results.

      Related to this (perhaps): Bottom of P4, top of P5: "significantly less reduced (33%, p=0.0011, 2 group t-test) compared to within the bias (Fig. 2 A3, blue vs. red), relative to the first responses of the bias" ... I am at a loss as to what the red and blue symbols in Fig 2 A3 really show, and I wonder whether the "at the edges" to "within the Bias" comparison were to make sense if at this stage I had been told more about the composition of the Bias sequence. Do the ambiguous ('target') tones also occur within the Bias? As I am unclear about what is compared against what I am also not sure how sound that comparison is.

      We have added an extended description of the Bias to the beginning of this section of the manuscript. For your reference: the Shepard tones that made up the ambiguous tones were not part of the Bias sequence, as they are located at 3st distance from the center of the Bias (above and below), while the Bias has a range of only +/- 2.5st.

      Fig 2: A4 B1 B2 labels should be B1 B2 B3

      Corrected.

      Fig 2 A2, A3: consider adjusting y-axis range to have less empty space above the data. In A3 in particular, the "interesting bit" is quite compressed.

      Done, though we kept the axes of A2 and A3 matched for better comparability.

      I am under the strong impression that the human data only made it into Fig 2 and that the data from Fig 3 onwards are animal data only. That is of course fine (MEG may not give responses that are differentiated enough to perform the sort of analyses shown in the later figures). But I do think that somewhere this should be explicitly stated.

      Yes, the reviewer's observation is correct. The decoding analyses could not be conducted on the human MEG data and were therefore not pursued further. The human data are included to demonstrate that the local adaptation, which is a key contributor to the two decoding models, is present even in humans and under active conditions. We now state this explicitly when starting the decoding analysis.

      P5L2 "bias" not capitalized. Be consistent.

      All changed to capitalized.

      P5L8 reference to Fig 2 A4: something is amiss here. From legend of Fig 2 it seems clear that panel A4 label is mislabeled B1. Maybe some panels are missing to show recovery rates?

      Apologies for this residual text from a previous version of the manuscript. We have gone through all references and corrected them.

      P6L7 comma after "decoding".

      Changed.

      Fig 3, I like this analysis. What would be useful / needed here though is a little bit more information about how the data were preprocessed and pooled over animals. Did you do the PCA separately for each animal, then combine, or pool all units into a big matrix that went into the PCA? What about repeat presentations? Was every trial a row in the matrix, or was there some averaging over repeats? (In fact, were there repeats??)

      Thanks for bringing up these relevant aspects, which were insufficiently detailed in parts of the manuscript. Briefly, cells were pooled across animals, and we only used cells that could meaningfully contribute to the decoding analysis, i.e. cells that had auditory responses and responded differently to different Shepard tones. Regarding the responses, as stated in the Methods, "Each stimulus was repeated 10 times", and we computed average responses across these repetitions. Single trials were not analyzed separately. We have added this information to the Methods and refer to it in the Results.
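      A minimal sketch of this pooling and repeat-averaging step, under an assumed array layout with placeholder data, is shown below; the animal counts, unit numbers, and Poisson surrogate responses are purely illustrative and not taken from the study.

      ```python
      # Illustrative sketch: pool units across animals, average over the 10 repetitions,
      # then project with PCA as in the decoding analysis (not the authors' code).
      import numpy as np
      from sklearn.decomposition import PCA

      def pooled_response_matrix(per_animal_responses):
          """per_animal_responses: list of arrays, one per animal, each shaped
          (n_units, n_stimuli, n_repeats). Returns (n_stimuli, n_units_total)."""
          pooled = np.concatenate(per_animal_responses, axis=0)  # stack units
          return pooled.mean(axis=2).T                           # average over repeats

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          # Two hypothetical animals with 30 and 40 units, 240 stimuli, 10 repeats.
          responses = [rng.poisson(5.0, (n, 240, 10)).astype(float) for n in (30, 40)]
          X = pooled_response_matrix(responses)         # (240, 70) repeat-averaged matrix
          pcs = PCA(n_components=3).fit_transform(X)    # one 3-D point per stimulus
          print(pcs.shape)                              # (240, 3)
      ```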

      Also, there doesn't appear to be a preselection of units. We would not necessarily expect all cortical neurons to have a meaningful "best pitch" as they may be coding for things other than pitch. Intuitively I suspect that, perhaps, the PCA may take care of that by simply not assigning much weight to units that don't contribute much to explained variance? In any event I think it should be possible, and would be of some interest, to pull out of this dataset some descriptive statistics on what proportion of units actually "care about pitch" in that they have a lot (or at least significantly more than zero) of response variance explained by pitch. Would it make sense to show a distribution of %VE by pitch? Would it make sense to only perform the analysis in Fig 3 on units that meet some criterion? Doing so is unlikely to change the conclusion, but I think it may be useful for other scientists who may want to build on this work to get a sense of how much VE_pitch to expect.

      We fully agree with the reviewer, which is why this information is already presented in Supplementary Fig. 1, which details the tuning properties of the recorded neurons. Overall, we recorded from 1467 neurons across all ferrets, of which 662 were selected for the decoding analysis based on their driven firing rate (i.e. whether they responded significantly to auditory stimulation) and on whether they showed a differential response to different Shepard tones. The thresholds for auditory responsiveness and tuning to Shepard tones were not very critical: setting the thresholds low led to quantitatively the same result, albeit with more noise, whereas setting them very high reduced the set of cells included in the analysis and eventually made the results less stable, as the remaining cells no longer covered the entire range of preferences for Shepard tones. We agree that the PCA-based preprocessing would also automatically exclude many of the cells that were already excluded by the more concrete criteria beforehand. We have added further information on this issue in the Methods section under the heading 'Unit selection'.

      P9 "tones This" missing period.

      Changed.

      P10L17 comma after "analysis"

      Changed.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Tleiss et al. demonstrate that while commensal Lactiplantibacillus plantarum freely circulate within the intestinal lumen, pathogenic strains such as Erwinia carotovora or Bacillus thuringiensis are blocked in the anterior midgut, where they are rapidly eliminated by antimicrobial peptides. This sequestration of pathogenic bacteria in the anterior midgut requires the Duox enzyme in enterocytes, and both TrpA1 and Dh31 in enteroendocrine cells. This effect induces muscle contraction, which is marked by the formation of TARM structures (thoracic alary-related muscles). This muscle contraction-related blocking happens early after infection (15 min). On the other hand, the clearance of bacteria is carried out by the IMD pathway, possibly through antimicrobial peptide production, while this pathway is dispensable for the blockage. Genetic manipulations impairing bacterial compartmentalization result in abnormal colonization of posterior midgut regions by pathogenic bacteria. Despite a functional IMD pathway, this ectopic colonization leads to bacterial proliferation and larval death, demonstrating the critical role of the anterior sequestration of bacteria in larval defense.

      This important work substantially advances our understanding of the process of pathogen clearance by identifying a new mode of pathogen eradication from the insect gut. The evidence supporting the authors' claims is solid and would benefit from more rigorous experiments.

      (1) The authors performed the experiments on Drosophila larvae. I wonder whether this model could extend to adult flies since they have shown that the ROS/TRPA1/Dh31 axis is important for gut muscle contraction in adult flies. If not, how would the authors explain the discrepancy between larvae and adults?

      We have linked the adult phenotype to the larval model to explore the ROS/TrpA1/Dh31 axis in both contexts.  As highlighted in the discussion, however, there are key behavioral differences between larvae and adult flies. Unlike larvae, which remain in the food environment, adult flies have the ability to move away. This difference could impact the relevance of gut muscle contraction and bacterial clearance mechanisms between the two stages. Specifically, in larvae, the rapid ejection of gut contents due to muscle contraction poses a unique risk: larvae may inadvertently re-ingest the expelled material within minutes, which could influence their immune defenses. We have clarified this distinction and our hypothesis in the final section of the discussion, as it emphasizes the adaptive nature of this mechanism in larvae.

      (2) The authors performed their experiments and proposed the models based on two pathogenic bacteria and one commensal bacterium at a relatively high bacterial dose. They showed that feeding Bt at 2 × 10^10 or Ecc15 at 4 × 10^8 did not induce a blockage phenotype.

      I wonder whether larvae die under conditions of enteric infection with low concentrations of pathogenic bacteria. 

      To address this, we have provided new data (Movie 5), in which larvae were fed a lower dose of Bt-GFP at 1.3 × 10^10 CFU/mL. In this video, we observe that when larvae ingest fewer bacteria, no blockage occurs, and the bacteria are able to reach the posterior midgut. As the bacterial load is lower, the fluorescence signal is weaker, but the movie clearly shows the excretion of bacteria. Importantly, under these conditions, no larval death was observed. These findings suggest that below a certain bacterial threshold, the pathogenicity is insufficient to: (1) trigger the blockage response, and (2) kill the larvae. In such cases, bacteria are likely eliminated through normal peristaltic movements rather than through the blockage mechanism described in our study.

      If larvae do not show mortality, what is the mechanism for resisting low concentrations of pathogenic bacteria? 

      As mentioned in our previous response, we hypothesize that the larvae’s ability to resist low concentrations of pathogenic bacteria is likely due to being below the threshold of virulence. At lower bacterial doses, the pathogenic load is insufficient to trigger the blockage mechanism or cause larval death. In these cases, it is probable that classical peristaltic movements of the gut efficiently eliminate the bacteria, preventing them from colonizing the posterior midgut or causing significant harm. Thus, the larvae rely on standard gut motility and immune mechanisms, rather than the blockage response, to clear lower doses of bacteria.

      Why is this model only applied to high-dose infections? 

      The reason this model primarily applies to high-dose infections is that lower concentrations of pathogenic bacteria do not trigger the blockage mechanism. As we mentioned in the manuscript, for low bacterial concentrations, where the GFP signal remains detectable, wild-type larvae are still able to resist live bacteria in the posterior part of the intestine.

      Regarding the bacterial doses used in our experiments, it's important to clarify that we calculate the bacterial load based on colony-forming units (CFU). In our setup, there are approximately 5 × 10^4 CFU per midgut. For each experiment, we prepare 500 µl of contaminated medium containing 4 × 10^10 CFU. Fifty larvae are placed into this 500 µl of medium, meaning each larva ingests around 5 × 10^4 CFU within one hour of feeding.

      This leads us to two key points:

      (1) Continuous feeding might trigger the blockage response even at lower doses, as extended exposure to bacteria could lead to higher accumulation within the gut.

      (2) Other defense mechanisms, such as the production of reactive oxygen species (ROS) or classical peristaltic movements, could be sufficient to eliminate lower bacterial doses (around 10^3 CFU or below).

      We also refer to the newly provided Movie 5, where larvae fed with Bt-GFP at 1.3 × 10^10 CFU/mL show no blockage at low ingestion levels and successfully eliminate the bacteria.

      (3) The authors claim that the lock of bacteria happens at 15 minutes while killing by AMPs happens 6-8 hours later. 

      Our CFU data indicate that it’s after 4 to 6 hours that the quantity of bacteria decreases. We fixed this in the text.

      What happened during this period? 

      During the 4 to 6-hour period, several defense mechanisms are activated. ROS play a bacteriostatic and bacteriolytic role, helping to control bacterial growth. Concurrently, the IMD pathway is activated, leading to the transcription, translation, and secretion of antimicrobial peptides. These AMPs exert both bacteriostatic and bacteriolytic effects, contributing to the eventual clearance of the pathogenic bacteria.

      More importantly, is IMD activity induced in the anterior region of the larval gut in both Ecc15 and Bt infection at 6 hours after infection? 

      We have provided new data (Supplementary Figure 6) that includes RT-qPCR analysis of the whole larval gut in wt, TrpA1- and Dh31- genetic backgrounds after feeding with Lp, Ecc15, Bt, or yeast only. We monitored the expression of three different AMP-encoding genes and found that while AMP expression varied depending on the food content, there were no significant differences between the genotypes tested.

      Additionally, we included new imaging data (Supplementary Figure 11) from AMP reporter larvae (Dpt-Cherry) fed with fluorescent Lp or Bt. In larvae infected with Bt, which is blocked in the anterior part of the gut, the dpt gene is predominantly induced in this region, indicating strong IMD pathway activity in response to Bt infection. Conversely, in larvae fed with Lp-GFP, the Dpt-Cherry reporter shows weak expression in the anterior midgut, and is barely detectable in the posterior midgut where Lp-GFP establishes itself. This aligns with previous findings by Bosco-Drayon et al. (2012), which demonstrated low AMP expression in the posterior midgut due to the presence of negative regulators of the IMD pathway, such as amidases and Pirk.

      Are they mostly expressed in the anterior midgut in both bacterial infections? Several papers have shown quite different IMD activity patterns in the Drosophila gut. Zhai et al. have shown that in adult Drosophila, IMD activity was mostly absent in the R2 region as indicated by dpt-lacZ. Vodovar et al. have shown that the expression of dpt-lacZ is observable in proventriculus while Pe is not in the same region. Tzou et al. showed that Ecc15 infection induced IMD activity in the anterior midgut 24 hours after infection. 

      Based on our new data (Supplementary Figure 11), we observe that Dpt-RFP expression is primarily localized in the anterior midgut and likely at the beginning of the acidic region in larvae infected with Bt, Ecc and Lp.

      Using TrpA1 and Dh31 mutants, the authors found both Ecc15 and Bt in the posterior midgut. Why are they not evenly distributed along the gut? 

      We observe that bacteria are also not evenly distributed along the gut in wild-type larvae fed with Lp. This suggests that the transit time in the anterior part of the gut may be relatively short due to active peristaltism, which would make this region function as a "checkpoint" for bacteria that are not supposed to be blocked. Indeed, we confirmed that peristaltism is active during our intoxication experiments, which could explain the rapid movement of bacteria through the anterior midgut.

      In contrast, bacteria tend to remain longer in the posterior midgut, which corresponds to the absorptive functions of intestinal cells in this region. This would explain why we observe more bacteria in the posterior midgut for Lp in control larvae and for Ecc15 and Bt in the TrpA1- and Dh31- mutants. Although a few bacteria are still found in the anterior midgut, they are consistently in much lower numbers compared to the posterior, as shown in Figures 1A and 3A of our manuscript.

      Last but not least, does the ROS/TrpA1/Dh31 axis affect AMP expression?

      We investigated whether the ROS/TrpA1/Dh31 axis influences AMP expression by performing RT-qPCR on the whole gut of larvae in wild-type, TrpA1-, and Dh31- genetic backgrounds. Larvae were fed with Lp, Ecc, Bt, or yeast (new data: Supplementary Figure 6). We monitored the expression of three different AMP-encoding genes and found that while AMP expression varied depending on the food content, there were no significant differences in AMP expression between the different genotypes.

      Additionally, we provide imaging data from AMP reporter larvae (pDpt-Cherry) fed with fluorescent Lp or Bt (new data: Supplementary Figure 11). These results further confirm that the ROS/TrpA1/Dh31 axis does not significantly affect AMP expression in our experimental conditions.

      (4) The TARM structure part is quite interesting. However, the authors did not show its relevance in their model. Is this structure the key-driven force for the blocking phenotype and killing phenotype? 

      We agree that the TARM structures are a fascinating aspect of this study and acknowledge the interest in their potential role in the blocking and killing phenotypes. While we are keen to explore the specific contributions of these structures during bacterial intoxication, the current genetic tools available for manipulating TARMs target both TARM T1 and T2 simultaneously, as demonstrated by Bataillé et al., 2020 (Fig. 2). Of note, these muscles are essential for proper gut positioning in larvae, and their absence leads to significant defects in food intake and transit, which would confound the results of our intoxication experiments (see Fig. 6 from Bataillé et al., 2020).

      Therefore, while TARMs are likely involved in these processes, the current limitations in selectively targeting them prevent us from definitively testing their role in bacterial blocking and killing at this stage. We hope to address this in future studies as more refined genetic tools become available.

      Is the ROS/TrpA1/Dh31 axis required to form this structure?

      To determine whether the ROS/TrpA1/Dh31 axis is required for the formation of TARM structures, we examined larval guts from control, TrpA1-, and Dh31- mutant backgrounds. Our new data (Supplementary Figure 8) show that the TARM T2 structures are still present in the mutants, indicating that the formation of these structures does not depend on the ROS/TrpA1/Dh31 axis.

      Reviewer #2 (Public Review):

      This article describes a novel mechanism of host defense in the gut of Drosophila larvae. Pathogenic bacteria trigger the activation of a valve that blocks them in the anterior midgut where they are subjected to the action of antimicrobial peptides. In contrast, beneficial symbiotic bacteria do not activate the contraction of this sphincter, and can access the posterior midgut, a compartment more favorable to bacterial growth.

      Strengths:

      The authors decipher the underlying mechanism of sphincter contraction, revealing that ROS production by Duox activates the release of DH31 by enteroendocrine cells that stimulate visceral muscle contractions. The use of mutations affecting the Imd pathway or lacking antimicrobial peptides reveals their contribution to pathogen elimination in the anterior midgut.

      Weaknesses:

      The mechanism allowing the discrimination between commensal and pathogenic bacteria remains unclear.

      Based on our findings, we hypothesize that ROS play a crucial role in this discrimination process, with uracil release by pathogenic or opportunistic bacteria potentially serving as a key signal.

      To test whether uracil could trigger this discrimination, we conducted experiments where Lp was supplemented with uracil. However, our results show that uracil supplementation alone was not sufficient to induce the blockage response (new data: Supplementary Figure 5). This suggests that while uracil may be a factor in bacterial discrimination, it is likely not the sole trigger, and additional bacterial factors or signals may be required to activate the blockage mechanism. 

      The use of only two pathogens and one symbiotic species may not be sufficient to draw a conclusion on the difference in treatment between pathogenic and symbiotic species.

      To address this concern, we performed additional intoxication experiments using Escherichia coli OP50, a bacterium considered innocuous and commonly used as a standard food source for C. elegans in laboratory settings. The results, presented in our updated data (new data: Fig 1B), show that E. coli OP50, despite being from the same genus as Ecc, does not trigger the blockage response. This further supports our conclusion that the gut’s discriminatory mechanism is specific to pathogenic bacteria, and not merely based on bacterial genus.

      We can also wonder how the process of sphincter contraction is affected by the procedure used in this study, where larvae are starved. Does the sphincter contraction occur in continuous feeding conditions? Since larvae are continuously feeding, is this process physiologically relevant?

      In our intoxication protocol, the larvae are exposed to contaminated food for 1 hour, during which the blockage ratio is quantified. Since this period involves continuous feeding with the contaminated food, we do not consider the larvae starved during the quantification process. Our observations show differences in the blockage response depending on the bacterial contaminant and the genetic background of the host. Additionally, we were able to trigger the blocking phenomenon using exogenous hCGRP.

      Regarding the experimental setup for movie observations, it is true that larvae are immobilized on tape in a humid chamber, which is not a fully physiological context. However, in the new movie we provide (Movie 3), co-treatment with fluorescent Dextran (Red) and fluorescent Bt (Green) shows that both are initially blocked, followed by the posterior release of Dextran once the bacterial clearance begins.

      Furthermore, to address the question of continuous exposure, we extended the exposure period to 20 hours instead of 1 hour. Even after prolonged exposure, we observed that pathogens are still blocked in the anterior part of the gut (new data: Supplementary Figure 2B). This supports the physiological relevance of the sphincter contraction and its ability to function under continuous feeding conditions.

      Reviewer #1 (Recommendations For The Authors):

      (1) The authors performed the experiments on Drosophila larvae. I wonder whether this model could extend to adult flies since they have shown that the ROS/TRPA1/Dh31 axis is important for gut muscle contraction in adult flies. If not, how would the authors explain the discrepancy between larvae and adults?

      We linked the adult phenotype to the one we describe in larvae in order to motivate the candidate approach toward the ROS/TrpA1/Dh31 axis. As we already mention in the discussion, while larvae stay in the food, adult flies can move away. If larvae eject their gut content, they may re-ingest it within minutes. We clarify this idea in the last part of the discussion.

      (2) The authors performed their experiments and proposed the models based on two pathogenic bacteria and one commensal bacterium at a relatively high bacterial dose. They showed that feeding Bt at 2 × 10^10 or Ecc15 at 4 × 10^8 did not induce a blockage phenotype.

      I wonder whether larvae die under conditions of enteric infection with low concentrations of pathogenic bacteria. 

      A video is provided with Bt-GFP at 1.3 × 10^10 CFU/mL (new data: Movie 5). When larvae eat less, there is no blockage and bacteria can reach the posterior midgut. Note that the fluorescence is weak due to the low amount of bacteria ingested. The movie shows excretion of the bacteria. There is also no death of the larvae. Together, these results suggest that below a given threshold, the virulence of the bacteria is too weak to (i) trigger a blockage and (ii) kill the larva. The bacteria are likely eliminated through classical peristaltism.

      If larvae do not show mortality, what is the mechanism for resisting low concentrations of pathogenic bacteria? 

      Maybe we are below the threshold of virulence. See our response just above.

      Why is this model only applied to high-dose infections? 

      As mentioned in the manuscript, lower concentrations do not trigger the blockage and for lower concentrations with a GFP signal still detectable, wild-type animals resist the presence of live-bacteria within the posterior part of the intestine.

      About the doses, the CFU should be considered. Indeed, there are around 5 × 10^4 CFU per midgut. In our experimental procedure, we calculate the amount of bacteria for 500 µl of contaminated medium (i.e. 4 × 10^10 CFU per 500 µl of medium). Around 50 larvae were then deposited in the 500 µl of contaminated medium. In this condition, one larva ingests 5 × 10^4 CFU. Moreover, larvae are only fed for 1 h.

      So (1) continuous feeding may also trigger locking even at lower doses, and (2) the other defense mechanisms (such as ROS) or peristalsis may be sufficient to eliminate lower doses (i.e. 10^3 CFU or below). See the new Movie 5 we provide with Bt-GFP at 1.3 × 10^10 CFU/mL.

      (3) The authors claim that the lock of bacteria happens at 15 minutes while killing by AMPs happens 6-8 hours later. 

      Our CFU data indicate that it’s after 4 to 6 hours that the quantity of bacteria decreases. We fixed this in the text.

      What happened during this period? 

      ROS activity (bacteriostatic and bacteriolytic), IMD pathway activation, and AMP transcription, translation and secretion, followed by their bacteriostatic as well as bacteriolytic activity.

      More importantly, is IMD activity induced in the anterior region of the larval gut in both Ecc15 and Bt infection at 6 hours after infection? 

      We provide new larval whole-gut RT-qPCR data in wt, TrpA1- and Dh31- genetic backgrounds fed with Lp, Ecc, Bt, or yeast only (new data: SUPP6). We monitored 3 different AMP-encoding genes and found differences related to the food content, but no differences between genotypes. In addition, we provide images from AMP reporter animals (Dpt-Cherry) fed with fluorescent Lp or Bt (new data: SUPP11), showing that with Bt blocked in the anterior part of the intestine, the dpt gene is mainly induced in this area. Note that in the larva infected with Lp-GFP, the Dpt-Cherry reporter is weakly expressed in the anterior midgut. In the posterior midgut, where Lp-GFP establishes itself, Dpt-Cherry is barely detectable. This observation is in line with the previous observation made by Bosco-Drayon et al. (2012) demonstrating the low level of AMP expression in the posterior midgut, due to the expression of IMD negative regulators such as amidases and Pirk. In the larva infected with Bt-GFP, note the obvious expression of Dpt-Cherry in the anterior midgut colocalizing with the bacteria (new data: SUPP11).

      Are they mostly expressed in the anterior midgut in both bacterial infections? Several papers have shown quite different IMD activity patterns in the Drosophila gut. Zhai et al. have shown that in adult Drosophila, IMD activity was mostly absent in the R2 region as indicated by dpt-lacZ. Vodovar et al. have shown that the expression of dpt-lacZ is observable in proventriculus while Pe is not in the same region. Tzou et al. showed that Ecc15 infection induced IMD activity in the anterior midgut 24 hours after infection. 

      In ctrl animals fed Bt, Ecc and Lp we see Dpt-RFP in anterior midgut and likely in the beginning of acidic region. See the new data: SUPP11 images provided for the previous remark.

      Using TrpA1 and Dh31 mutants, the authors found both Ecc15 and Bt in the posterior midgut. Why are they not evenly distributed along the gut? 

      The same is true for Lp in wt animals; it is not evenly distributed. It is as if the transit time in the anterior part is very short due to peristalsis, which would fit with this region acting as a checkpoint area for bacteria that are not supposed to be blocked. Indeed, peristalsis is active during our intoxications. The bacteria then stay longer in the posterior part, fitting with the absorptive function of the intestinal cells in this area. With Lp in ctrl animals, or Ecc and Bt in TrpA1- and Dh31- mutants, there are always a few bacteria in the anterior midgut, but always far fewer than in the posterior. See our Figures 1A and 3A.

      Last but not least, does the ROS/TrpA1/Dh31 axis affect AMP expression?

      We provide larval whole-gut RT-qPCR data from wt, TrpA1- and Dh31- genetic backgrounds fed with Lp, Ecc, Bt or yeast only (new data: SUPP6). We monitored 3 different AMP-encoding genes and found differences related to the food content, but no differences between genotypes. In addition, we provide images from AMP reporter animals (pDpt-Cherry) fed with fluorescent Lp or Bt (new data: SUPP11).

      (4) The TARM structure part is quite interesting. However, the authors did not show its relevance in their model. Is this structure the key-driven force for the blocking phenotype and killing phenotype? 

      Indeed, we would like to explore the roles of these structures and their putative requirement upon bacterial intoxication using some of the driver lines developed by the team that studied these muscles in vivo. However, the genetic tools currently available will target TARMs T1 and T2 at the same time (see Fig 2 from Bataillé et al., 2020). Moreover, these TARMs are, at first, crucial for the correct positioning of the gut within the larvae, and their absence leads to a global food-intake and transit defect that would bias the outcomes of our intoxication protocol (see Fig 6 from Bataillé et al., 2020).

      Is the ROS/TrpA1/Dh31 axis required to form this structure?

      We provide images of larval guts from ctrl, TrpA1 and Dh31 mutants demonstrating the presence of the TARMs T2 structures despite the mutations (new data: SUPP8). In addition, we provide representative movies of peristalsis in intestines of Dh31 mutants fed or not with Ecc to illustrate that muscular activity is not abolished (new data: Movie 9 and Movie 10).

      Minor points:

      (1) Why not use the Pros-Gal4/UAS-Dh31 strain in Figure 3B in addition to hCGRP?

      We opted for exogenous hCGRP addition because it allowed us precise timing control over Dh31 activation. Overexpression of Dh31 from embryogenesis or early larval stages could have significant and unintended effects on intestinal physiology, potentially confounding the results. While temporal control using TubG80ts could be an alternative, our focus was on identifying the specific cells responsible for the phenomenon.

      To achieve this, we perturbed Dh31 production via RNAi, specifically targeting a limited number of enteroendocrine cells (EECs) using the DJ752-Gal4 driver, as described by Lajeunesse et al., 2010. Our new data (Supplementary Figure 4) demonstrate that Dh31 expression in this subset of cells is indeed necessary for the blockage phenomenon.

      (2) Section title (line 287) refers to mortality, but no mortality data is in the figure.

      We agree that the title referenced mortality, whereas no mortality data was presented in this section. We have updated the title to better reflect the data discussed in this part of the manuscript.

      (3) It may be better to combine ROS-related contents in the same figure.

      While it is technically feasible to consolidate the ROS-related content into one figure, doing so would require splitting essential data, such as the Gal4 controls for the RNAi assays and parts of the survival phenotype data. We believe that the current structure of the study, which first explores the molecular aspects of the phenomenon and then demonstrates its relevance to the animal’s survival, provides a clearer and more logical flow. For these reasons, we prefer to maintain the current figure layout.

      Reviewer #2 (Recommendations For The Authors):

      Major recommendation

      (1) Other wild-type backgrounds should be added (including the w Drosdel background of the AMP14 deficient flies) to check the robustness of the phenotype.

      To address the concern regarding the robustness of the phenotype across different wild-type backgrounds, we have tested additional genetic backgrounds, including w1, the isogenized w1118 and Oregon animals.

      The results (new data: Figure 1C) demonstrate that Lp is able to transit freely to the posterior part of the intestine in all backgrounds, while Ecc and Bt are blocked in the anterior part. These findings confirm the robustness of the phenotype across different wild-type strains.

      (2) Although we recognize that this may be limited by the number of GFP-expressing species, other commensal and pathogenic bacteria should be tested in this assay (e.g. E. faecalis and Acetobacter).

      We performed new intoxication experiments using Escherichia coli OP50, a well-established innocuous bacterial strain. The data, presented in Figure 1B (new data), show that E. coli OP50, despite being from the same genus as Ecc, does not trigger the blockage response. This further supports our hypothesis that the blockage phenomenon is specific to pathogenic bacteria and not simply related to the bacterial genus.

      (3) It is important to test whether sphincter closure also occurs in continuous feeding conditions. This does not mean repeating all the experiments but just shows that this mechanism can take place in conditions where larvae are kept in a vial with food.

      While the movies we provide involve larvae immobilized on tape in a humid chamber, which is not a fully physiological context, we now provide new data (Movie 3) showing that, after co-treatment with fluorescent Dextran (Red) and fluorescent Bt (Green), both substances are initially blocked in the anterior midgut. Later, the dextran is released posteriorly once bacterial clearance has begun.

      Additionally, we extended the feeding period in our experiments from 1 hour to 20 hours to simulate more continuous exposure to contaminated food. Even under these prolonged conditions, we observed that pathogens are still blocked in the anterior part of the gut (new data: Supplementary Figure 2B). This confirms that the sphincter mechanism can function in continuous feeding conditions as well.

      (4) What are the molecular determinants discriminating innocuous from pathogenic bacteria? Addressing this point will increase the impact of the article. The fact that Relish mutants have normal valve constriction suggests that peptidoglycan recognition is not involved. Is there a sensing of pathogen virulence factors? 

      Our data suggest that uracil could be a key molecular determinant in discriminating between innocuous and pathogenic bacteria, as previously described by the W-J Lee team in several studies on adult Drosophila. However, in our experiments, exogenous uracil addition using the blue dye protocol (Keita et al., 2017) did not induce any significant changes in the larvae. Similarly, uracil supplementation in adult flies failed to trigger the Ecc expulsion and gut contraction phenotype, as reported by Benguettat et al., 2018. 

      To further investigate this, we tested the addition of uracil during Lp-GFP intoxication. In these experiments, we did not observe any blockage of Lp (new data: Supplementary Figure 5). These results suggest that uracil might not be the sole trigger for the blockage response, or we may not be providing uracil exogenously in the most effective way. Alternatively, there could be other pathogen-specific virulence factors that contribute to this discrimination mechanism.

      To address this question, the authors should infect larvae with Ecc15 evf- mutants or Ecc15 lacking uracil production. 

      Thank you for your suggestion to use Ecc15 evf- mutants or Ecc15 lacking uracil production to explore the role of uracil in bacterial discrimination. While we have provided some data using uracil supplementation (new data: Supplementary Figure 5), we agree that testing mutants like PyrE would be an important next step. Unfortunately, we currently lack access to fluorescent PyrE or Ecc15 evf- mutants.

      We are planning to address this by developing a new protocol involving fluorescent beads alongside bacteria. This approach will allow us to test several bacterial strains in parallel and better define the size threshold of the valve. However, we do not have the relevant data yet, but this will be a key focus of our future work.

      Similarly, does feeding heat-killed Ecc15 or Bt induce sequestration in the anterior midgut (larvae may be fed dextran-FITC at the same time to track bacteria)?

      Unfortunately, in our attempts to test heat-killed or ethanol-killed fluorescent Ecc15 for these experiments, we encountered an issue: while we were able to efficiently kill the bacteria, we lost the GFP signal required to track their position in the gut. This made it challenging to assess whether sequestration in the anterior midgut occurs with non-viable bacteria.

      Is uracil or Bt toxin feeding sufficient to induce valve closure? 

      As previously mentioned, uracil is a strong candidate for bacterial discrimination, and we have tested its role by adding exogenous uracil during Lp-GFP intoxication. However, in these experiments, Lp was not blocked (new data: Supplementary Figure 5). This suggests that uracil alone may not be sufficient to induce valve closure, or it may not be the only factor involved. It is also possible that our method of exogenous uracil supplementation may not be effectively mimicking the endogenous conditions.

      Regarding Bt, we used vegetative cells without Cry toxins in our experiments. Cry toxins are only produced during sporulation and are enclosed in crystals within the spore. The Bt strain we used, 4D22, has been deleted for the plasmids encoding Cry toxins. As a result, there were no Cry toxins present in the Bt-GFP vegetative cells used in our assays. This has been clarified in the Materials and Methods section of the manuscript.

      Would Bleomycin induce the same phenotype? 

      Indeed, Bleomycin, as well as paraquat, has been shown to damage the gut and trigger intestinal cell proliferation in adult Drosophila through mechanisms involving TrpA1. Testing whether Bleomycin induces a similar phenotype in larvae would indeed be interesting.

      However, one challenge we face in our intoxication protocol is that larvae tend to stop feeding when chemicals are added to their food mixture. We encountered similar difficulties in our DTT experiments, which were challenging to set up for this reason. Consequently, we aim to avoid approaches that might impair the general feeding activity of the larvae, as it can significantly affect the outcomes of our experiments.

      Could this process of sphincter closure be more related to food poisoning?

      If gut damage were the primary trigger for sphincter closure, we would indeed expect the blockage phenomenon to occur later following bacterial exposure. However, in our experiments, we observe the blockage occurring early after bacterial contact, suggesting that damage may not be the main trigger for this response.

      That said, we have not yet tested bacterial mutants lacking toxins, nor have we tested a direct damaging agent such as Bleomycin, as proposed. These would be valuable future experiments to explore the potential role of gut damage more thoroughly in this process.

      (5) Is Imd activation normal in trpA1 and DH31 mutants? The authors could use a diptericin reporter gene to check if Diptericin is affected by a lack of valve closure in trpA1.

      To address this, we performed RT-qPCR on whole larval guts from wt, TrpA1[1] and Dh31[KG09001] genetic backgrounds. Larvae were fed with Lp, Ecc, Bt or yeast only (new data: SUPP6). We monitored the expression of three different AMP-encoding genes and found that while AMP expression varied depending on the food content, there were no significant differences in AMP expression between the genotypes.

      Additionally, we provide imaging data from AMP reporter animals (pDpt-Cherry) in a wild-type background, fed with fluorescent Lp or Bt (new data: Supplementary Figure 11). These images also support the conclusion that Diptericin expression is not significantly affected by a lack of valve closure in trpA1 and Dh31 mutants.

      (6) Are the 2-6 DH31 positive cells the same cells described by Zaidman et al., Developmental and Comparative Immunology 36 (2012) 638-647.

      The cells identified as hemocytes in the midgut junctions by Zaidman et al. are likely the same cells we describe in our study, as they are located in the same region and are Dh31 positive. We have added a reference to this paper and included lines in the manuscript acknowledging this connection.

      Although confirming whether these cells are Hml+, Dh31+, and TrpA1+ would clarify their exact identity, this falls outside the scope of our current study. However, the possibility that these cells play a role in physical barrier immunity and also possess a hemocyte identity is indeed intriguing, and we hope future research will explore this further.

      Minor points

      (1) The mutations should be appropriately labelled with the allele name.

      This has been fixed in the main text, in Fig Legends, and in figures. 

      (2) Line 230-231: the sentence is unclear to me.

      We simplified the sentence and do not refer to the expulsion in larvae.

      (3) Discussion: although the discussion is already a bit long, it would be interesting to see if this process is likely to happen/has been described in other insects (mosquito, Bactrocera, ...).

      We reviewed the available literature but were unable to find specific examples describing the blockage phenomenon in other insects. Most studies we found focused on symbiotic bacteria rather than pathogenic or opportunistic bacteria. However, as mentioned in our manuscript, the anterior localization of opportunistic or pathogenic bacteria has been observed in Drosophila by independent research groups.

      (4) Line 546: add the Caudal Won-Jae Lee paper to state the posterior midgut is less microbicidal.

      We added the reference at the right place, mentioning as well that it concerns adults. 

      (5) Figure 6: indicate what the cells shown by the arrow are.

      The sentence ‘the arrows point to TARMs’ is present in the legend of Fig 6.

      (6) Does the sphincter closure depend on hemocytes?

      As mentioned above, the cells we identify as TrpA1+ in the midgut junction may be the same cells described by Zaidman et al., 2012, and earlier by Lajeunesse et al., 2010. Inactivating hemocytes using the Hml-Gal4 driver may also affect these Dh31+ cells, as they share similarities with hemocytes, as pointed out by Zaidman et al. However, distinguishing between hemocytes and Dh31+/TrpA1+ cells would require a genetic intersectional approach, which is beyond the scope of our current study.

      Nevertheless, the possibility that these cells play a dual role in immunity (through blockage) and share characteristics with hemocytes while functioning as enteroendocrine cells (EECs) is quite intriguing and deserves further exploration in future studies.

  7. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. teachers commonly thought of children raised in poverty with sym-pathy but without an understanding of how profoundly their chances for success were diminished by their situation.

      Answering the question, I honestly don't know how I would feel about my kids being in their class, just because of the effort described in the reading. As much as people try to justify why they give better education to those with higher income because they deserve it, it's still not justifiable. The sympathy is true because we try to be understanding that they have so much going on at home.

    1. Sign-off #1: Demonstrate to an SA that your system is configured properly and the MATLAB sample codein lab1_base.m works as expected (the command is sent to the arm and sensor data is received)

      This sign-off is perfect as is--maybe it's so simple it's not even necessary. Run the provided code, show me that your robot moves in an arc, that's all I need to see.

      Honestly I don't even know if there are any comprehension questions I can add that align with the LOs of the course. This is just a logistical step.


    1. reply to u/ArousedByApostasy at https://old.reddit.com/r/Zettelkasten/comments/1g8diq4/any_books_about_how_someone_used_zettelkasten_to/

      If you're suffering from the delusion (and many do) that Zettelkasten is only about Luhmann and his own writing and 4-5 recent books on the topic, you're only lacking creativity and some research skills. Seemingly Luhmann has lots of good PR, particularly since 2013, but this doesn't mitigate the fact that huge swaths of the late 1800s to the late 1900s are chock-a-block full of books produced by these methods. Loads of examples exist under other names prior to that including florilegia, commonplace books, the card system, card indexes, etc.

      Your proximal issue is that the scaffolding used to write all these books is generally invisible because authors rarely, if ever, talk about their methods and as a result, they're hard to "see". This doesn't mean that they don't exist.

      I've got a list of about 50+ books about the topic of zettelkasten or incredibly closely related methods dating back to 1548 if you want to peruse some: https://www.zotero.org/groups/4676190/tools_for_thought/collections/V9RPUCXJ/tags/note%20taking%20manuals/items/F8WSEABT/item-list

      There are a variety of examples of people's note collections that you can see in various media and compare to their published output. I've collected several dozens of examples, many of which you can find here: https://boffosocko.com/research/zettelkasten-commonplace-books-and-note-taking-collection/

      Interesting examples to get you started:

      • Vladimir Nabokov's estate published copies of his index cards for the novel The Original of Laura which you can purchase and read in its index card format. You can find a copy of his index card diary as Insomniac Dreams from Princeton University Press: https://press.princeton.edu/books/paperback/9780691196909/insomniac-dreams
      • S.D. Goitein - researchers on the Cairo Geniza still use his note collection to produce new scholarship; though he had 1/3 the number of note cards compared to Luhmann, his academic writing output was 3 times larger. If you dig around you can find a .pdf copy of his collection of almost 30,000 notes and compare it to his written work.
      • There's a digitized collection of W. Ross Ashby's notes (in notebook and index card format) which you can use to cross reference his written books and articles. https://ashby.info/
      • Wittgenstein had a well-known note collection which underpinned his works (as well as posthumous works). See: Wittgenstein, Ludwig. Zettel. Edited by Gertrude Elizabeth Margaret Anscombe and Georg Henrik von Wright. Translated by Gertrude Elizabeth Margaret Anscombe. Second California Paperback Printing. 1967. Reprint, Berkeley and Los Angeles, California: University of California Press, 2007.
      • Roland Barthes had a significant collection from which he both taught and wrote; his notes following his mother's death can be read in the book Mourning Diary, which was published as index card-based notes.
      • The Marbach exhibition in 2013 explored six well-known zettelkasten (including Luhmann's): Gfrereis, Heike, and Ellen Strittmatter. Zettelkästen: Maschinen der Phantasie. 1st edition. Marbach am Neckar: Deutsche Schillerges, 2013. https://www.amazon.de/-/en/Heike-Gfrereis/dp/3937384855/.
      • Philosopher John Locke wrote a famous treatise on indexing commonplace books which underlay his own commonplacing and writing work: Locke, John, 1632-1704. A New Method of Making Common-Place-Books. 1685. Reprint, London, 1706. https://archive.org/details/gu_newmethodmaki00lock/mode/2up.
      • Historian Jacques Barzun, a professor, dean and later provost at Columbia, not only wrote dozens of scholarly books, articles, and essays out of his own note collection, but also wrote a book about some of the process in a book which has over half a dozen editions: Barzun, Jacques, and Henry F. Graff. The Modern Researcher. New York, Harcourt, Brace, 1957. http://archive.org/details/modernreseracher0000unse. In his private life, he also kept a separate shared zettelkasten documenting the detective fiction he read and of which he was a fan. From this he produced A Catalogue of Crime: Being a Reader's Guide to the Literature of Mystery, Detection, and Related Genres (with Wendell Hertig Taylor). 1971. Revised edition, Harper & Row, 1989: ISBN 0-06-015796-8.
      • Erasmus, Agricola, and Melanchthon all wrote treatises which included a variation of the note taking methods which were widely taught in the late 1500s at universities and other schools.
      • The Jonathan Edwards Center at Yale has a digitized version of his note collection called the Miscellanies that you can use to cross reference his written works.
      • A recent example I've come across but haven't mentioned to others until now is that of Barrett Wendell, a professor at Harvard in the late 1800s, taught composition using a zettelkasten or card system method.
      • Director David Lynch used a card index method for writing and directing his movies based on the method taught to him by Frank Daniel, a dean at the American Film Institute.
      • Mortimer J. Adler et al. created a massive group zettelkasten of western literature from which they wrote volumes 2 and 3 (aka The Syntopicon) of the Great Books of the Western World. See: https://forum.zettelkasten.de/discussion/2623/mortimer-j-adlers-syntopicon-a-topically-arranged-collaborative-slipbox
      • Before he died, historian Victor Margolin made a YouTube video of how he wrote the massive two volume World History of Design which included a zettelkasten workflow: https://www.youtube.com/watch?v=Kxyy0THLfuI
      • Martin Luther King, Jr. kept a zettelkasten which is still extant and might allow you to reference his notes to his written words.
      • The Brothers Grimm used a zettelkasten method (though theirs was slips nailed to a wall) to create The Deutsches Wörterbuch (The German Dictionary that preceded the Oxford Dictionary). The DWB was begun in 1838 by Jacob Grimm and Wilhelm Grimm who worked on it through the letter F prior to their deaths. The dictionary project was ended in 1961 after 123 years of work which resulted in 16 volumes. A further 17th source volume was released in 1971.
      • Here's an interesting video of Ryan Holiday's method condensed over time: https://www.youtube.com/watch?v=dU7efgGEOgk
      • Because Halloween is around the corner, I'll even give you a published example of death by zettelkasten described by Nobel Prize winner Anatole France in one of his books: https://boffosocko.com/2022/10/24/death-by-zettelkasten/

      If you dig in a bit you can find and see the processes of others like Anne Lamott, Gottfried Wilhelm Leibniz, Bob Hope, Michael Ende, Twyla Tharp, Kate Grenville, Marcel Mauss, Claude Lévi-Strauss, Phyllis Diller, Carl Linnaeus, Beatrice Webb, Isaac Newton, Harold Innis, Joan Rivers, Umberto Eco, Georg Christoph Lichtenberg, Raymond Llull, George Carlin, and Eminem who all did variations of this for themselves for a variety of output types.

      These barely scratch the surface of even Western intellectual history much less other cultures which have broadly similar methods (including oral cultures). If you do a bit of research into any major intellectual, you're likely to uncover a similar underlying method of work.

      While there are some who lionize Luhmann, he didn't invent or even perfect these methods, but is just a drop of water in a vast sea of intellectual history.

      And how did I write this short essay response? How do I have all these examples to hand? I had your same question years ago and read and researched my way into an answer. I have both paper and digital zettelkasten from which to query and write. I don't count my individual paper slips of which there are over 15,000 now, but my digital repository is easily over 20,000 (though only 19K+ are public).

      I hope you manage to figure out some version of the system for yourself and manage to create something interesting and unique out of it. It's not a fluke and it's not "just a method for writing material about zettelkasten itself".

    1. Welcome back and in this demo lesson you're going to get the experience of bootstrapping an EC2 instance using user data.

      So this is the ability to run a script during the provisioning process for an EC2 instance and automatically add a certain configuration to that instance during the build process.

      So this is an alternative to creating a custom AMI.

      Earlier in the course you created an Amazon machine image with the WordPress installation and configuration baked in.

      Now that's really quick and simple but it does limit your ability to make changes to that configuration.

      So the configuration is baked into the AMI and so you're limited as to what you can change during launch time.

      With boot strapping you have the ability to perform all the steps in the form of a script during the provisioning process and so it can be a lot more flexible.

      Now to get started we need to create the Animals for Life VPC within our general AWS account.

      So this is the management account of the organization.

      So make sure that you're logged into the IAM admin user of this account and as always make sure you have the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment link so go ahead and open that.

      This is going to take you to the quick create stack page and everything should be pre-populated.

      The stack name should be bootstrap; everything else has appropriate defaults, so just scroll down to the bottom, check the capabilities acknowledgement box and then go ahead and click on create stack.

      Now this will create the Animals for Life VPC which contains the public subnets that we'll be launching our instance into and so we're going to need this to be in a create complete state before we move on.

      So go ahead and pause the video and once your stack changes from create in progress to create complete then we're good to continue.

      Okay so now that that stack has moved into a create complete state we're good to continue.

      Now also attached to this lesson is another link which is the user data that we're going to use for this demo lesson so go ahead and open that link.

      This is the user data that we're going to use to bootstrap the EC2 instance so what I want you to do is to download this file to your local machine and then open it in a code editor or alternatively just copy all the text on screen now and paste that into a code editor.

      So I've gone ahead and opened that file in my text editor and if you look through all of the different commands contained within this user data .txt file then you should recognize some of them.

      These are basically the commands that we ran earlier in the course when we manually installed word press and when we created the Amazon machine image.

      So we're essentially installing the MariaDB database server, the Apache web server, Wget and Cowsay.

      We're installing PHP and its associated libraries.

      We're making sure that both the database and the web server are set to automatically start when the instance reboots and are explicitly started when this script is run.

      We're setting the root password of the MariaDB database server.

      We're downloading the latest copy of the WordPress installation archive.

      We're extracting it and we're moving the files into the correct locations.

      Then we're configuring WordPress by copying the sample configuration file into the final and proper file name so wp-config.php and then we're performing a search and replace on those placeholders and replacing them with our actual chosen values for the database name, the database user and the database password.

      And then after that we're fixing up the permissions on the web root folder with the WordPress installation files inside so we're making sure that the ownership is correct and then we're fixing up the permissions with a slightly improved version of what we've used previously.

      Then we're creating our DB.setup script in the same way that we did when we were manually installing WordPress.

      We're logging into the database using the MySQL command line utility, authenticating as the root user with the root password and then running this script and this creates the WordPress database, the user sets the password and gives that user permissions on the database.

      And then finally we're configuring the Cowsay utility so we're setting up the message of the day file we're outputting our animals for life custom greeting and then we're forcing a refresh of the login banner.

      So these are all of the steps that you've previously done manually so I hope it's still fresh in your memory just how annoying that manual installation was.
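      To give a feel for the shape of such a script, here is a heavily condensed sketch of a WordPress-style bootstrapping user data script. This is not the course's actual userdata.txt; the package names, passwords and database names below are placeholders for illustration only.

          #!/bin/bash
          # Condensed illustration of a bootstrapping user data script (placeholder values).

          # Install the web server, database server and PHP (package names may vary by distribution)
          dnf -y install httpd mariadb105-server php php-mysqlnd wget

          # Start both services now and enable them on every reboot
          systemctl enable --now httpd mariadb

          # Set the database root password (placeholder)
          mysqladmin -u root password 'EXAMPLE_ROOT_PASSWORD'

          # Download and unpack WordPress into the web root
          wget -q https://wordpress.org/latest.tar.gz -P /tmp
          tar -xzf /tmp/latest.tar.gz -C /tmp
          cp -r /tmp/wordpress/* /var/www/html/

          # Create wp-config.php from the sample file and replace the placeholders
          cp /var/www/html/wp-config-sample.php /var/www/html/wp-config.php
          sed -i 's/database_name_here/wordpressdb/' /var/www/html/wp-config.php
          sed -i 's/username_here/wordpressuser/' /var/www/html/wp-config.php
          sed -i 's/password_here/EXAMPLE_DB_PASSWORD/' /var/www/html/wp-config.php

          # Fix ownership so the web server can serve the files
          chown -R apache:apache /var/www/html

          # Create the database and user, and grant permissions
          mysql -u root -pEXAMPLE_ROOT_PASSWORD -e "CREATE DATABASE wordpressdb; CREATE USER 'wordpressuser'@'localhost' IDENTIFIED BY 'EXAMPLE_DB_PASSWORD'; GRANT ALL ON wordpressdb.* TO 'wordpressuser'@'localhost'; FLUSH PRIVILEGES;"

      A file like this could also be supplied from the CLI at launch, for example with aws ec2 run-instances ... --user-data file://userdata.txt.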

      Okay so at this point this user data is ready to go and I want to demonstrate to you how you can use this to bootstrap an EC2 instance.

      So let's go ahead and move back to the AWS console.

      Once we're at the AWS console this CloudFormation 1 click deployment has created the Animals for Life VPC.

      So what we're going to do is to click on the services drop down and then move to the EC2 console and go ahead and click on launch instance followed by launch instance again.

      So first things first, the instance is going to be called a4l for animals for life - manual WordPress, so go ahead and enter that in the box at the top. Then scroll down, select Amazon Linux, make sure Amazon Linux 2023 is selected in the drop down, and then make sure that you've got 64-bit x86 selected.

      I want you to pick whichever type is free tier eligible within your account and region in my case it's t2.micro but you should pick the one that's free tier eligible.

      Under key pair go ahead and pick proceed without a key pair then scroll down to network settings and click on edit and there are a few items on this page that we need to explicitly configure.

      The first is that we need to select the Animals for Life VPC next to Network, so select a4l-vpc1. Next to Subnet, I want you to go ahead and pick sn-web-a, so that's the web or public subnet within availability zone A, then make sure Auto-assign public IP is set to Enable.

      We'll be using an existing security group, so check that box, and then in the drop down select the bootstrap-instance security group. Bootstrap was the name of the CloudFormation stack that we created using the one-click deployment.

      We won't be making any changes to the storage configuration. Next we need to scroll down to an option that we've not used before: we're going to enter some user data. So scroll all the way down, and under Advanced details, expand this if it isn't already; you're looking for the User data box.

      What we're going to do is paste in the user data that you just downloaded. In my case this is the userdata.txt which I downloaded, so I'm going to go ahead and select all of the information in this userdata.txt, making sure I get everything including the last line, and I'm going to copy that into my clipboard.

      Now back at the AWS console we need to paste that into the User data box. By default EC2 accepts user data as base64-encoded data, so we need to provide it with base64-encoded data, and we're not - we're just giving it a normal text file. In this case the user interface can actually do the conversion for us, so if what you're pasting in is not base64 encoded - and what we're pasting in isn't - then we don't need to do anything else. If we're pasting in data which is already base64 encoded, we need to check the box below the User data box. We don't need to worry about that because we're not pasting in anything with base64 encoding, so we can just paste our user data directly into this box, and it will be run during the instance launch process. So this is where our automatic configuration comes from; this is what will bootstrap the EC2 instance.

      Okay, so that's everything we need to configure, so go ahead and click on launch instance.

      Now at this point, while this is launching, I want you to keep in mind that in the previous demo examples in this course we manually launched an instance, and then once the instance was in a running state we had to connect into it, download WordPress, install WordPress and then configure WordPress along with all of the other associated dependencies that WordPress requires. So that was a fairly time-intensive process that was open to errors. In the AMI example we followed that same process, but at the end we created the Amazon Machine Image. So keep that in mind and compare it to what your experience is in this demo lesson.

      So now we've launched the instance and it's in a running state, and we've provided some user data to this instance. I want you to leave it a couple of minutes after it's showing in a running state, just to give it a brief while to perform that additional configuration.

      After a few minutes, go ahead and right-click on that instance and select connect. We're going to be using EC2 Instance Connect, so make sure that's selected, make sure the user is set to ec2-user, and then just click connect.

      Now what you should see, if we've given this enough time, is our custom Animals for Life login banner, and that means that the bootstrapping process has completed. Think about this for a minute: as part of the launch process, EC2 has provisioned us an EC2 instance and it's also run a relatively complex installation and configuration script that we've supplied in the form of user data, and that's downloaded and installed WordPress and configured our custom login banner.

      If we go back to EC2, select instances, and then copy the public IP address into our clipboard - so copy the actual IP address, do not click on this link because this will open it using HTTPS which we haven't configured - and open that IP address in a new tab, you'll see the installation dialogue for WordPress. That's because the bootstrapping process using the user data has done all the configuration that previously we've had to do manually.

      Now if we go back to the instance, I want to demonstrate architecturally and operationally exactly how this works. What we can do is use the curl utility to review the instance metadata. Because we're using Amazon Linux 2023 we need to do this slightly differently: we need to use version 2 of the metadata service. So first we need to run this command to get a token which we can use to authenticate to the metadata service, so run this. Next we can run this command which gets us the metadata of the instance, and this uses the 169.254.169.254 address, or as I like to call it, 169.254 repeating.

      Now if we use this with meta-data on the end, then we get the metadata service, but as we know, user data is a component of the metadata service. So instead of using /latest/meta-data we can replace meta-data with user-data, and this will allow us to see the user data supplied to the instance. And don't worry, all of these commands will be attached to the lesson.

      So you should recognize this: this is the user data that we passed into the instance. This has performed a download, a configuration and an installation of Apache, the database server and WordPress, as well as our custom login banner. So that's how the user data gets into the EC2 instance, and there's a service running on the EC2 instance which takes this data and automatically performs these configuration steps. Essentially this is run as a script on the operating system.

      Now something else we can do is to move into the /var/log folder, which contains many of the system logs. If we do an ls -la we'll see a collection of logs within this folder. There are two logs in particular that are really useful for diagnosing bootstrapping-related problems: cloud-init.log and cloud-init-output.log, and both of these are used for slightly different reasons.

      So what I want to do is to output one of these logs and show you the content. We're going to use sudo first to get admin permissions, then cat, and we're going to use cloud-init-output.log, and I'm going to press enter. That's going to show you the contents of this file, and you'll be able to see, using this log file, exactly what's been executed on this EC2 instance - all of the actual commands and the output from those commands as they've been executed.

      So you'll be able to see all of the WordPress-related downloads and copies, the replacements of the database usernames and passwords, the permissions fix section, the database creation, user creation and the permissions on that database, as well as the command that actually executes those, and then right at the bottom is where we configure our custom login banner. This is how you can see exactly what's been run on this EC2 instance, and if you ever encounter any issues with any of the demo lessons within this course or any of my courses, then you can use this file to determine exactly what's happened on the EC2 instance as part of the bootstrapping process.

      Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side, and so I wanted to add a break; it's an opportunity just to take a rest or grab a coffee. Part two will be continuing immediately from the end of part one, so go ahead, complete the video, and when you're ready, join me in part two.
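      For reference, the metadata commands described in this part of the lesson look roughly like the following; the IMDSv2 token request and the 169.254.169.254 endpoint are the standard mechanism, and the log file name matches the one shown above.

          # Request an IMDSv2 session token, then use it to query metadata and user data
          TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
            -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

          curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/
          curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/user-data

          # The bootstrapping logs live in /var/log
          sudo cat /var/log/cloud-init-output.log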

    1. God’s word

      That part is true... But if the bible is "God's word," but it's the bible that says God exists, then is it not just the same source citing itself?

    1. Welcome back and in this demo lesson you're going to be creating an ECS cluster with the Fargate cluster mode and using the container of CATS container that we created together earlier in this section of the course, you're going to deploy this container into your Fargate cluster.

      So you're going to get some practical experience of how to deploy a real container into a Fargate cluster.

      Now you won't need any cloud formation templates applied to perform this demo because we're going to use the default VPC.

      All that you'll need is to be logged in as the IAM admin user inside the management account of the organization and just make sure that you're in the Northern Virginia region.

      Once you've confirmed that then just click in Find Services and type ECS and then click to move to the ECS console.

      Once you're at the ECS console, step one is to create a Fargate cluster.

      So that's the cluster that our container is going to run inside.

      So click on clusters, then create cluster.

      You'll need to give the cluster a name.

      You can put anything you want here, but I recommend using the same as me and I'll be putting all the CATS.

      Now Fargate mode requires a VPC.

      I'm going to be suggesting that we use the default VPC because that's already configured, remember, to give public IP addresses to anything deployed into the public subnets.

      So just to keep it simple and avoid any extra configuration, we'll use the default VPC.

      Now it should automatically select all of the subnets within the default VPC, in my case all six.

      If yours doesn't, just make sure you select all of the available subnets from this dropdown, but it should do this by default.

      Then scroll down and just note how AWS Fargate is already selected and that's the default.

      If you wanted to, you could check to use Amazon EC2 instances or external instances using ECS anywhere, but for this demo, we won't be doing that.

      Instead, we'll leave everything else as default, scroll down to the bottom and click create.
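      For reference, the console steps above correspond roughly to a single CLI call; the cluster name below assumes the demo's "all the cats" name is written as one word.

          aws ecs create-cluster --cluster-name allthecats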

      If this is the first time you're doing this in an AWS account, it's possible that you'll get the error that's shown on screen now.

      If you do get this error, then what I would suggest is to wait a few minutes, then go back to the main ECS console, go to cluster again and then create the all the cats cluster again.

      So follow exactly the same steps: call the cluster all the cats, make sure that the default VPC is selected and all those subnets are present, and then click on create.

      You should find that the second time that you run this creation process, it works okay.

      Now this generally happens because there's an approval process that needs to happen behind the scenes.

      So if this is the first time that you're using ECS within this AWS account, then you might get this error.

      It's nothing to worry about, just rerun the process and it should create fine the second time.

      Once you've followed that process through again, or if it works the first time, then just go ahead and click on the all the cats cluster.

      So this is the Fargate based cluster.

      It's in an active state, so we're good to deploy things into this cluster.

      And we can see that we've got no active services.

      If I click on tasks, we can see we've got no active tasks.

      There's a tab here, metrics where you can see cloud watch metrics about this cluster.

      And again, because this is newly created and it doesn't have any activity, all of this is going to be blank.

      For now, that's fine.

      What we need to do for this demonstration is create a task definition that will deploy our container, our container of cats container into this Fargate cluster.

      To do that, click on task definitions and create a new task definition.

      You'll need to pick a name for your task definition.

      Go ahead and put container of cats.

      And then inside this task definition, the first thing to do is set the details of the container for this task.

      So under container details under name, go ahead and put container of cats web.

      So this is going to be the web container for the container of cats task.

      Then next to the name under image URI, you need to point this at the docker image that's going to be used for this container.

      So I'm going to go ahead and paste in the URI for my docker image.

      So this is the docker image that I created earlier in the course within the EC2 docker demo.

      You might have also created your own container image.

      You can feel free to use my container image or you can use yours.

      If you want to keep things simple, you should go ahead and use mine.

      Yours should be the same anyway.

      Now just to be careful, this isn't a URL.

      This is a URI to point at my docker image.

      So it consists of three parts.

      First we have docker.io, which is the docker hub.

      Then we have my username, so acantral.

      And then we have the repository name, which is container of cats.

      So if you want to use your own docker image, you need to change both the username and the repository name.

      Again, to keep things simple, feel free to use my docker image.
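      In other words, the image URI follows the pattern registry/username/repository; with placeholder names it looks like this:

          docker.io/<docker-hub-username>/<repository-name>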

      Then scrolling down, we need to make sure that the port mappings are correct.

      It should show what's on screen now, so container port 80, TCP.

      And then the port name should be the same or similar to what's on screen now.

      Don't worry if it's slightly different and the application protocol should be HTTP.

      This is controlling the port mapping from the container through to the Fargate IP address.

      And I'll talk more about this IP address later on in this demo.

      Everything else looks good, so scroll down to the bottom and click on next.

      We need to specify some environment details.

      So under operating system/architecture, it needs to be linux/x86_64.

      Under task size for memory, go ahead and select 1GB and then under CPU, 0.5 vCPU.

      That should be enough resources for this simple docker application.

      Scroll down and under monitoring and logging, uncheck use log collection.

      We won't be needing it for this demo lesson.

      That's everything we need to do.

      Go ahead and click on next.

      This is just an overview of everything that we've configured, so you can scroll down to the bottom and click on create.

      And at this point, the task definition has been created successfully.

      And this is where you can see all of the details of the task definition.

      If you want to see the raw JSON for the task definition itself, you don't need this for the exam, but this is actually what a task definition looks like.

      So it contains all of this different information.

      What it has got is one or more container definitions.

      So this is just JSON.

      This is a list of container definitions.

      We've only got the one.

      And if you're looking at this, you can see where we set the port mapping.

      So we're mapping port 80.

      You can see where it's got the image URL, which is where it pulls the docker image from.

      This is exactly what a normal task and container definition look like.

      They can be significantly more complex, but this format is consistent across all task definitions.
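      As a rough sketch of the same thing done from the CLI (the values mirror the settings chosen above but are placeholders, not the exact definition created in the console; the image URI uses placeholder names), a minimal Fargate task definition could be registered like this:

          aws ecs register-task-definition \
            --family containerofcats \
            --requires-compatibilities FARGATE \
            --network-mode awsvpc \
            --cpu 512 --memory 1024 \
            --container-definitions '[
              {
                "name": "containerofcats-web",
                "image": "docker.io/<docker-hub-username>/<repository-name>",
                "portMappings": [ { "containerPort": 80, "protocol": "tcp" } ],
                "essential": true
              }
            ]'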

      Okay, so now it's time to launch a task.

      It's time to take the container and task definitions that we've defined and actually run up a container inside ECS using those definitions.

      So to do that, click on clusters and then select the all the cats cluster.

      Click on tasks and then click on run a new task.

      Now, first we need to pick the compute options and we're going to select launch type.

      So check that box.

      If appropriate for the certification that you're studying for, I'll be talking about the differences between these two in a different lesson.

      Once you've clicked on launch type, make sure Fargate is selected in the launch type drop down and latest is selected under platform version.

      Then scroll down and we're going to be creating a task.

      So make sure that task is selected.

      Scroll down again and under family, make sure container of cats is selected.

      And then under revision, select latest.

      We want to make sure the latest version is used and we'll leave desired tasks at one and task group blank.

      Scroll down and expand networking.

      Make sure the default VPC is selected and then make sure again that all of the subnets inside the default VPC are present under subnets.

      The default is that all of them should be selected, in my case six.

      Now the way that this task is going to work is that when the task is run within Fargate, an elastic network interface is going to be created within the default VPC.

      And that elastic network interface is going to have a security group.

      So we need to make sure that the security group is appropriate and allows us to access our containerized application.

      So check the box to say create a new security group and then for security group name and description, use container of cats -sg.

      We need to make sure that the rule on this security group is appropriate.

      So under type select HTTP and then under source change this to anywhere.

      And this will mean that anyone can access this containerized application.

      Finally make sure that public IP is turned on.

      This is really important because this is how we'll access our containerized application.

      Everything else looks good.

      We can scroll down to the bottom and click on create.
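      The same launch can be expressed as a CLI call; the subnet and security group IDs below are placeholders for whatever exists in your default VPC.

          aws ecs run-task \
            --cluster allthecats \
            --launch-type FARGATE \
            --task-definition containerofcats \
            --count 1 \
            --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}'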

      Now give that a couple of seconds.

      It should initially show last status.

      So the last status should be set to provisioning and the desired state should be set to running.

      So we need to wait for this task provisioning to complete.

      So just keep hitting refresh.

      You'll see it first change into pending.

      Now at this point we need this task to be in a running state before we can continue.

      So go ahead and pause the video and wait for both of these states.

      So last status and desired status both of those need to be running before we continue.

      So pause the video, wait for both of those to change, and once they have you can resume and we'll continue.

      After another refresh the last status should now be running and in green and the desired state should also be running.

      So at that point we're good to go.

      We can click on the task link below.

      We can scroll down and our task has been allocated a private IP version 4 address in the default VPC and also a public IP version 4 address, also in the default VPC.

      So if we copy this public IP into our clipboard and then open a new tab and browse to this IP we'll see our very corporate professional web application.

      If it fits, I sits in a container in a container.

      So we've taken a Docker image that we created earlier in this section of the course.

      We've created a Fargate cluster, created a task definition with a container definition inside and deployed our container image as a container to this Fargate cluster.

      So it's a very simple example, but again this scales.

      So you could deploy Docker containers which are a lot more complex in what functionality they offer.

      In this case it's just an Apache web server loading up a web page but we could deploy any type of web application using the same steps that you've performed in this demo lesson.

      So congratulations, you've learned all of the theory that you'll need for the exam and you've taken the steps to implement this theory in practice by deploying a Docker image as a container on an ECS Fargate cluster.

      So great job.

      At this point all that remains is to tidy up.

      So go back to the AWS console.

      Just stop this container.

      Click on stop.

      Click on task definitions and then go into this task definition.

      Select this.

      Click on actions, deregister and then click on deregister.

      Click back on task definitions and make sure there's no results there.

      That's good.

      Click on clusters.

      Click on all the cats.

      Delete the cluster.

      You'll need to type delete space all the cats and then click on delete to confirm.

      And at that point the Fargate cluster has been deleted.

      The running container has been stopped.

      The task definition has been deleted and our account is back in the same state as when we started.
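      For completeness, the equivalent clean-up from the CLI would look something like this (the task ID and revision number are placeholders):

          aws ecs stop-task --cluster allthecats --task <task-id>
          aws ecs deregister-task-definition --task-definition containerofcats:1
          aws ecs delete-cluster --cluster allthecats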

      So at this point you've completed the demo.

      You've done great and you've implemented some pretty complex theory.

      So you should already have a head start on any exam questions which involve ECS.

      We're going to be using ECS a lot more as we move through the course and we're going to be using it in some of the Animals for Life demos as we implement progressively more complex architectures later on in the course.

      For now I just wanted to give you the basics but you've done really well if you've implemented this successfully without any issues.

      So at this point go ahead, complete this video and when you're ready join me in the next.

    1. Hoover had entered office with widespread popular support, but by the end of 1929 the economic collapse had overwhelmed his presidency. Hoover and his advisors assumed, and then desperately hoped, that the sharp economic decline was just a temporary downturn; part of the inevitable boom-bust cycles that stretched back through America’s commercial history.

      I think it's interesting that the economic collapse had such a devastating effect on his presidency.

    2. Despite serious problems in the industrial and agricultural economies, most Americans in 1929 and 1930 believed the nation would bounce back quickly. President Herbert Hoover reassured an audience in 1930 that “the depression is over.” But the president was not simply guilty of false optimism. Hoover had made many mistakes. During his 1928 election campaign, he had promoted higher tariffs to encourage consumption of U.S.-produced products and to protect American farmers from foreign competition. Spurred by the ongoing agricultural depression, Hoover signed the highest tariff in American history, the Smoot-Hawley Tariff of 1930, just as global markets began to crumble. Other countries retaliated and tariff walls rose across the globe. Between 1929 and 1932, international trade dropped from $36 billion to only $12 billion. American exports fell by 78%.

      I found this passage really shocking. It’s surprising that many people thought the economy would bounce back quickly when things were so bad. Hoover saying “the depression is over” feels almost unreal. The Smoot-Hawley Tariff made things worse, causing trade to drop a lot. This shows how quickly hope can turn into trouble, especially with bad decisions.

    3. Although the belief that economic prosperity was universal was exaggerated at the time and has been overstated by many historians, excitement over the stock market and the possibility of making speculative fortunes permeated popular culture in the 1920s. A Hollywood musical, High Society Blues, captured the hope of instant prosperity. Ironically, the movie didn’t reach theaters until after the market crash. “I’m in the Market for You,” a musical number from the film, used the stock market as a metaphor for love: You’re going up, up, up in my estimation / I want a thousand shares of your caresses, too / We’ll count the hugs and kisses / When dividends are due / ’Cause I’m in the market for you. But just as the song was being recorded in 1929, the stock market reached its peak, crashed, and brought an abrupt end to the seeming prosperity of the Roaring Twenties. The Great Depression had arrived.

      I found this pretty funny! The idea of using stock market terms for love is clever, but it’s ironic that the song came out just before the stock market crashed. It’s surprising how quickly the mood changed from excitement to the Great Depression.

    1. Reviewer #1 (Public Review):

      This paper proposes a novel framework for explaining patterns of generalization of force field learning to novel limb configurations. The paper considers three potential coordinate systems: cartesian, joint-based, and object-based. The authors propose a model in which the forces predicted under these different coordinate frames are combined according to the expected variability of produced forces. The authors show, across a range of changes in arm configurations, that the generalization of a specific force field is quite well accounted for by the model.
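      For concreteness, a reliability-weighted combination of this kind - written in my own notation rather than necessarily the authors' exact formulation - would take a form such as:

          w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2}, \qquad \hat{F}(\theta) = \sum_i w_i \, F_i(\theta), \quad i \in \{\text{Cartesian, joint, object}\}

      where \sigma_i^2 is the expected variability of the forces produced under coordinate frame i and F_i(\theta) is the force that frame predicts for reach direction \theta.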

      The paper is well-written and the experimental data are very clear. The patterns of generalization exhibited by participants - the key aspect of the behavior that the model seeks to explain - are clear and consistent across participants. The paper clearly illustrates the importance of considering multiple coordinate frames for generalization, building on previous work by Berniker and colleagues (JNeurophys, 2014). The specific model proposed in this paper is parsimonious, but there remain a number of questions about its conceptual premises and the extent to which its predictions improve upon alternative models.

      A major concern is with the model's premise. It is loosely inspired by cue integration theory but is really proposed in a fairly ad hoc manner, and not really concretely founded on firm underlying principles. It's by no means clear that the logic from cue integration can be extrapolated to the case of combining different possible patterns of generalization. I think there may in fact be a fundamental problem in treating this control problem as a cue-integration problem. In classic cue integration theory, the various cues are assumed to be independent observations of a single underlying variable. In this generalization setting, however, the different generalization patterns are NOT independent; if one is true, then the others must inevitably not be. For this reason, I don't believe that the proposed model can really be thought of as a normative or rational model (hence why I describe it as 'ad hoc'). That's not to say it may not ultimately be correct, but I think the conceptual justification for the model needs to be laid out much more clearly, rather than simply by alluding to cue-integration theory and using terms like 'reliability' throughout.

      A more rational model might be based on Bayesian decision theory. Under such a model, the motor system would select motor commands that minimize some expected loss, averaging over the various possible underlying 'true' coordinate systems in which to generalize. It's not entirely clear without developing the theory a bit exactly how the proposed noise-based theory might deviate from such a Bayesian model. But the paper should more clearly explain the principles/assumptions of the proposed noise-based model and should emphasize how the model parallels (or deviates from) Bayesian-decision-theory-type models.

      Another significant weakness is that it's not clear how closely the weighting of the different coordinate frames needs to match the model predictions in order to recover the observed generalization patterns. Given that the weighting for a given movement direction is over-parametrized (i.e. there are 3 variable weights (allowing for decay) predicting a single observed force level), it seems that a broad range of models could generate a reasonable prediction. It would be helpful to compare the predictions using the weighting suggested by the model with the predictions using alternative weightings, e.g. a uniform weighting, or the weighting for a different posture. In fact, Fig. 7 shows that uniform weighting accounts for the data just as well as the noise-based model in which the weighting varies substantially across directions. A more comprehensive analysis comparing the proposed noise-based weightings to alternative weightings would be helpful to more convincingly argue for the specificity of the noise-based predictions being necessary. The analysis in the appendix was not that clearly described, but it seemed to compare various potential fitted mixtures of coordinate frames without comparing these to the noise-based model predictions.

    2. Reviewer #2 (Public Review):

      Leib & Franklin assessed how the adaptation of intersegmental dynamics of the arm generalizes to changes in different factors: areas of extrinsic space, limb configurations, and 'object-based' coordinates. Participants reached in many different directions around 360{degree sign}, adapting to velocity-dependent curl fields that varied depending on the reach angle. This learning was measured via the pattern of forces expressed in upon the channel wall of "error clamps" that were randomly sampled from each of these different directions. The authors employed a clever method to predict how this pattern of forces should change if the set of targets was moved around the workspace. Some sets of locations resulted in a large change in joint angles or object-based coordinates, but Cartesian coordinates were always the same. Across three separate experiments, the observed shifts in the generalized force pattern never corresponded to a change that was made relative to any one reference frame. Instead, the authors found that the observed pattern of forces could be explained by a weighted combination of the change in Cartesian, joint, and object-based coordinates across test and training contexts.

      In general, I believe the authors make a good argument for this specific mixed weighting of different contexts. I have a few questions that I hope are easily addressed.

      Movements show different biases relative to the reach direction. Although very similar across people, this function of biases shifts when the arm is moved around the workspace (Ghilardi, Gordon, and Ghez, 1995). The origin of these biases is thought to arise from several factors that would change across the different test and training workspaces employed here (Vindras & Viviani, 2005). My concern is that the baseline biases in these different contexts are different and that rather the observed change in the force pattern across contexts isn't a function of generalization, but a change in underlying biases. Baseline force channel measurements were taken in the different workspace locations and conditions, so these could be used to show whether such biases are meaningfully affecting the results.

      Experiment 3, Test 1 has data that seems the worst fit with the overall story. I thought this might be an issue, but this is also the test set for a potentially awkwardly long arm. My understanding of the object-based coordinate system is that it's primarily a function of the wrist angle, or perceived angle, so I am a little confused why the length of this stick is also different across the conditions instead of just a different angle. Could the length be why this data looks a little odd?

      The manuscript is written and organized in a way that focuses heavily on the noise element of the model. Other than it being reasonable to add noise to a model, it's not clear to me that the noise is adding anything specific. It seems like the model makes predictions based on how many specific components have been rotated in the different test conditions. I fear I'm just being dense, but it would be helpful to clarify whether the noise itself (and inverse variance estimation) are critical to why the model weights each reference frame how it does or whether this is just a method for scaling the weight by how much the joints or whatever have changed. It seems clear that this noise model is better than weighting by energy and smoothness.

      Are there any force profiles for individual directions that are predicted to change shape substantially across some of these assorted changes in training and test locations (rather than merely being scaled)? If so, this might provide another test of the hypotheses.

      I don't believe the decay factor that was used to scale the test functions was specified in the text, although I may have just missed this. It would be a good idea to state what this factor is where relevant in the text.

    1. Welcome back and in this demo lesson you're going to learn how to install the Docker engine inside an EC2 instance and then use that to create a Docker image.

      Now this Docker image is going to be running a simple application and we'll be using this Docker image later in this section of the course to demonstrate the Elastic Container service.

      So this is going to be a really useful demo where you're going to gain the experience of how to create a Docker image.

      Now there are a few things that you need to do before we get started.

      First, as always, make sure that you're logged in as the IAM admin user of the general AWS account, and you'll also need the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment link so go ahead and click that now.

      This is going to deploy an EC2 instance with some files pre downloaded that you'll use during the demo lesson.

      Now everything's pre-configured you just need to check this box at the bottom and click on create stack.

      Now that's going to take a few minutes to create and we need this to be in a create complete state.

      So go ahead and pause the video wait for your stack to move into create complete and then we're good to continue.

      So now this stack is in a create complete state and we're good to continue.

      Now if you're following along with this demo within your own environment there's another link attached to this lesson called the lesson commands document and that will include all of the commands that you'll need to type as you move through the demo.

      Now I'm a fan of typing all commands in manually because I personally think that it helps you learn, but if you are the type of person who has a habit of making mistakes when typing long commands out, then you can copy and paste from this document to avoid any typos.

      Now one final thing before we finish at the end of this demo lesson you'll have the opportunity to upload the Docker image that you create to Docker Hub.

      If you're going to do that then you should pre sign up for a Docker Hub account if you don't already have one and the link for this is included attached to this lesson.

      If you already have a Docker Hub account then you're good to continue.

      Now at this point what we need to do is to click on the resources tab of this stack and locate the public EC2 resource.

      Now this is a normal EC2 instance that's been provisioned on your behalf and it has some files which have been pre downloaded to it.

      So just go ahead and click on the physical ID next to public EC2 and that will move you to the EC2 console.

      Now this machine is set up and ready to connect to and I've configured it so that we can connect to it using Session Manager and this avoids the need to use SSH keys.

      So to do that just right-click and then select connect.

      You need to pick Session Manager from the tabs across the top here and then just click on connect.

      Now that will take a few minutes but once connected you should see this prompt.

      So it should say sh-, then a version number, and then a dollar sign.

      Now the first thing that we need to do as part of this demo lesson is to install the Docker engine.

      The Docker engine is the thing that allows Docker containers to run on this EC2 instance.

      So we need to install the Docker engine package and we'll do that using this command.

      So we're using sudo to get admin permissions, then the package manager DNF, then install, then Docker.

      So go ahead and run that and that will begin the installation of Docker.

      It might take a few moments to complete, it might have to download some prerequisites, and you might have to confirm that you're okay with the install.

      So press Y for yes and then press enter.

      Now we need to wait a few moments for this install process to complete and once it has completed then we need to start the Docker service and we do that using this command.

      So sudo again to get admin permissions, and then service, and then the Docker service, and then start.

      So type that and press enter and that starts the Docker service.
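      For reference, the two commands just described look something like this; this is a sketch based on the narration, and the exact commands are in the lesson commands document:

      ```bash
      # Install the Docker engine package using the DNF package manager
      sudo dnf install docker

      # Start the Docker service so containers can run on this EC2 instance
      sudo service docker start
      ```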

      Now I'm going to type clear and then press enter to make this easier to see and now we need to test that we can interact with the Docker engine.

      So the most simple way to do that is to type Docker space and then PS and press enter.

      Now you're going to get an error.

      This error is because not every user of this EC2 instance has the permissions to interact with the Docker engine.

      We need to grant permissions for this user or any other users of this EC2 instance to be able to interact with the Docker engine and we're going to do that by adding these users to a group and we do that using this command.

      So sudo for admin permissions, and then usermod, -a -G for group, and then the docker group, and then ec2-user.

      Now that will allow a local user of this system, specifically ec2-user, to be able to interact with the Docker engine.
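      Based on that description, the group change looks something like the following sketch; ec2-user is the local user the instance uses, and the exact command is in the lesson commands document:

      ```bash
      # Add ec2-user to the docker group so it can talk to the Docker engine
      sudo usermod -a -G docker ec2-user

      # After logging out and back in, this should return an empty list of containers
      docker ps
      ```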

      Okay so I've cleared the screen to make it slightly easier to see, now that we've given ec2-user the ability to interact with Docker.

      So the next thing is we need to log out of and back into this instance.

      So I'm going to go ahead and type exit just to disconnect from session manager and then click on close and then I'm going to reconnect to this instance and you need to do the same.

      So connect back in to this EC2 instance.

      Now once you're connected back into this EC2 instance we need to run another command which moves us into ec2-user, so it basically logs us in as ec2-user.

      So that's this command, and the result of this would be the same as if you directly logged in as ec2-user.

      Now the reason we're doing it this way is because we're using session manager so that we don't need a local SSH client or to worry about SSH keys.

      We can directly log in via the console UI; we just then need to switch to ec2-user.

      So run this command and press enter, and we're now logged into the instance as ec2-user. To test everything's okay we need to use a command with the Docker engine, and that command is docker space ps. If everything's okay you shouldn't see any output beyond this list of headers.

      What we've essentially done is told the Docker engine to give us a list of any running containers and even though we don't have any it's not erred it's simply displayed this empty list and that means everything's okay.

      So good job.

      Now what I've done to speed things up if you just run an LS and press enter the instance has been configured to download the sample application that we're going to be using and that's what the file container.zip is within this folder.

      I've configured the instance to automatically extract that zip file which has created the folder container.

      So at this point I want you to go ahead and type cd space container and press enter and that's going to move you inside this container folder.

      Then I want you to clear the screen by typing clear and press enter and then type ls space -l and press enter.

      Now this is the web application which I've configured to be automatically downloaded to the EC2 instance.

      It's a simple web page we've got index.html which is the index we have a number of images which this index.html contains and then we have a docker file.

      Now this docker file is the thing that the docker engine will use to create our docker image.

      I want to spend a couple of moments just stepping you through exactly what's within this docker file.

      So I'm going to move across to my text editor and this is the docker file that's been automatically downloaded to your EC2 instance.

      Each of these lines is a directive to the docker engine to perform a specific task and remember we're using this to create a docker image.

      This first line tells the docker engine that we want to use version 8 of the Red Hat Universal base image as the base component for our docker image.

      This next line sets the maintainer label it's essentially a brief description of what the image is and who's maintaining it in this case it's just a placeholder of animals for life.

      This next line runs a command specifically the yum command to install some software specifically the Apache web server.

      This next command copy copies files from the local directory when you use the docker command to create an image so it's copying that index.html file from this local folder that I've just been talking about and it's going to put it inside the docker image in this path so it's going to copy index.html to /var/www/html and this is where an Apache web server expects this index.html to be located.

      This next command is going to do the same process for all of the jpegs in this folder so we've got a total of six jpegs and they're going to be copied into this folder inside the docker image.

      This line sets the entry point and this essentially determines what is first run when this docker image is used to create a docker container.

      In this example it's going to run the Apache web server and finally this expose command can be used for a docker image to tell the docker engine which services should be exposed.

      Now this doesn't actually perform any configuration it simply tells the docker engine what port is exposed in this case port 80 which is HTTP.
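      Putting those directives together, the Dockerfile looks roughly like this. Treat it as a sketch: the exact base image tag, file names and label text are assumptions, and the real file is the one already downloaded to the instance.

      ```dockerfile
      # Assumed sketch of the Dockerfile described above; exact names may differ
      FROM redhat/ubi8
      LABEL maintainer="Animals for Life"

      # Install the Apache web server into the image
      RUN yum -y install httpd

      # Copy the web page and images from the local build folder into the web root
      COPY index.html /var/www/html/
      COPY *.jpg /var/www/html/

      # Run Apache in the foreground when a container is started from this image
      ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]

      # Document that the container serves HTTP on port 80
      EXPOSE 80
      ```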

      Now this docker file is going to be used when we run the next command which is to create a docker image.

      So essentially this file is the same docker file that's been downloaded to your EC2 instance and that's what we're going to run next.

      So this is the next command within the lesson commands document and this command builds a container image.

      What we're essentially doing is giving it the location of the docker file.

      This dot at the end contains the working directory so it's here where we're going to find the docker file and any associated files that that docker file uses.

      So we're going to run this command and this is going to create our docker image.
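      The build command being described follows this general pattern; the image name containerofcats is taken from the later steps of this lesson and the exact name in the lesson commands document may differ slightly:

      ```bash
      # Build an image named containerofcats from the Dockerfile in the current directory
      docker build -t containerofcats .
      ```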

      So let's go ahead and run this command.

      It's going to download version 8 of UBI which it will use as a starting point and then it's going to run through every line in the docker file performing each of the directives and each of those directives is going to create another layer within the docker image.

      Remember from the theory lesson each line within the docker file generally creates a new file system layer so a new layer of a docker image and that's how docker images are efficient because you can reuse those layers.

      Now in this case this has been successful.

      We've successfully built a docker image with this ID so it's giving it a unique ID and it's tagged this docker image with this tag colon latest.

      So this means that we have a docker image that's now stored on this EC2 instance.

      Now I'll go ahead and clear the screen to make it easier to see and let's go ahead and run the next command which is within the lesson commands document and this is going to show us a list of images that are on this EC2 instance but we're going to filter based on the name container of cats and this will show us the docker image which we've just created.
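      The listing command being described is along these lines, assuming the image was tagged containerofcats during the build:

      ```bash
      # List local images, filtered to just the containerofcats image we built
      docker images --filter reference=containerofcats
      ```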

      So the next thing that we need to do is to use the docker run command which is going to take the image that we've just created and use it to create a running container and it's that container that we're going to be able to interact with.

      So this is the command that we're going to use it's the next one within the lesson commands document.

      It's docker run and then it's telling it to map port 80 on the container with port 80 on the EC2 instance and it's telling it to use the container of cats image and if we run that command docker is going to take the docker image that we've got on this EC2 instance run it to create a running container and we should be able to interact with that container.
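      A sketch of that run command, mapping port 80 on the instance to port 80 in the container and assuming the containerofcats image name:

      ```bash
      # Start a container from the containerofcats image, exposing it on port 80
      docker run -p 80:80 containerofcats
      ```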

      So if you go back to the AWS console if we click on instances so look for a4l-public EC2 that's in the running state.

      I'm just going to go ahead and select this instance so that we can see the information and we need the public IP address of this instance.

      Go ahead and click on this icon to copy the public IP address into your clipboard and then open that in a new tab.

      Now be sure not to use this link to the right because that's got a tendency to open the HTTPS version.

      We just need to use the IP address directly.

      So copy that into your clipboard, open a new tab, and then open that IP address, and now we can see the amazing application: "if it fits, i sits", in a container, in a container. This amazing looking enterprise application is what's contained in the docker image that you just created, and it's now running inside a container based off that image.

      So that's great everything's working as expected and that's running locally on the EC2 instance.

      Now in the demo lesson for the elastic container service that's coming up later in this section of the course you have two options.

      You can either use my docker image which is this image that I've just created or you can use your own docker image.

      If you're going to use my docker image then you can skip this next step.

      You don't need a docker hub account and you don't need to upload your image.

      If you want to use your own image then you do need to follow these next few steps and I need to follow them anyway because I need to upload this image to docker hub so that you can potentially use it rather than your own image.

      So I'm going to move back to the session manager tab and I'm going to control C to exit out of this running container and I'm going to type clear to clear the screen and make it easier to see.

      Now to upload this to docker hub first you need to log in to docker hub using your credentials and you can do that using this command.

      So it's docker space login space double hyphen username equals and then your username.

      So if you're doing this in your own environment you need to delete this placeholder and type your username.

      I'm going to type my username because I'll be uploading this image to my docker hub.

      So this is my docker hub username and then press enter and it's going to ask for the corresponding password to this username.

      So I'm going to paste in my password if you're logging into your docker hub you should use your password.

      Once you've pasted in the password go ahead and press enter and that will log you in to docker hub.

      Now you don't have to worry about the security message because whilst your docker hub password is going to be stored on the EC2 instance shortly we're going to terminate this instance which will remove all traces of this password from this machine.

      Okay so again we're going to upload our docker image to docker hub so let's run this command again and you'll see because we're just using the docker images command we can see the base image as well as our image.

      So we can see red hat UBI 8.

      We want the container of cats latest though so what you need to do is copy down the image ID of the container of cats image.

      So this is the top line in my case container of cats latest and then the image ID.

      So then we need to run this command so docker space tag and then the image ID that you've just copied into your clipboard and then a space and then your docker hub username.

      In my case it's actrl with 1L if you're following along you need to use your own username and then forward slash and then the name of the image that you want this to be stored as on docker hub so I'm going to use container of cats.

      So that's the command you need to use so docker tag and then your image ID for container of cats and then your username forward slash container of cats and press enter and that's everything we need to do to prepare to upload this image to docker hub.

      So the last command that we need to run is the command to actually upload the image to docker hub and that command is docker space push so we're going to push the image to docker hub then we need to specify the docker hub username so again this is my username but if you're doing this in your environment it needs to be your username and then forward slash and then the image name in my case container of cats and then colon latest and once you've got all that go ahead and press enter and that's going to push the docker image that you've just created up to your docker hub account and once it's up there it means that we can deploy from that docker image to other EC2 instances and even ECS and we're going to do that in a later demo in this section of the course.
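      The login, tag and push steps described above follow this general pattern; YOUR_USER and IMAGE_ID are placeholders for your own Docker Hub username and the image ID copied from docker images:

      ```bash
      # Log in to Docker Hub (you will be prompted for your password)
      docker login --username=YOUR_USER

      # Tag the local image ID so it is associated with your Docker Hub repository
      docker tag IMAGE_ID YOUR_USER/containerofcats

      # Push the tagged image up to Docker Hub
      docker push YOUR_USER/containerofcats:latest
      ```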

      Now that's everything that you need to do in this demo lesson you've essentially installed and configured the docker engine you've used a docker file to create a docker image from some local assets you've tested that docker image by running a container using that image and then you've uploaded that image to docker hub and as I mentioned before we're going to use that in a future demo lesson in this section of the course.

      Now the only thing that remains to do is to clear up the infrastructure that we've used in this demo lesson so go ahead and close down all of these extra tabs and go back to the cloud formation console this is the stack that's been created by the one click deployment link so all you need to do is select this stack it should be called EC2 docker and then click on delete and confirm that deletion and that will return the account into the same state as it was at the start of this demo lesson.

      Now that is everything you need to do in this demo lesson I hope it's been useful and I hope you've enjoyed it so go ahead and complete the video and when you're ready I look forward to you joining me in the next.

    1. Welcome back and in this very brief demo lesson, I just want to demonstrate a very specific feature of EC2 known as termination protection.

      Now you don't have to follow along with this in your own environment, but if you are, you should still have the infrastructure created from the previous demo lesson.

      And also if you are following along, you need to be logged in as the IAM admin user of the general AWS account.

      So the management account of the organization and have the Northern Virginia region selected.

      Now again, this is going to be very brief.

      So it's probably not worth doing in your own environment unless you really want to.

      Now what I want to demonstrate is termination protection.

      So I'm going to go ahead and move to the EC2 console where I still have an EC2 instance running created in the previous demo lesson.

      Now normally if I right click on this instance, I'm given the ability to stop the instance, to reboot the instance or to terminate the instance.

      And this is assuming that the instance is currently in a running state.

      Now if I go to terminate instance, straight away I'm presented with a dialogue where I need to confirm that I want to terminate this instance.

      But it's easy to imagine that somebody who's less experienced with AWS can go ahead and terminate that and then click on terminate to confirm the process without giving it much thought.

      And that can result in data loss, which isn't ideal.

      What you can do to add another layer of protection is to right click on the instance, go to instance settings, and then change termination protection.

      If you click that option, you get this dialogue where you can enable termination protection.

      So I'm going to do that, I'm going to enable termination protection because this is an essential website for animals for life.

      So I'm going to enable it and click on save.

      And now that instance is protected against termination.

      If I right click on this instance now and go to terminate instance and then click on terminate, I get a dialogue that I'm unable to terminate the instance.

      The instance (and then the instance ID) may not be terminated; modify its 'disableApiTermination' instance attribute and then try again.

      So this instance is now protected against accidental termination.
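      If you prefer the command line, the same protection can be toggled with the AWS CLI; the instance ID below is a placeholder:

      ```bash
      # Enable termination protection on an instance
      aws ec2 modify-instance-attribute \
          --instance-id i-0123456789abcdef0 \
          --disable-api-termination

      # Disable it again when the instance genuinely needs to be terminated
      aws ec2 modify-instance-attribute \
          --instance-id i-0123456789abcdef0 \
          --no-disable-api-termination
      ```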

      Now this presents a number of advantages.

      One, it protects against accidental termination, but it also adds a specific permission that is required in order to terminate an instance.

      So you need the permission to disable this termination protection in addition to the permissions to be able to terminate an instance.

      So you have the option of role separation.

      You can either require people to have both the permissions to disable termination protection and permissions to terminate, or you can give those permissions to separate groups of people.

      So you might have senior administrators who are the only ones allowed to remove this protection, and junior or normal administrators who have the ability to terminate instances, and that essentially establishes a process where a senior administrator is required to disable the protection before instances can be terminated.

      It adds another approval step to this process, and it can be really useful in environments which contain business critical EC2 instances.

      So you might not have this for development and test environments, but for anything in production, this might be a standard feature.

      If you're provisioning instances automatically using cloud formation or other forms of automation, this is something that you can enable in an automated way as instances are launching.

      So this is a really useful feature to be aware of.

      And for the SysOps exam, it's essential that you understand when and where you'd use this feature.

      And for both the SysOps and the developer exams, you should pay attention to this 'disableApiTermination' attribute.

      You might be required to know which attribute needs to be modified in order to allow terminations.

      So really for both of the exams, just make sure that you're aware of exactly how this process works end to end, specifically the error message that you might get if this attribute is enabled and you attempt to terminate an instance.

      At this point though, that is everything that I wanted to cover about this feature.

      So right click on the instance, go to instance settings, change the termination protection and disable it, and then click on save.

      One other feature which I want to introduce quickly, if we right click on the instance, go to instance settings, and then change shutdown behavior, you're able to specify whether an instance should move into a stop state when shut down, or whether you want it to move into a terminate state.

      Now logically, the default is stop, but if you are running an environment where you don't want to consider the state of an instance to be valuable, then potentially you might want it to terminate when it shuts down.

      You might not want to have an account with lots of stopped instances.

      You might want the default behavior to be terminate, but this is a relatively niche feature, and in most cases, you do want the shutdown behavior to be stop rather than terminate, but it's here where you can change that default behavior.
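      The shutdown behavior attribute can also be changed from the CLI, along these lines; again the instance ID is a placeholder:

      ```bash
      # Make the instance terminate (rather than stop) when shut down from inside the OS
      aws ec2 modify-instance-attribute \
          --instance-id i-0123456789abcdef0 \
          --attribute instanceInitiatedShutdownBehavior \
          --value terminate
      ```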

      Now at this point, that is everything I wanted to cover.

      If you were following along with this in your own environment, you do need to clear up the infrastructure.

      So click on the services dropdown, move to cloud formation, select the status checks and protect stack, and then click on delete and confirm that by clicking delete stack.

      And once this stack finishes deleting all of the infrastructure that's been used during this demo and the previous one will be cleared from the AWS account.

      If you've just been watching, you don't need to worry about any of this process, but at this point, we're done with this demo lesson.

      So go ahead, complete the video, and once you're ready, I'll look forward to you joining me in the next.

    1. Welcome back and in this demo lesson either you're going to get the experience or you can watch me interacting with an Amazon machine image.

      So we created an Amazon machine image or AMI in a previous demo lesson and if you recall it was customized for animals for life.

      It had an install of WordPress and it had the Kause application installed and a custom login banner.

      Now this is a really simple example of an AMI but I want to step you through some of the options that you have when dealing with AMIs.

      So if we go to the EC2 console and if you are following along with this in your own environment do make sure that you're logged in as the IAM admin user of the general AWS account, so the management account of the organization and you have the Northern Virginia region selected.

      The reason for being so specific about the region is that AMIs are regional entities so you create an AMI in a particular region.

      So if I go and select AMIs under images within the EC2 console I'll see the animals for life AMI that I created in a previous demo lesson.

      Now if I go ahead and change the region, maybe from Northern Virginia, which is us-east-1, to Ohio, which is us-east-2, what we'll see is we'll go back to the same area of the console, only now we won't see any AMIs; that's because an AMI is tied to the region in which it's created.

      Every AMI belongs in one region and it has a unique AMI ID.

      So let's move back to Northern Virginia.

      Now we are able to copy AMIs between regions this allows us to make one AMI and use it for a global infrastructure platform so we can right-click and select copy AMI then select the destination region and then for this example let's say that I did want to copy it to Ohio then I would select that in the drop-down it would allow me to change the name if I wanted or I could keep it the same for description it would show that it's been copied from this AMI ID in this region and then it would have the existing description at the end.

      So at this point I'm going to go ahead and click copy AMI, and that process has now started. If I close down this dialogue and then change the region from us-east-1 to us-east-2, we now have a pending AMI, and this is the AMI that's being copied from the us-east-1 region into this region. If we go ahead and click on snapshots under elastic block store, then we're going to see the snapshot or snapshots which belong to this AMI.

      Now depending on how busy AWS is it can take a few minutes for the snapshots to appear on this screen just go ahead and keep refreshing until they appear.

      In our case we only have the one which is the boot volume that's used for our custom AMI.

      Now the time taken to copy a snapshot between regions depends on many factors what the source and destination region are and the distance between the two the size of the snapshot and the amount of data it contains and it can take anywhere from a few minutes to much much longer so this is not an immediate process.
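      The console copy is roughly equivalent to a CLI call like this; the AMI ID and the name are placeholders:

      ```bash
      # Copy an AMI from us-east-1 into us-east-2, creating a brand new AMI in the destination region
      aws ec2 copy-image \
          --source-region us-east-1 \
          --source-image-id ami-0123456789abcdef0 \
          --name "Animals for Life AMI copy" \
          --region us-east-2
      ```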

      Once the snapshot copy completes then the AMI copy process will complete and that AMI is then available in the destination region but an important thing that I want to keep stressing throughout this course is that this copied AMI is a completely different AMI.

      AMIs are regional don't fall for any exam questions which attempt to have you use one AMI for several regions.

      If we're copying this animals for life AMI from one region to another region in effect we're creating two different AMIs.

      So take note of this AMI ID in this region, and if we switch back to the original source region, us-east-1, note how this AMI has a different ID. They are completely different AMIs; you're creating a new one as part of the copy process.

      So while the data is going to be the same conceptually they are completely separate objects and that's critical for you to understand both for production usage and when answering any exam questions.

      Now while that's copying I want to demonstrate the other important thing which I wanted to show you in this demo lesson and that's permissions of AMIs.

      So if I right-click on this AMI and edit AMI permissions by default an AMI is private.

      Being private means that it's only accessible within the AWS account which has created the AMI and so only identities within that account that you grant permissions are able to access it and use it.

      Now you can change the permission of the AMI you could set it to be public and if you set it to public it means that any AWS account can access this AMI and so you need to be really careful if you select this option because you don't want any sensitive information contained in that snapshot to be leaked to external AWS accounts.

      A much safer way is if you do want to share the AMI with anyone else then you can select private but explicitly add other AWS accounts to be able to interact with this AMI.

      So I could click in this box and then for example if I clicked on services and I just moved to the AWS organization service I'll open that in a new tab and let's say that I chose to share this AMI with my production account so I selected my production account ID and then I could add this into this box which would grant my production AWS account the ability to access this AMI.

      Now note that there's also this checkbox, and this adds create volume permissions to the snapshots associated with this AMI, so this is something that you need to keep in mind.

      Generally if you are sharing an AMI to another account inside your organization then you can afford to be relatively liberal with permissions so generally if you're sharing this internally I would definitely check this box and that gives full permissions on the AMI as well as the snapshots so that anyone can create volumes from those snapshots as well as accessing the AMI.
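      Granting a specific account launch permissions on the AMI (rather than making it public) can also be done with the CLI; the AMI ID and account ID below are placeholders:

      ```bash
      # Allow a specific AWS account to launch instances from this AMI
      aws ec2 modify-image-attribute \
          --image-id ami-0123456789abcdef0 \
          --launch-permission "Add=[{UserId=111122223333}]"
      ```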

      So these are all things that you need to consider.

      Generally it's much preferred to explicitly grant an AWS account permissions on an AMI rather than making that AMI public.

      If you do make it public you need to be really sure that you haven't leaked any sensitive information, specifically access keys.

      While you do need to be careful of that as well if you're explicitly sharing it with accounts, generally if you're sharing it with accounts then you're going to be sharing it with trusted entities.

      You need to be very very careful if ever you're using this public option and I'll make sure I include a link attached to this lesson which steps through all of the best practice steps that you need to follow if you're sharing an AMI publicly.

      There are a number of really common steps that you can use to minimize lots of common security issues and that's something you should definitely do if you're sharing an AMI.

      Now if you want to, you could also share an AMI with an organizational unit or organization, and you can do that using this option.

      This makes it easier if you want to share an AMI with all AWS accounts within your organization.

      At this point though I'm not going to do that we don't need to do that in this demo.

      What we're going to do now though is move back to US-East-2.

      That's everything I wanted to cover in this demo lesson.

      Now this AMI is available, we can right click and select deregister, then move back to US-East-1, and now that we've done this demo lesson we can do the same process with this AMI.

      So we can right click, select deregister, and that will remove that AMI.

      Click on snapshots; this is the snapshot created by this AMI, so we need to delete this as well. Right click, delete that snapshot, confirm that, and we'll need to do the same process in the region that we copied the AMI and the snapshots to.

      So select US-East-2; it should be the only snapshot in the region, but make sure it is the correct one, right click, delete, confirm that deletion, and now you've cleared up all of the extra things created within this demo lesson.

      Now that's everything that I wanted to cover I just wanted to give you an overview of how to work with AMIs from the console UI from a copying and sharing perspective.

      Go ahead and complete this video and when you're ready I look forward to you joining me in the next.

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      So the first step is to shut down this instance.

      So we don't want to create an AMI from a running instance because that can cause consistency issues.

      So we're going to close down this tab.

      We're going to return to instances, right-click, and we're going to stop the instance.

      We need to acknowledge this and then we need to wait for the instance to change into the stopped state.

      It will start with stopping.

      We'll need to refresh it a few times.

      There we can see it's now in a stopped state and to create the AMI, we need to right-click on that instance, go down to Image and Templates, and select Create Image.

      So this is going to create an AMI.

      And first we need to give the AMI a name.

      So let's go ahead and use Animals for Life template WordPress.

      And we'll use the same for Description.

      Now what this process is going to do is it's going to create a snapshot of any of the EBS volumes, which this instance is using.

      It's going to create a block device mapping, which maps those snapshots onto a particular device ID.

      And it's going to use the same device ID as this instance is using.

      So it's going to set up the storage in the same way.

      It's going to record that storage inside the AMI so that it's identical to the instance we're creating the AMI from.

      So you'll see here that it's using EBS.

      It's got the original device ID.

      The volume type is set to the same as the volume that our instance is using, and the size is set to 8.

      Now you can adjust the size during this process as well as being able to add volumes.

      But generally when you're creating an AMI, you're creating the AMI in the same configuration as this original instance.

      Now I don't recommend creating an AMI from a running instance because it can cause consistency issues.

      If you create an AMI from a running instance, it's possible that it will need to perform an instance reboot.

      You can force that not to occur, so create an AMI without rebooting.

      But again, that's even less ideal.

      The most optimal way for creating an AMI is to stop the instance and then create the AMI from that stopped instance, which will have fully consistent storage.

      So now that that's set, just scroll down to the bottom and go ahead and click on Create Image.
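      For reference, creating the AMI from the stopped instance can also be done with a CLI call along these lines; the instance ID is a placeholder:

      ```bash
      # Create an AMI (and the backing EBS snapshots) from the stopped instance
      aws ec2 create-image \
          --instance-id i-0123456789abcdef0 \
          --name "Animals for Life template WordPress" \
          --description "Animals for Life template WordPress"
      ```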

      Now that process will take some time.

      If we just scroll down, look under Elastic Block Store and click on Snapshots.

      You'll see that initially it's creating a snapshot of the boot volume of our original EC2 instance.

      So that's the first step.

      So in creating the AMI, what needs to happen is a snapshot of any of the EBS volumes attached to that EC2 instance.

      So that needs to complete first.

      Initially it's going to be in a pending state.

      We'll need to give that a few moments to complete.

      If we move to AMIs, we'll see that the AMI is also being created.

      It is in a pending state and it's waiting for that snapshot to complete.

      Now creating a snapshot is storing a full copy of any of the data on the original EBS volume.

      And the time taken to create a snapshot can vary.

      The initial snapshot always takes much longer because it has to take that full copy of data.

      And obviously the size of the original volume and how much data is being used will influence how long a snapshot takes to create.

      So the more data, the larger the volume, the longer the snapshot will take.

      After a few more refreshes, the snapshot moves into a completed status and if we move across to AMIs under images, after a few moments this too will change away from the pending status.

      So let's just refresh it.

      After a few moments, the AMI is now also in an available state and we're good to be able to use this to launch additional EC2 instances.

      So just to summarize, we've launched the original EC2 instance, we've downloaded, installed and configured WordPress, configured that custom banner.

      We've shut down the EC2 instance and generated an AMI from that instance.

      And now we have this AMI in a state where we can use it to create additional instances.

      So we're going to do that.

      We're going to launch an additional instance using this AMI.

      While we're doing this, I want you to consider exactly how much quicker this process now is.

      So what I'm going to do is to launch an EC2 instance from this AMI and note that this instance will have all of the configuration that we had to do manually, automatically included.

      So right click on this AMI and select launch.

      Now this will step you through the launch process for an EC2 instance.

      You won't have to select an AMI because obviously you are now explicitly using the one that you've just created.

      You'll be asked to select all of the normal configuration options.

      So first let's put a name for this instance.

      So we'll use the name "Instance from AMI".

      Then we'll scroll down.

      As I mentioned moments ago, we don't have to specify an AMI because we're explicitly launching this instance from an AMI.

      Scroll down.

      You'll need to specify an instance type just as normal.

      We'll use a free tier eligible instance.

      This is likely to be t2.micro or t3.micro.

      Below that, go ahead and click and select "Proceed without a key pair (not recommended)".

      Scroll down.

      We'll need to enter some networking settings.

      So click on Edit next to Network Settings.

      Click in VPC and select A4L-VPC1.

      Click in Subnet and make sure that SN-Web-A is selected.

      Make sure the boxes below are both set to enable for the auto-assign IP settings.

      Under Firewall, click on Select Existing Security Group.

      Click in the Security Groups drop down and select AMI-Demo-Instance Security Group.

      And that will have some random characters at the end.

      That's absolutely fine.

      Select that.

      Scroll down.

      And notice that the storage is configured exactly the same as the instance which you generated this AMI from.

      Everything else looks good.

      So we can go ahead and click on Launch Instance.

      So this is launching an instance using our custom created AMI.
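      The same launch can be expressed as a CLI call, roughly like this; every ID below is a placeholder for the values selected in the console (the custom AMI, the SN-Web-A subnet, and the AMI-Demo security group):

      ```bash
      # Launch a single instance from the custom AMI into the chosen subnet and security group
      aws ec2 run-instances \
          --image-id ami-0123456789abcdef0 \
          --instance-type t3.micro \
          --subnet-id subnet-0123456789abcdef0 \
          --security-group-ids sg-0123456789abcdef0 \
          --associate-public-ip-address \
          --count 1
      ```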

      So let's close down this dialog and we'll see the instance initially in a pending state.

      Remember, this is launching from our custom AMI.

      So it won't just have the base Amazon Linux 2 operating system.

      Now it's going to have that base operating system plus all of the custom configuration that we did before creating the AMI.

      So rather than having to perform that same WordPress download installation configuration and the banner configuration each and every time, now we've baked that in to the AMI.

      So now when we launch one instance, 10 instances, or 100 instances from this AMI, all of them are going to have this configuration baked in.

      So let's give this a few minutes to launch.

      Once it's launched, we'll select it, right click, select Connect, and then connect into it using EC2, Instance Connect.

      Now one thing you will need to change because we're using a custom AMI, AWS can't necessarily detect the correct username to use.

      And so you might see sometimes it says root.

      Just go ahead and change this to EC2-user and then go ahead and click Connect.

      And if everything goes well, you'll be connected into the instance and you'll see our custom Cowsay banner.

      So all that configuration is now baked in and it's automatically included whenever we use that AMI to launch an instance.

      If we go back to the AWS console and select instances, make sure we still have the instance from AMI selected and then locate its public IP version 4 address.

      Don't use this link because that will use HTTPS; instead, copy the IP address into your clipboard and open that in a new tab.

      Again, all being well, you should see the WordPress installation dialogue and that's because we've baked in the installation and the configuration into this AMI.

      So we've massively reduced the ongoing efforts required to launch an animals for life standard build configuration.

      If we use this AMI to launch hundreds or thousands of instances each and every time we're saving all the time and the effort required to perform this configuration and using an AMI is just one way that we can automate the build process of EC2 instances within AWS.

      And over the remainder of the course, I'm going to be demonstrating the other ways that you can use as well as comparing and contrasting the advantages and disadvantages of each of those methods.

      Now that's everything that I wanted to cover in this demo lesson.

      You've learned how to create an AMI and how to use it to save significant effort on an ongoing basis.

      So let's clear up all of the infrastructure that we've used in this lesson.

      So move back to the AWS console, close down this tab, go back to instances, and we need to manually terminate the instance that we created from our custom AMI.

      So right click and then go to terminate instance.

      You'll need to confirm that.

      That will start the process of termination.

      Now we're not going to delete the AMI or snapshots because there's a demo coming up later in this section of the course where you're going to get the experience of copying and sharing an AMI between AWS regions.

      So we're going to need to leave this in place.

      So we're not going to delete the AMI or the snapshots created within this lesson.

      Verify that that instance has been terminated and once it has, click on services, go to cloud formation, select the AMI demo stack, select delete and then confirm that deletion.

      And that will remove all of the infrastructure that we've created within this demo lesson.

      And at this point, that's everything that I wanted you to do in this demo.

      So go ahead, complete this video.

      And when you're ready, I'll look forward to you joining me in the next.

    1. Welcome back and in this demo lesson you'll be creating an AMI from a pre-configured EC2 instance.

      So you'll be provisioning an EC2 instance, configuring it with a popular web application stack and then creating an AMI of that pre-configured web application.

      Now you know in the previous demo where I said that you would be implementing the WordPress manual install once?

      Well I might have misled you slightly but this will be the last manual install of WordPress in the course, I promise.

      What we're going to do together in this demo lesson is create an Amazon Linux AMI for the animals for life business but one which includes some custom configuration and an install of WordPress ready and waiting to be initially configured.

      So this is a fairly common use case so let's jump in and get started.

      Now in order to perform this demo you're going to need some infrastructure, make sure you're logged into the general AWS account, so the management account of the organization and as always make sure that you have the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment link, go ahead and click that link.

      This will open the quick create stack screen, it should automatically be populated with the AMI demo as the stack name, just scroll down to the bottom, check this capabilities acknowledgement box and then click on create stack.

      We're going to need this stack to be in a create complete state so go ahead and pause the video and we can resume once the stack moves into create complete.

      Okay so that stacks now moved into a create complete state, we're good to continue with the demo.

      Now you're going to be using some command line commands within an EC2 instance as part of creating an Amazon machine image so also attached to this lesson is the lessons command document which contains all of those commands so go ahead and open that document.

      Now you might recognize these as the same commands that you used when you were performing a manual WordPress installation and that's the case we're running the same manual installation process as part of setting up our animals for life AMI so you're going to need all of these commands but as you've already experienced them in the previous demo lesson I'm going to run through them a lot quicker in this demo lesson so go back to the AWS console and we need to move to the EC2 area of the console so click on the services drop down, type EC2 into this search box and then open that in a new tab.

      Once you're there, go ahead and click on running instances and close down any dialogues about console changes; we want to maximize the amount of screen space that we have.

      We're going to connect to this A4L public EC2 instance. This is the instance that we're going to use to create our AMI, so we're going to set the instance up manually how we want it to be and then we're going to use it to generate an AMI.

      So we need to connect to this instance, so right click, select connect, and we're going to use EC2 Instance Connect to do the work within our browser, so make sure the username is ec2-user and then connect to this instance. Then once connected we're going to run through the commands to install WordPress really quickly.

      We're going to start again by setting the variables that we'll use throughout the installation, so you can just go ahead and copy and paste those straight in and press enter. Now we're going to run through all of the next set of commands really quickly because you used them in the previous demo lesson.

      So first we're going to go ahead and install the MariaDB server, Apache and the Wget utility. While that's installing, copy all of the commands from step 3; these are commands which enable and start Apache and MariaDB, so go ahead and paste all four of those in and press enter. So now Apache and MariaDB are both set to start when the instance boots, as well as being set to currently started. I'll just clear the screen to make this easier to see.

      Next we're going to set the DB root password; again that's this command, using the contents of the variable that you set at the start.

      Next we download WordPress. Once it's downloaded we move into the web root folder, we extract the download, and we copy the files from within the WordPress folder that we've just extracted into the current folder, which is the web root. Once we've done that we remove the WordPress folder itself and then we tidy up by deleting the download. I'm going to clear the screen.

      We copy the template configuration file into its final file name, so wp-config.php. Then we're going to replace the placeholders in that file: we're going to start with the database name, using the variable that you set at the start, next we're going to use the database user, which you also set at the start, and finally the database password. And then we're going to set the ownership on all of these files to be the Apache user and the Apache group. Clear the screen.

      Next we need to create the DB setup script that was demonstrated in the previous demo. So we need to run a collection of commands: the first to enter the create database command, the next one to enter the create user command and set that password, the next one to grant permissions on the database to that user, and then one to flush the permissions. Then we need to run that script using the MySQL command line interface, which runs all of those commands and performs all of those operations, and then we tidy up by deleting that file.

      Now at this point we've done the exact same process that we did in the previous demo; we've installed and set up WordPress. And if everything's working okay we can go back to the AWS console, click on instances, select the running a4l-public EC2 instance and copy down its IP address. Again, make sure you copy that down, don't click this link, and then open that in a new tab.

      If everything's working as expected you should see the WordPress installation dialogue. Now this time, because we're creating an AMI, we don't want to perform the installation; we want to make sure that when anyone uses this AMI they're also greeted with this installation dialogue. So we're going to leave this at this point, we're not going to perform the installation, and instead we're going to go back to the EC2 instance.

      Now because this EC2 instance is for the animals for life business, we want to customize it and make sure that everybody knows that this is an animals for life EC2 instance. To do that we're going to install an animal themed utility called cowsay. I'm going to clear the screen to make it easier to see, and then just to demonstrate exactly what cowsay does, I'm going to run cowsay "oh hi", and if all goes well we see a cow using ASCII art saying the "oh hi" message that we just typed.

      So we're going to use this to create a message of the day welcome when anyone connects to this EC2 instance. To do that we're going to create a file inside the configuration folder of this EC2 instance, so we're going to use sudo nano and we're going to create this file: /etc/update-motd.d/40-cow. This is the file that's going to be used to generate the output when anyone logs in to this EC2 instance. So we're going to copy in these two lines and then press enter, which means when anyone logs into the EC2 instance they're going to get an animal themed welcome (a sketch of what this script might contain is included at the end of this part). Then use control O to save that file and control X to exit, and clear the screen to make it easier to see.

      We're going to make sure that the file we've just edited has the correct permissions, then we're going to force an update of the message of the day, so this is going to be what's displayed when anyone logs into this instance. And then finally, now that we've completed this configuration, we're going to reboot this EC2 instance, so we're going to use this command to reboot it. Just to illustrate how this works, I'm going to close down that tab, return to the EC2 console and give this a few moments to restart.

      That should have rebooted by now, so we're going to select it, right click, go to connect, and again use EC2 Instance Connect. Assuming everything's working, now when we connect to the instance we'll see an animal themed login banner. So this is just a nice way that we can ensure that anyone logging into this instance understands that (a) it uses the Amazon Linux 2 AMI and (b) it belongs to animals for life.

      So we've created this instance using the Amazon Linux 2 AMI, we've performed the WordPress installation and initial configuration, we've customized the banner, and now we're going to use this as our template instance to create our AMI that can then be used to launch other instances.

      Okay, so this is the end of part one of this lesson. It was getting a little bit on the long side, so I wanted to add a break; it's an opportunity just to take a rest or grab a coffee. Part 2 will be continuing immediately from the end of part one, so go ahead, complete the video, and when you're ready join me in part two.
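      As a reference, here is a minimal sketch of what the message-of-the-day script described above might contain; the exact contents are an assumption, and the real lines are in the lesson commands document attached to the lesson:

      ```bash
      #!/bin/sh
      # Assumed sketch of /etc/update-motd.d/40-cow - prints an animal themed banner at login
      cowsay "Amazon Linux 2 AMI - Animals for Life"
      ```

      The follow-up commands described in the narration (fixing permissions, regenerating the message of the day, and rebooting) would look roughly like this, again as an assumption:

      ```bash
      sudo chmod 755 /etc/update-motd.d/40-cow   # ensure the script is executable
      sudo update-motd                           # force the message of the day to be regenerated
      sudo reboot                                # restart the instance
      ```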

    1. Welcome back.

      This is part two of this lesson.

      We're going to continue immediately from the end of part one.

      So let's get started.

      So this is the folder containing the WordPress installation files.

      Now there's one particular file that's really important, and that's the configuration file.

      So there's a file called WP-config-sample, and this is actually the file that contains a template of the configuration items for WordPress.

      So what we need to do is to take this template and change the file name to be the proper file name, so wp-config.php.

      So we're going to create a copy of this file with the correct name.

      And to do that, we run this command.

      So we're copying the template or the sample file to its real file name, so wp-config.php.

      And this is the name that WordPress expects when it initially loads its configuration information.

      So run that command, and that now means that we have a live config file.
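
      If you're following along, the copy is a single cp command run against the web root; assuming the default WordPress file names it looks something like this:

      sudo cp /var/www/html/wp-config-sample.php /var/www/html/wp-config.php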

      Now this command isn't in the instructions, but if I just take a moment to open up this file, you don't need to do this.

      I'm just demonstrating what's in this file for your benefit.

      But if I run sudo nano and then wp-config.php, this is how the file looks.

      So this has got all the configuration information in.

      So it stores the database name, the database user, the database host, and lots of other information.

      Now notice how it has some placeholders.

      So this is where we would need to replace the placeholders with the actual configuration information.

      So the database name itself, the host name, the database username, the database password, all that information would need to be replaced.

      Now we're not going to type this in manually, so I'm going to control X to exit out of this, and then clear the screen again to make it easy to see.

      We're going to use the Linux utility sed, or S-E-D.

      And this is a utility which can perform a search and replace within a text file.

      It's actually much more complex and capable than that.

      It can perform many different manipulation operations.

      But for this demonstration, we're going to use it as a simple search and replace.

      Now we're going to do this a number of times.

      First, we're going to run this command, which is going to replace this placeholder.

      Remember, this is one of the placeholders inside the configuration file that I've just demonstrated, wp-config.

      We're going to replace the placeholder here with the contents of the variable name, dbname, that we set at the start of this demo.

      So this is going to replace the placeholder with our actual database name.

      So I'm going to enter that so you can do the same.

      We're going to run the sed command again, but this time it's going to replace the username placeholder with the dbuser variable that we set at the start of this demo.

      So use that command as well.

      And then lastly, it will do the same for the database password.

      So type or copy and paste this command and press enter.

      And that now means that this wp-config has the actual configuration information inside.
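
      As a sketch, assuming the variable names from the lesson commands document (DBName, DBUser and DBPassword) and the standard placeholders in the sample config, the three sed commands look something like this:

      sudo sed -i "s/'database_name_here'/'$DBName'/g" /var/www/html/wp-config.php
      sudo sed -i "s/'username_here'/'$DBUser'/g" /var/www/html/wp-config.php
      sudo sed -i "s/'password_here'/'$DBPassword'/g" /var/www/html/wp-config.php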

      And just to demonstrate that, you don't need to do this part.

      I'll just do it to demonstrate.

      If I edit this file again, you'll see that all of these placeholders have actually been replaced with actual values.

      So I'm going to control X out of that and then clear the screen.

      And that concludes the configuration for the WordPress application.

      So now it's ready.

      Now it knows how to communicate with the database.

      What we need to do to finish off the configuration though is just to make sure that the web server has access to all of the files within this folder.

      And to do that, we use this command.

      So we're using the chown command to set the ownership of all of the files in this folder and any subfolders to be the Apache user and the Apache group.

      And the Apache user and Apache group belong to the web server.

      So this just makes sure that the web server is able to access and control all of the files in the web root folder.

      So run that command and press enter.
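
      For reference, this is a recursive chown of the web root to the apache user and group:

      sudo chown apache:apache /var/www/html -R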

      And that concludes the installation part of the WordPress application.

      There's one final thing that we need to do and that's to create the database that WordPress will use.

      So I'm going to clear the screen to make it easy to see.

      Now what we're going to do in order to configure the database is we're going to make a database setup script.

      We're going to put this script inside the forward slash TMP folder and we're going to call it DB.setup.

      So what we need to do is enter the commands into this file that will create the database.

      After the database is created, it needs to create a database user and then it needs to grant that user permissions on that database.

      Now again, instead of manually entering this, we're going to use those variable names that were created at the start of the demo.

      So we're going to run a number of commands.

      These are all in the lesson commands document.

      The first one is this.

      So this echoes this text and because it has a variable name in, this variable name will be replaced by the actual contents of the variable.

      Then it's going to take this text with the replacement of the contents of this variable and it's going to enter that into this file.

      So forward slash TMP, forward slash DB setup.

      So run that and that command is going to create the WordPress database.

      Then we're going to use this command and this is the same so it echoes this text but it replaces these variable names with the contents of the variables.

      This is going to create our WordPress database user.

      It's going to set its password and then it's going to append this text to the DB setup file that we're creating.

      Now all of these are actually database commands that we're going to execute within the MariaDB database.

      So enter that to add that line to DB.setup.

      Then we have another line which uses the same architecture as the ones above it.

      It echoes the text.

      It replaces these variable names with the contents and then outputs that to this DB.setup file and this command grants our database user permissions to our WordPress database.

      And then the last command is this one which just flushes the privileges and again we're going to add this to our DB.setup script.
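
      Put together, and assuming the variable and file names from the lesson commands document, the script-building commands look roughly like this:

      echo "CREATE DATABASE $DBName;" >> /tmp/db.setup
      echo "CREATE USER '$DBUser'@'localhost' IDENTIFIED BY '$DBPassword';" >> /tmp/db.setup
      echo "GRANT ALL ON $DBName.* TO '$DBUser'@'localhost';" >> /tmp/db.setup
      echo "FLUSH PRIVILEGES;" >> /tmp/db.setup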

      So now I'm just going to cat the contents of this file so you can just see exactly what it looks like.

      So cat and then space forward slash TMP, forward slash DB.setup.

      So as you'll see it's replaced all of these variable names with the actual contents.

      So this is what the contents of this script actually looks like.

      So these are commands which will be run by the MariaDB database platform.

      To run those commands we use this.

      So this is the MySQL command line interface.

      So we're using MySQL to connect to the MariaDB database server.

      We're using the username of root.

      We're passing in the password and then using the contents of the DB root password variable.

      And then once we've authenticated to the database, we're passing in the contents of our DB.setup script.

      And so this means that all of the lines of our DB.setup script will be run by the MariaDB database and this will create the WordPress database, the WordPress user and configure all of the required permissions.

      So go ahead and press enter.

      That command is run by the MariaDB platform and that means that our WordPress database has been successfully configured.

      And then lastly just to keep things secure because we don't want to leave files laying around on the file system with authentication information inside.

      We're just going to run this command to delete this DB.setup file.
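
      As a sketch, running the script and tidying up looks something like this, again assuming the variable and file names from the lesson commands document:

      mysql -u root --password=$DBRootPassword < /tmp/db.setup   # run the SQL commands as the root database user
      sudo rm /tmp/db.setup                                      # remove the file containing credentials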

      Okay, so that concludes the setup process for WordPress.

      It's been a fairly long intensive process but that now means that we have an installation of WordPress on this EC2 instance, a database which has been installed and configured.

      So now what we can do is to go back to the AWS console, click on instances.

      We need to select the A4L-PublicEC2 and then we need to locate its IP address.

      Now make sure that you don't use this open address link because this will attempt to open the IP address using HTTPS and we don't have that configured on this WordPress instance.

      Instead, just copy the IP address into your clipboard and then open that in a new tab.

      If everything's successful, you should see the WordPress installation dialog and just to verify this is working successfully, let's follow this process through.

      So pick English, United States for the language.

      For the blog title, just put all the cats and then admin as the username.

      You can accept the default strong password.

      Just copy that into your clipboard so we can use it to log in in a second and then just go ahead and enter your email.

      It doesn't have to be a correct one.

      So I normally use test@test.com and then go ahead and click on install WordPress.

      You should see a success dialog.

      Go ahead and click on login.

      Username will be admin, the password that you just copied into your clipboard and then click on login.

      And there you go.

      We've got a working WordPress installation.

      We're not going to configure it in any detail but if you want to just check out that it works properly, go ahead and click on this all the cats at the top and then visit site and you'll be able to see a generic WordPress blog.

      And that means you've completed the installation of the WordPress application and the database using a monolithic architecture on a single EC2 instance.

      So this has been a slow process.

      It's been manual and it's a process which is wide open for mistakes to be made at every point throughout that process.

      Can you imagine doing this twice?

      What about 10 times?

      What about a hundred times?

      It gets pretty annoying pretty quickly.

      In reality, this is never done manually.

      We use automation or infrastructure as code systems such as cloud formation.

      And as we move through the course, you're going to get experience of using all of these different methods.

      Now that we're close to finishing up the basics of VPC and EC2 within the course, things will start to get much more efficient quickly because I'm going to start showing you how to use many of the automation and infrastructure as code services within AWS.

      And these are really awesome to use.

      And you'll see just how much power is granted to an architect, a developer, or an engineer by using these services.

      For now though, that is the end of this demo lesson.

      Now what we're going to do is to clear up our account.

      So we need to go ahead and clear all of this infrastructure that we've used throughout this demo lesson.

      To do that, just move back to the AWS console.

      If you still have the cloud formation tab open and move back to that tab, otherwise click on services and then click on cloud formation.

      If you don't see it anywhere, you can use this box to search for it. Select the WordPress stack, select delete, and then confirm that deletion.

      And that will delete the stack, clear up all of the infrastructure that we've used throughout this demo lesson and the account will now be in the same state as it was at the start of this lesson.

      So from this point onward in the course, we're going to start using automation.

      Now there is a lesson coming up in a little while in this section of the course, where you're going to create an Amazon machine image which is going to contain a pre-baked copy of the WordPress application.

      So as part of that lesson, you are going to be required to perform one more manual installation of WordPress, but that's going to be part of automating the installation.

      So you'll start to get some experience of how to actually perform automated installations and how to design architectures which have WordPress as a component.

      At this point though, that's everything I wanted to cover.

      So go ahead, complete this video, and when you're ready, I look forward to you joining me in the next.

    1. Welcome back and in this lesson we're going to be doing something which I really hate doing and that's using WordPress in a course as an example.

      Joking aside though WordPress is used in a lot of courses as a very simple example of an application stack.

      The problem is that most courses don't take this any further.

      But in this course I want to use it as one example of how an application stack can be evolved to take advantage of AWS products and services.

      What we're going to be using WordPress for in this demo is to give you experience of how a manual installation of a typical application stack works in EC2.

      We're going to be doing this so you can get the experience of how not to do things.

      My personal belief is that to fully understand the advantages that automation features within AWS provide, you need to understand what a manual installation is like and what problems you can experience doing that manual installation.

      As we move through the course we can compare this to various different automated ways of installing software within AWS.

      So you're going to get the experience of bad practices, good practices and the experience to be able to compare and contrast between the two.

      By the end of this demonstration you're going to have a working WordPress site but it won't have any high availability because it's running on a single EC2 instance.

      It's going to be architecturally monolithic with everything running on the one single instance.

      In this case that means both the application and the database.

      The design is fairly straightforward.

      It's just the Animals for Life VPC.

      We're going to be deploying the WordPress application into a single subnet, the WebA public subnet.

      So this subnet is going to have a single EC2 instance deployed into it and then you're going to be doing a manual install onto this instance and the end result is a working WordPress installation.

      At this point it's time to get started and implement this architecture.

      So let's go ahead and switch over to our AWS console.

      To get started with this demo lesson you're going to need to do a few preparation steps.

      First just make sure that you're logged in to the general AWS account, so the management account of the organization and as always make sure you have the Northern Virginia region selected.

      Now attached to this lesson is a one-click deployment for the base infrastructure that we're going to use.

      So go ahead and open the one-click deployment link that's attached to this lesson.

      That link is going to take you to the Quick Create Stack screen.

      Everything should be pre-populated.

      The stack name should be WordPress.

      All you need to do is scroll down towards the bottom, check this capabilities box and then click on Create Stack.

      And this stack is going to need to be in a Create Complete state before we move on with the demo lesson.

      So go ahead and pause this video, wait for the stack to change to Create Complete and then we're good to continue.

      Also attached to this lesson is a Lessons Command document which lists all of the commands that you'll be using within the EC2 instance throughout this demo lesson.

      So go ahead and open that as well.

      So that should look something like this and these are all of the commands that we're going to be using.

      So these are the commands that perform a manual WordPress installation.

      Now that that stack's completed and we've got the Lesson Commands document open, the next step is to move across to the EC2 console because we're going to actually install WordPress manually.

      So click on the Services drop-down and then locate EC2 in this All Services part of the screen.

      If you've recently visited it, it should be in the Recently Visited section under Favorites or you can go ahead and type EC2 in the search box and then open that in a new tab.

      And then click on Instances running and you should see one single instance which is called A4L-PublicEC2.

      Go ahead and right-click on this instance.

      This is the instance we'll be installing WordPress within.

      So right-click, select Connect.

      We're going to be using our browser to connect to this instance so we'll be using Instance Connect just verify that the username is EC2-user and then go ahead and connect to this instance.

      Now again, I fully understand that a manual installation of WordPress might seem like a waste of time but I genuinely believe that you need to understand all the problems that come from manually installing software in order to understand the benefits which automation provides.

      It's not just about saving time and effort.

      It's also about error reduction and the ability to keep things consistent.

      Now I always like to start my installations or my scripts by setting variables which will store the configuration values that everything from that point forward will use.

      So we're going to create four variables.

      One for the database name, one for the database user, one for the database password and then one for the root or admin password of the database server.

      So let's start off by using the pre-populated values from the lesson commands document.

      So that's all of those variables set and we can confirm that those are working by typing echo and then a space and then a dollar and then the name of one of those variables.

      So for example, dbname and press Enter and that will show us the value stored within that variable.

      So now we can use these at later points in the installation.
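
      For illustration, the variables take roughly this form; the actual names and values are pre-populated in the lesson commands document, so treat these as placeholders rather than real credentials:

      DBName='a4lwordpressdb'          # database name (illustrative value)
      DBUser='a4lwordpressuser'        # database user (illustrative value)
      DBPassword='CHANGEME'            # database user's password (illustrative value)
      DBRootPassword='CHANGEME'        # database root/admin password (illustrative value)
      echo $DBName                     # confirm a variable is set correctly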

      So at this point I'm going to clear the screen to keep it easy to see, and stage two of this installation process is to install some system software.

      So there are a few things that we need to install in order to allow a WordPress installation.

      We'll install those using the DNF package manager.

      We need to give it admin privileges, which is why we use sudo, and the packages that we're going to install are the database server, which is the MariaDB server, the Apache web server, which is httpd, and a utility called Wget, which we're going to use to download further components of the installation.

      So go ahead and type or copy and paste that command and press Enter and that installation process will take a few moments and it will go through installing that software and any of the prerequisites.
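
      The install command looks something like the line below; note that the exact MariaDB package name depends on the AMI (on Amazon Linux 2023 it's typically mariadb105-server), so check the lesson commands document for the precise name.

      sudo dnf install wget httpd mariadb105-server -y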

      They're done so I'll clear the screen to keep this easy to read.

      Now that all those packages are installed we need to start both the web server and the database server and ensure that both of them are started if ever the machine is restarted.

      So to do that we need to enable and start those services.

      So enabling and starting means that both of the services are started right now, and they'll also start if the machine reboots.

      So first we'll use this command.

      So we're using admin privileges again, systemctl which allows us to start and stop system processes and then we use enable and then HTTPD which is the web server.

      So type and press enter and that ensures that the web server is enabled.

      We need to run the same command again but this time specifying MariaDB to ensure that the database server is enabled.

      So type or copy and paste and press enter.

      So that means both of those processes will start if ever the instance is rebooted and now we need to manually start both of those so they're running and we can interact with them.

      So we need to use the same structure of command but instead of enable we need to start both of these processes.

      So first the web server and then the database server.
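
      In summary, the four systemctl commands are:

      sudo systemctl enable httpd      # start the web server on every boot
      sudo systemctl enable mariadb    # start the database server on every boot
      sudo systemctl start httpd       # start the web server now
      sudo systemctl start mariadb     # start the database server now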

      So that means the EC2 instance now has a running web server and database server, both of which are required for WordPress.

      So I'll clear the screen to keep this easy to read.

      Next we're going to move to stage 4 and stage 4 is that we need to set the root password of the database server.

      So this is the username and password that will be used to perform all of the initial configuration of the database server.

      Now we're going to use this command and you'll note that for password we're actually specifying one of the variables that we configured at the start of this demo.

      So we're using the DB root password variable that we configured right at the start.

      So go ahead and copy and paste or type that in and press enter and that sets the password for the root user of this database platform.
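
      One common way to do this, and roughly what the lesson command does, is with mysqladmin, using the variable set at the start:

      sudo mysqladmin -u root password $DBRootPassword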

      The next step which is step 5 is to install the WordPress application files.

      Now to do that we need to install these files inside what's known as the web root.

      So whenever you browse to a web server, either using an IP address or a DNS name, if you don't specify a path (so if you just use the server name, for example netflix.com), then it loads those initial files from a folder known as the web root.

      Now on this particular server the web root is stored in /var/www/html, so we need to download WordPress into that folder.

      Now we're going to use this command Wget and that's one of the packages that we installed at the start of this lesson.

      So we're giving it admin privileges and we're using Wget to download latest.tar.gz from wordpress.org and then we're putting it inside this web root.

      So /var/www/html.

      So go ahead and copy and paste or type that in and press enter.

      That'll take a few moments depending on the speed of the WordPress servers and that will store latest.tar.gz in that web root folder.

      Next we need to move into that folder, so cd /var/www/html, and press enter.

      We need to use a Linux utility called tar to extract that file.

      So sudo and then tar and then the command line options -zxvf and then the name of the file so latest.tar.gz So copy and paste or type that in and press enter and that will extract the WordPress download into this folder.

      So now if we do an ls -la you'll see that we have a WordPress folder and inside that folder are all of the application files.

      Now we actually don't want them inside a WordPress folder.

      We want them directly inside the web root.

      So the next thing we're going to do is this command and this is going to copy all of the files from inside this WordPress folder to . and . represents the current folder.

      So it's going to copy everything inside WordPress into the current working directory which is the web root directory.

      So enter that and that copies all of those files.

      And now if we do another listing you'll see that we have all of the WordPress application files inside the web root.

      And then lastly for the installation part we need to tidy up the mess that we've made.

      So we need to delete this WordPress folder and the download file that we just created.

      So to do that we'll run an rm -r and then WordPress to delete that folder.

      And then we'll delete the download with sudo rm and then a space and then the name of the file.

      So latest.tar.gz.
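
      Pulling that sequence together, the download, extract and tidy-up steps look something like this:

      sudo wget http://wordpress.org/latest.tar.gz -P /var/www/html   # download WordPress into the web root
      cd /var/www/html                                                # move into the web root
      sudo tar -zxvf latest.tar.gz                                    # extract the archive, creating a wordpress folder
      sudo cp -rvf wordpress/* .                                      # copy the files into the web root itself
      sudo rm -r wordpress                                            # remove the now-redundant wordpress folder
      sudo rm latest.tar.gz                                           # delete the downloaded archive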

      And that means that we have a nice clean folder.

      So I'll clear the screen to make it easy to see.

      And then I'll just do another listing.

      Okay so this is the end of part one of this lesson.

      It was getting a little bit on the long side and so I wanted to add a break.

      It's an opportunity just to take a rest or grab a coffee.

      Part two will be continuing immediately from the end of part one.

      So go ahead complete the video and when you're ready join me in part two.

    1. Welcome back and in this video we're going to interact with instant store volumes.

      Now this part of the demo does come at a cost.

      This isn't inside the free tier because we're going to be launching some instances which are fairly large and are not included in the free tier.

      The demo has a cost of approximately 13 cents per hour and so you should only do this part of the demo if you're willing to accept that cost.

      If you don't want to accept those costs then you can go ahead and watch me perform these within my test environment.

      So to do this we're going to go ahead and click on instances and we're going to launch an instance manually.

      So I'm going to click on launch instances.

      We're going to name the instance, Instance Store Test so put that in the name box.

      Then scroll down, pick Amazon Linux, make sure Amazon Linux 2023 is selected and the architecture needs to be 64 bit x86.

      Scroll down and then in the instance type box click and we need to find a different type of instance.

      This is going to be one that supports instance store volumes.

      So scroll down and we're looking for m5dn.large.

      This is a type of instance which includes one instance store volume.

      So select that then scroll down a little bit more and under key pair click in the box and select proceed without a key pair not recommended.

      Scroll down again and under network settings click on edit.

      Click in the VPC drop down and select a4l-vpc1.

      Under subnet make sure sn-web-a is selected.

      Make sure enabled is selected for both of the auto assign public IP drop downs.

      Then we're going to select an existing security group; click the drop down and select the EBS demo instance security group.

      It will have some random characters after it, but that's okay.

      Then scroll down and under storage we're going to leave all of the defaults.

      What you are able to do though is to click on show details next to instance store volumes.

      This will show you the instance store volumes which are included with this instance.

      You can see that we have one instance store volume it's 75 GB in size and it has a slightly different device name.

      So dev nvme0n1.

      Now all of that looks good so we're just going to go ahead and click on launch instance.

      Then click on view all instances and initially it will be in a pending state and eventually it will move into a running state.

      Then we should probably wait for the status check column to change from initializing to 2 out of 2.

      Go ahead and pause the video and wait for this status check to change to be fully green.

      It should show 2 out of 2 status checks.

      That's now in a running state with 2 out of 2 checks so we can go ahead and connect to this instance.

      Before we do though just go ahead and select the instance and just note the instances public IP version 4 address.

      Now this address is really useful because it will change if the EC2 instance moves between EC2 hosts.

      So it's a really easy way that we can verify whether this instance has moved between EC2 hosts.

      So just go ahead and note down the IP address of the instance that you have if you're performing this in your own environment.

      We're going to go ahead and connect to this instance though so right click, select connect, we'll be choosing instance connect, go ahead and connect to the instance.

      Now many of these commands that we'll be using should by now be familiar.

      Just refer back to the lessons command document if you're unsure because we'll be using all of the same commands.

      First we need to list all of the block devices which are attached to this instance and we can do that with LSBLK.

      This time it looks a little bit different because we're using instance store rather than EBS additional volumes.

      So in this particular case I want you to look for the 8G volume so this is the root volume.

      This represents the boot or root volume of the instance.

      Remember that this particular instance type came with a 75GB instance store volume so we can easily identify it's this one.

      Now to check that we can verify whether there's a file system on this instance store volume.

      If we run this command, so the same command we've used previously, sudo file -s and then the ID of this volume, so /dev/nvme1n1, you'll see it reports data.

      And if you recall from the previous parts of this demo series this indicates that there isn't a file system on this volume.

      We're going to create one and to do that we use this command again it's the same command that we've used previously just with the new volume id.

      So press enter to create a file system on this raw block device this instance store volume and then we can run this command again to verify that it now has a file system.

      To mount it we can follow the same process that we did in the earlier stages of this demo series.

      We'll need to create a directory for this volume to be mounted into; this time we'll call it forward slash instance store.

      So create that folder and then we're going to mount this device into that folder, so sudo mount, then the device ID and then the mount point or the folder that we've previously created.

      So press enter and that means that this block device this instance store volume is now mounted into this folder.

      And if we run a df space -k and press enter you can see that it's now mounted.

      Now we're going to move into that folder by typing cd space forward slash instance store and to keep things efficient we're going to create a file called instance store dot txt.

      And rather than using an editor we'll just use sudo touch and then the name of the file, and this will create an empty file.

      If we do an LS space -la and press enter you can see that that file exists.
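
      For reference, the whole sequence against the instance store volume looks something like this; the device ID and mount point name may differ in your environment, so treat them as examples.

      lsblk                                     # list block devices and identify the 75 GB instance store volume
      sudo file -s /dev/nvme1n1                 # "data" means there's no file system yet
      sudo mkfs -t xfs /dev/nvme1n1             # create an XFS file system on the volume
      sudo mkdir /instancestore                 # create a mount point (example name)
      sudo mount /dev/nvme1n1 /instancestore    # mount the volume into that folder
      cd /instancestore
      sudo touch instancestore.txt              # create an empty test file
      ls -la                                    # confirm the file exists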

      So now that we have this file stored on a file system which is running on this instance store volume let's go ahead and reboot this instance.

      Now we need to be careful we're not going to stop and start the instance we're going to restart the instance.

      Restarting is different than stop and start.

      So to do that we're going to close this tab move back to the ec2 console so click on instances right click on instance store test and select reboot instance and then confirm that.

      Note what this IP address is before you initiate the reboot operation and then just give this a few minutes to reboot.

      Then right click and select connect.

      Using instance connect go ahead and connect back to the instance.

      And again if it appears to hang at this point then you can just wait for a few moments and then connect again.

      But in this case I've left it long enough and I'm connected back into the instance.

      Now once I'm back in the instance if I run a df space -k and press enter note how that file system is not mounted after the reboot.

      Now that's fine because we didn't configure the Linux operating system to mount this file system when the instance is restarted.

      But what we can do is do an LS BLK again to list the block device.

      We can see that it's still there and we can manually mount it back in the same folder as it was before the reboot.

      To do that we run this command.

      So it's mounting the same volume ID the same device ID into the same folder.

      So go ahead and run that command and press enter.

      Then if we use cd space forward slash and then instance store press enter and then do an LS space -la we can see that this file is still there.

      Now the file is still there because instance store volumes do persist through the restart of an EC2 instance.

      Restarting an EC2 instance does not move the instance from one EC2 host to another.

      And because instance store volumes are directly attached to an EC2 host this means that the volume is still there after the machine has restarted.

      Now we're going to do something different though.

      Close this tab down.

      Move back to instances.

      Again pay special attention to this IP address.

      Now we're going to right click and stop the instance.

      So go ahead and do that and confirm it if you're doing this in your own environment.

      Watch this public IP v4 address really carefully.

      We'll need to wait for the instance to move into a stopped state, which it has, and if we select the instance, note how the public IP version 4 address has been unallocated.

      So this instance is now not running on an EC2 host.

      Let's right click.

      Go to start instance and start it up again.

      We'll need to give that a few moments again.

      It'll move into a running state, but notice how the public IP version 4 address has changed.

      This is a good indication that the instance has moved from one EC2 host to another.

      So let's give this instance a few moments to start up.

      And once it has right click, select connect and then go ahead and connect to the instance using instance connect.

      Once connected go ahead and run an LS BLK and press enter and you'll see it appears to have the same instance store volume attached to this instance.

      It's using the same ID and it's the same size.

      But let's go ahead and verify the contents of this device using this command.

      So sudo file -s and then the device ID of the instance store volume.

      Press enter, and note how it shows data.

      So even though we created a file system in the previous step, after we've stopped and started the instance it appears this instance store volume has no data.

      Now the reason for that is when you restart an EC2 instance, it restarts on the same EC2 host.

      But when you stop and start an EC2 instance, which is a distinctly different operation, the EC2 instance moves from one EC2 host to another.

      And that means that it has access to completely different instance store volumes than it did on that previous host.

      It means that all of the data, so the file system and the test file that we created on the instance store volume, before we stopped and started this instance, all of that is lost.

      When you stop and start an EC2 instance, or when for any other reason the instance moves from one host to another, all of the data is lost.

      So instance store volumes are ephemeral.

      They're not persistent and you can't rely on them to keep your data safe.

      And it's really important that you understand that distinction.

      If you're doing the developer or sysop streams, it's also important that you understand the difference between an instance restart, which keeps the same EC2 host, and a stop and start, which moves an instance from one host to another.

      The former means you're likely to keep your data, but the latter means you're guaranteed to lose your data when using instance store volumes.

      EBS on the other hand, as we've seen, is persistent and any data persists through the lifecycle of an EC2 instance.

      Now with that being said, though, that's everything that I wanted to demonstrate within this series of demo lessons.

      So let's go ahead and tidy up the infrastructure.

      Close down this tab, click on instances.

      If you follow this last part of the demo in your own environment, go ahead and right click on the instance store test instance and terminate that instance.

      That will delete it along with any associated resources.

      We'll need to wait for this instance to move into the terminated state.

      So give that a few moments.

      Once that's terminated, go ahead and click on services and then move back to the cloud formation console.

      You'll see the stack that you created using the one click deploy at the start of this lesson.

      Go ahead and select that stack, click on delete and then delete stack.

      And that's going to put the account back in the same state as it was at the start of this lesson.

      So it will remove all of the resources that have been created.

      And at that point, that's the end of this demo series.

      So what did you learn?

      You learned that EBS volumes are created within one specific availability zone.

      EBS volumes can be mounted to instances in that availability zone only and can be moved between instances while retaining their data.

      You can create a snapshot from an EBS volume which is stored in S3 and that data is replicated within the region.

      And then you can use snapshots to create volumes in different availability zones.

      I told you how snapshots can be copied to other AWS regions either as part of data migration or disaster recovery and you learned that EBS is persistent.

      You've also seen in this part of the demo series how instance store volumes can be used.

      They are included with many instance types, but if the instance moves between EC2 hosts - so if an instance is stopped and then started, or if an EC2 host has hardware problems - then that EC2 instance will be moved between hosts and any data on any instance store volumes will be lost.

      So that's everything that you needed to know in this demo lesson and you're going to learn much more about EC2 and EBS in other lessons throughout the course.

      At this point though, thanks for watching and doing this demo.

      I hope it was useful but go ahead complete this video and when you're ready I look forward to you joining me in the next.

    1. Welcome back and we're going to use this demo lesson to get some experience of working with EBS and Instance Store volumes.

      Now before we get started, this series of demo videos will be split into two main components.

      The first component will be based around EBS and EBS snapshots and all of this will come under the free tier.

      The second component will be based on Instance Store volumes and will be using larger instances which are not included within the free tier.

      So I'm going to make you aware of when we move on to a part which could incur some costs and you can either do that within your own environment or watch me do it in the video.

      If you do decide to do it in your own environment, just be aware that there will be a 13 cents per hour cost for the second component of this demo series and I'll make it very clear when we move into that component.

      The second component is entirely optional but I just wanted to warn you of the potential cost in advance.

      Now to get started with this demo, you're going to need to deploy some infrastructure.

      To do that, make sure that you're logged in to the general account, so the management account of the organization and you've got the Northern Virginia region selected.

      Now attached to this demo is a one click deployment link to deploy the infrastructure.

      So go ahead and click on that link.

      That's going to open this quick create stack screen and all you need to do is scroll down to the bottom, check this capabilities box and click on create stack.

      Now you're going to need this to be in a create complete state before you continue with this demo.

      So go ahead and pause the video, wait for that stack to move into the create complete status and then you can continue.

      Okay, now that's finished and the stack is in a create complete state.

      You're also going to be running some commands within EC2 instances as part of this demo.

      Also attached to this lesson is a lesson commands document which contains all of those commands and you can use this to copy and paste which will avoid errors.

      So go ahead and open that link in a separate browser window or separate browser tab.

      It should look something like this and we're going to be using this throughout the lesson.

      Now this cloud formation template has created a number of resources, but the three that we're concerned about are the three EC2 instances.

      So instance one, instance two and instance three.

      So the next thing to do is to move across to the EC2 console.

      So click on the services drop down and then either locate EC2 under all services, find it in recently visited services or you can use the search box at the top type EC2 and then open that in a new tab.

      Now the EC2 console is going through a number of changes so don't be alarmed if it looks slightly different or if you see any banners welcoming you to this new version.

      Now if you click on instances running, you'll see a list of the three instances that we're going to be using within this demo lesson.

      We have instance one - az a.

      We have instance two - az a and then instance one - az b.

      So these are three instances, two of which are in availability zone A and one which is in availability zone B.

      Next I want you to scroll down and locate volumes under elastic block store and just click on volumes.

      And what you'll see is three EBS volumes, each of which is eight GIB in size.

      Now these are all currently in use.

      You can see that in the state column and that's because all of these volumes are in use as the boot volumes for those three EC2 instances.

      So on each of these volumes is the operating system running on those EC2 instances.

      Now to give you some experience of working with EBS volumes, we're going to go ahead and create a volume.

      So click on the create volume button.

      The first thing you'll need to do when creating a volume is pick the type and there are a number of different types available.

      We've got GP2 and GP3 which are the general purpose storage types.

      We're going to use GP3 for this demo lesson.

      You could also select one of the provisioned IOPS volumes.

      So this is currently IO1 or IO2.

      And with both of these volume types, you're able to define IOPS separately from the size of the volume.

      So these are volume types that you can use for demanding storage scenarios where you need high-end performance or when you need especially high performance for smaller volume sizes.

      Now IO1 was the first type of provisioned IOPS SSD introduced by AWS, and more recently they've introduced IO2, which enhances it and provides even higher levels of performance.

      In addition to that we do have the non-SSD volume types.

      So SC1 which is cold HDD, ST1 which is throughput optimized HDD and then of course the original magnetic type which is now legacy and AWS don't recommend this for any production usage.

      For this demo lesson we're going to go ahead and select GP3.

      So select that.

      Next you're able to pick a size in GIB for the volume.

      We're going to select a volume size of 10 GIB.

      Now EBS volumes are created within a specific availability zone so you have to select the availability zone when you're creating the volume.

      At this point I want you to go ahead and select US-EAST-1A.

      When creating a volume you're also able to specify a snapshot as the basis for that volume.

      So if you want to restore a snapshot into this volume you can select that here.

      At this stage in the demo we're going to be creating a blank EBS volume so we're not going to select anything in this box.

      We're going to be talking about encryption later in this section of the course.

      You are able to specify encryption settings for the volume when you create it but at this point we're not going to encrypt this volume.

      We do want to add a tag so that we can easily identify the volume from all of the others so click on add tag.

      For the key we're going to use name.

      For the value we're going to put EBS test volume.

      So once you've entered both of those go ahead and click on create volume and that will begin the process of creating the volume.

      Just close down any dialogues and then just pay attention to the different states that this volume goes through.

      It begins in a creating state.

      This is where the storage is being provisioned and then made available by the EBS product.

      If we click on refresh you'll see that it changes from creating to available and once it's in an available state this means that we can attach it to EC2 instances.

      And that's what we're going to do so we're going to right click and select attach volume.

      Now you're able to attach this volume to EC2 instances but crucially only those in the same availability zone.

      EBS is an availability zone scoped service and so you can only attach EBS volumes to EC2 instances within that same availability zone.

      So if we select the instance box you'll only see instances in that same availability zone.

      Now at this point go ahead and select instance 1 in availability zone A.

      Once you've selected it you'll see that the device field is populated and this is the device ID that the instance will see for this volume.

      So this is how the volume is going to be exposed to the EC2 instance.

      So if we want to interact with this instance inside the operating system this is the device that we'll use.

      Now different operating systems might see this in slightly different ways.

      So as this warning suggests certain Linux kernels might rename SDF to XVDF.

      So we've got to be aware that when you do attach a volume to an EC2 instance you need to get used to how that's seen inside the operating system.

      How we can identify it and how we can configure it within the operating system for use.

      And I'm going to demonstrate that in the next part of this demo lesson.

      So at this point just go ahead and click on attach and this will attach this volume to the EC2 instance.

      Once that's attached to the instance and you see the state change to in use then just scroll up on the left hand side and select instances.

      We're going to go ahead and connect to instance 1 in availability zone A.

      This is the instance that we just attached that EBS volume to so we want to interact with this instance and see how we can see the EBS volume.

      So right click on this instance and select connect, and then you could either connect with an SSH client or use Instance Connect. To keep things simple, we're going to connect from our browser, so select the EC2 Instance Connect option, make sure the username is set to ec2-user and then click on connect.

      So now we connected to this EC2 instance and it's at this point that we'll start needing the commands that are listed inside the lesson commands document and again this is attached to this lesson.

      So first we need to list all the block devices which are connected to this instance and we're going to use the LSBLK command.

      Now if you're not comfortable with Linux don't worry just take this nice and slowly and understand at a high level all the commands that we're going to run.

      So the first one is LSBLK and this is list block devices.

      So if we run this we'll be able to see a list of all of the block devices connected to this EC2 instance.

      You'll see the root device; this is the device that's used to boot the instance and it contains the instance operating system. You'll see that it's 8 gig in size, and then this is the EBS volume that we just attached to this instance.

      You'll see that device ID so XVDF and you'll see that it's 10 gig in size.

      Now what we need to do next is check whether there is a file system on this block device.

      So this block device we've created it with EBS and then we've attached it to this instance.

      Now we know that it's blank but it's always safe to check if there's any file system on a block device.

      So to do that we run this command.

      So we're going to check are there any file systems on this block device.

      So press enter and if you see just data that indicates that there isn't any file system on this device and so we need to create one.

      You can only mount file systems under Linux and so we need to create a file system on this raw block device this EBS volume.

      So to do that we run this command.

      So sudo again is just giving us admin permissions on this instance.

      MKFS is going to make a file system.

      We specify the file system type with hyphen t and then XFS which is a type of file system and then we're telling it to create this file system on this raw block device which is the EBS volume that we just attached.

      So press enter and that will create the file system on this EBS volume.

      We can confirm that by rerunning this previous command and this time instead of data it will tell us that there is now an XFS file system on this block device.

      So now we can see the difference.

      Initially it just told us that there was data, so raw data on this volume and now it's indicating that there is a file system, the file system that we just created.

      Now the way that Linux works is we mount a file system to a mount point which is a directory.

      So we're going to create a directory using this command.

      MKDIR makes a directory and we're going to call the directory forward slash EBS test.

      So this creates it at the top level of the file system.

      This signifies root which is the top level of the file system tree and we're going to make a folder inside here called EBS test.

      So go ahead and enter that command and press enter and that creates that folder and then what we can do is to mount the file system that we just created on this EBS volume into that folder.

      And to do that we use this command, mount.

      So mount takes a device ID, so forward slash dev forward slash xvdf.

      So this is the raw block device containing the file system we just created and it's going to mount it into this folder.

      So type that command and press enter and now we have our EBS volume with our file system mounted into this folder.

      And we can verify that by running a df space hyphen k.

      And this will show us all of the file systems on this instance and the bottom line here is the one that we've just created and mounted.

      At this point I'm just going to clear the screen to make it easier to see and what we're going to do is to move into this folder.

      So cd which is change directory space forward slash EBS test and then press enter and that will move you into that folder.

      Once we're in that folder we're going to create a test file.

      So we're going to use this command, sudo nano, which is a text editor, and we're going to call the file amazing test file dot txt.

      So type that command in and press enter and then go ahead and type a message.

      It can be anything you just need to recognize it as your own message.

      So I'm going to use cats are amazing and then some exclamation marks.

      Then I'm going to press control o and enter to save that file, and then control x to exit. Again, clear the screen to make it easier to see.

      And then I'm going to do an LS space hyphen LA and press enter just to list the contents of this folder.

      So as you can see we've now got this amazing test file dot txt.

      And if we cat the contents of this so cat amazing test file dot txt you'll see the unique message that you just typed in.
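
      As a recap, the commands used against this EBS volume look roughly like this; the device ID, folder and file names are the ones used in this demo and may differ in your environment.

      lsblk                                  # the 10 GiB volume shows as xvdf
      sudo file -s /dev/xvdf                 # "data" means there's no file system yet
      sudo mkfs -t xfs /dev/xvdf             # create an XFS file system
      sudo mkdir /ebstest                    # create a mount point
      sudo mount /dev/xvdf /ebstest          # mount the volume into that folder
      df -k                                  # confirm the mount
      cd /ebstest
      sudo nano amazingtestfile.txt          # create the test file and add a unique message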

      So at this point we've created this file within the folder and remember the folder is now the mount point for the file system that we created on this EBS volume.

      So the next step that I want you to do is to reboot this EC2 instance.

      To do that type sudo space and then reboot and press enter.

      Now this will disconnect you from this session.

      So you can go ahead and close down this tab and go back to the EC2 console.

      Just go ahead and click on instances.

      Okay, so this is the end of part one of this lesson.

      It was getting a little bit on the long side and so I wanted to add a break.

      It's an opportunity just to take a rest or grab a coffee.

      Part two will be continuing immediately from the end of part one.

      So go ahead complete the video and when you're ready join me in part two.

    1. Welcome back and in this demo lesson you're going to evolve the infrastructure which you've been using throughout this section of the course.

      In this demo lesson you're going to add private internet access capability using NAT gateways.

      So you're going to be applying a cloud formation template which creates this base infrastructure.

      It's going to be the animals for life VPC with infrastructure in each of three availability zones.

      So there's a database subnet, an application subnet and a web subnet in availability zone A, B and C.

      Now to this point what you've done is configured public subnet internet access and you've done that using an internet gateway together with routes on these public subnets.

      In this demo lesson you're going to add NAT gateways into each availability zone so A, B and C and this will allow this private EC2 instance to have access to the internet.

      Now you're going to be deploying NAT gateways into each availability zone so that each availability zone has its own isolated private subnet access to the internet.

      It means that if any of the availability zones fail then each of the others will continue operating because these route tables which are attached to the private subnets they point at the NAT gateway within that availability zone.

      So each availability zone A, B and C has its own corresponding NAT gateway which provides private internet access to all of the private subnets within that availability zone.

      Now in order to implement this infrastructure you're going to be applying a one-click deployment and that's going to create everything that you see on screen now apart from these NAT gateways and the route table configurations.

      So let's go ahead and move across to our AWS console and get started implementing this architecture.

      Okay so now we're at the AWS console as always just make sure that you're logged in to the general AWS account as the I am admin user and you'll need to have the Northern Virginia region selected.

      Now at the end of the previous demo lesson you should have deleted all of the infrastructure that you've created up until that point so the animals for live VPC as well as the Bastion host and the associated networking.

      So you should have a relatively clean AWS account.

      So what we're going to do first is use a one-click deployment to create the infrastructure that we'll need within this demo lesson.

      So attached to this demo lesson is a one-click deployment link so go ahead and open that link.

      That's going to take you to a quick create stack screen.

      Everything should be pre-populated. The stack name should be a4l. Just scroll down to the bottom, check this capabilities box and then click on create stack.

      Now this will start the creation process of this a4l stack and we will need this to be in a create complete state before we continue.

      So go ahead and pause the video, wait for your stack to change into create complete, and then we're good to continue.

      Okay, so now this stack's moved into a create complete state, we're good to continue.

      So what we need to do before we start is make sure that all of our infrastructure has finished provisioning.

      To do that just go ahead and click on the resources tab of this cloud formation stack and look for a4l internal test.

      This is a private EC2 instance, so it doesn't have any public internet connectivity, and we're going to use it to test NAT gateway functionality.

      So go ahead and click on this icon under physical ID and this is going to move you to the EC2 console and you'll be able to see this a4l - internal - test instance.

      Now currently in my case it's showing as running but the status check is showing as initializing.

      Now we'll need this instance to finish provisioning before we can continue with the demo.

      What should happen is that this status check should change from initializing to two out of two status checks passed, and once you're at that point you should be able to right-click, select connect, choose session manager and then have the option of connecting.

      Now you'll see that I don't because this instance hasn't finished its provisioning process.

      So what I want you to do is to go ahead and pause this video wait for your status checks to change to two out of two checks and then just go ahead and try to connect to this instance using session manager.

      Only resume the video once you've been able to click on connect under the session manager tab and don't worry if this takes a few more minutes after the instance finishes provisioning before you can connect to session manager.

      So go ahead and pause the video and when you can connect to the instance you're good to continue.

      Okay, so in my case it took about five minutes for this to change to two out of two checks passed, and then another five minutes before I could connect to this EC2 instance.

      So I can right-click on here and select connect.

      I'll have the option now of picking session manager and then I can click on connect and this will connect me in to this private EC2 instance.

      Now, the reason you're able to connect to this private instance is that we're using session manager, and I'll explain exactly how this product works elsewhere in the course. Essentially, it allows us to connect to an EC2 instance with no public internet connectivity, and it uses VPC interface endpoints to do that, which I'll also be explaining elsewhere in the course. What you should find when you're connected to this instance is that if you try to ping any internet IP address, so let's go ahead and type ping, then a space, then 1.1.1.1 and press Enter, you'll note that we don't have any public internet connectivity. That's because this instance doesn't have a public IPv4 address and it's not in a subnet whose route table points at the internet gateway.

      This EC2 instance has been deployed into the application subnet in availability zone A, which is a private subnet, and it also doesn't have a public IPv4 address.

      So at this point what we need to do is go ahead and deploy our NAT gateways, and these NAT gateways are what will provide this private EC2 instance with connectivity to the public IPv4 internet. So let's go ahead and do that.

      Now, to do that we need to be back at the main AWS console. Click in the services search box at the top, type VPC, and then right-click the result and open it in a new tab.

      Once you do that, go ahead and move to that tab. Once you're there, click on NAT gateways and then create NAT gateway.

      Okay, so once you're here you'll need to specify a few things: you'll need to give the NAT gateway a name, you'll need to pick a public subnet for the NAT gateway to go into, and then you'll need to give the NAT gateway an elastic IP address, which is an IP address which doesn't change.

      So first we'll set the name of the NAT gateway, and we'll choose to use a4l (for animals for life), then -vpc1, -natgw, and then -a, because this is going into availability zone A.

      Next we'll need to pick the public subnet that the NAT gateway will be going into, so click on the subnet dropdown and then select the web A subnet, which is the public subnet in availability zone A, so sn-web-A.

      Now we need to give this NAT gateway an elastic IP. It doesn't currently have one, so we need to click on allocate elastic IP, which gives it an allocation.

      Don't worry about the connectivity type we'll be covering that elsewhere in the course just scroll down to the bottom and create the NAT gateway.
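      As an aside, this step can also be scripted rather than clicked through. Below is a minimal sketch (not part of the demo itself) using Python and boto3 that allocates an elastic IP and creates the availability zone A NAT gateway. The subnet ID is a placeholder you would look up for the sn-web-A subnet created by the one-click deployment, and the region assumes Northern Virginia (us-east-1).

      ```python
      # Minimal sketch: create the AZ A NAT gateway with boto3.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Allocate an elastic IP for the NAT gateway (an address which doesn't change).
      eip = ec2.allocate_address(Domain="vpc")

      # Create the NAT gateway in the web-A public subnet and tag it with the same
      # naming scheme used in the demo. The subnet ID below is a placeholder.
      natgw = ec2.create_nat_gateway(
          SubnetId="subnet-EXAMPLE-WEB-A",  # placeholder for sn-web-A
          AllocationId=eip["AllocationId"],
          TagSpecifications=[{
              "ResourceType": "natgateway",
              "Tags": [{"Key": "Name", "Value": "a4l-vpc1-natgw-a"}],
          }],
      )

      # NAT gateways take a few minutes to move from pending to available.
      ec2.get_waiter("nat_gateway_available").wait(
          NatGatewayIds=[natgw["NatGateway"]["NatGatewayId"]]
      )
      ```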

      Now this process will take some time and so we need to go ahead and create the two other NAT gateways.

      So click on NAT gateways at the top and then we're going to create a second NAT gateway.

      So go ahead and click on create NAT gateway again. This time we'll call the NAT gateway a4l-vpc1-natgw-b, and this time we'll pick the web B subnet, so sn-web-B, allocate an elastic IP again and click on create NAT gateway. Then we'll follow the same process a third time: click create NAT gateway, use the same naming scheme but with -c, pick the web C subnet from the list, allocate an elastic IP and then scroll down and click on create NAT gateway.

      At this point we've got the three NAT gateways that are being created, and they're all in a pending state. If we go to elastic IPs we can see the three elastic IPs which have been allocated to the NAT gateways, and we can scroll to the right or left and see details on these IPs; if we wanted, we could release these IPs back to the account once we'd finished with them. Now, at this point you need to go ahead and pause the video and resume it once all three of those NAT gateways have moved away from the pending state. We need them to be in an available state, ready to go, before we can continue with this demo, so go ahead and pause and resume once all three have changed to an available state.

      Okay, so all of these are now in an available state, which means they're good to go and providing service. If you scroll to the right in this list you're able to see additional information about these NAT gateways: the elastic and private IP address, the VPC, and the subnet that each of these NAT gateways is located in.

      What we need to do now is configure the routing so that the private instances can communicate via the NAT gateways. So right-click on route tables and open it in a new tab; we need to create a new route table for each of the availability zones. Go ahead and click on create route table. First we need to pick the VPC for this route table, so click on the VPC dropdown and then select the animals for life VPC, so a4l-vpc1. Once selected, go ahead and name the route table; we're going to keep the naming scheme consistent, so a4l-vpc1-rt-private-A (RT standing for route table), so enter that and click on create. Then close that dialogue down and create another route table; this time we'll use the same naming scheme, but of course this time it will be rt-private-B. Select the animals for life VPC and click on create. Close that down and then finally click on create route table again, this time a4l-vpc1-rt-private-C; again click on the VPC dropdown, select the animals for life VPC, and then click on create. So that's going to leave us with three route tables, one for each availability zone.

      What we need to do now is create a default route within each of these route tables, and that route is going to point at the NAT gateway in the same availability zone. So select the route table rt-private-A and then click on the routes tab. Once you've selected the routes tab, click on edit routes and we're going to add a new route: it's going to be the IPv4 default route of 0.0.0.0/0. Then click on target, pick NAT gateway, and we're going to pick the NAT gateway in availability zone A; because we named them, it makes it easy to select the relevant one from this list, so go ahead and pick a4l-vpc1-natgw-a. Because this is the route table in availability zone A we need to pick the matching NAT gateway. So save that and close.
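      The routing step can be scripted in the same way. Here is a minimal boto3 sketch of what we just did for availability zone A: create a route table in the VPC and add an IPv4 default route pointing at that zone's NAT gateway. The VPC ID and NAT gateway ID are placeholders for the resources created earlier in the demo.

      ```python
      # Minimal sketch: AZ A private route table with a default route via the NAT gateway.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Create the route table in the animals-for-life VPC (placeholder ID).
      rt = ec2.create_route_table(
          VpcId="vpc-EXAMPLE-A4L",
          TagSpecifications=[{
              "ResourceType": "route-table",
              "Tags": [{"Key": "Name", "Value": "a4l-vpc1-rt-private-A"}],
          }],
      )

      # IPv4 default route (0.0.0.0/0) targeting the AZ A NAT gateway (placeholder ID).
      ec2.create_route(
          RouteTableId=rt["RouteTable"]["RouteTableId"],
          DestinationCidrBlock="0.0.0.0/0",
          NatGatewayId="nat-EXAMPLE-A",
      )
      ```

      The same calls, repeated with the B and C IDs, give you the other two route tables.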
      Now we'll do the same process for the route table in availability zone B. Make sure the routes tab is selected, click on edit routes, click on add route again, enter 0.0.0.0/0, and then for target pick NAT gateway and pick the NAT gateway that's in availability zone B, so natgw-b. Once you've done that, save the route table. Next, select the route table in availability zone C, so select rt-private-C, make sure the routes tab is selected and click on edit routes. Again we'll be adding a route; it will be the IPv4 default route, so 0.0.0.0/0, then select a target, go to NAT gateway and pick the NAT gateway in availability zone C, so natgw-c. Once you've done that, save the route table.

      Now our private EC2 instance should be able to ping 1.1.1.1 because we have the routing infrastructure in place. So let's move back to our private instance, and we can see that it's not actually working. The reason for this is that although we have created these routes, we haven't actually associated these route tables with any of the subnets. Subnets in a VPC which don't have an explicit route table association are associated with the main route table, so we need to explicitly associate each of these route tables with the subnets inside that same AZ.

      Let's go ahead and pick rt-private-A; we'll go through in order. Select it, click on the subnet associations tab, then edit subnet associations, and then you need to pick all of the private subnets in AZ A: that's the reserved subnet (reserved-A), the app subnet (app-A) and the DB subnet (db-A). All of these are the private subnets in availability zone A. Notice how all the public subnets are associated with the custom route table you created earlier, but the ones we're setting up now are still associated with the main route table; we're going to resolve that now by associating this route table with those subnets. So click on save, and this will associate all of the private subnets in AZ A with the AZ A route table.

      Now we're going to do the same process for AZ B and AZ C, and we'll start with AZ B. Select the private B route table, click on subnet associations, edit subnet associations, select application B, database B and then reserved B, and then scroll down and save the associations. Then select the private C route table, click on subnet associations, edit subnet associations, select reserved C, database C and then application C, and then scroll down and save those associations.

      Now that we've associated these route tables with the subnets, and now that we've added those default routes, if we go back to session manager, where we still have the connection open to the private EC2 instance, we should see that the ping has started to work. That's because we now have a NAT gateway providing service to each of the private subnets in all of the three availability zones.
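      Before we move on to cleanup, the explicit subnet associations we just made can also be expressed in code. This is a small boto3 sketch for availability zone A only; every ID is a placeholder for the rt-private-A route table and the reserved-A, app-A and db-A subnets.

      ```python
      # Minimal sketch: explicitly associate the AZ A private route table with the
      # three private subnets in that AZ. All IDs are placeholders.
      import boto3

      ec2 = boto3.client("ec2", region_name="us-east-1")

      private_subnets_az_a = [
          "subnet-EXAMPLE-RESERVED-A",  # sn-reserved-A
          "subnet-EXAMPLE-APP-A",       # sn-app-A
          "subnet-EXAMPLE-DB-A",        # sn-db-A
      ]

      for subnet_id in private_subnets_az_a:
          # Without an explicit association, a subnet falls back to the VPC's main route table.
          ec2.associate_route_table(
              RouteTableId="rtb-EXAMPLE-PRIVATE-A",
              SubnetId=subnet_id,
          )
      ```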
      Okay, so that's everything you needed to cover in this demo lesson, and now it's time to clean up the account and return it to the same state as it was in at the start of this demo lesson. From this point on within the course you're going to be using automation, so we can remove all of the configuration that we've done inside this demo lesson.

      The first thing we need to do is reverse the route table changes. Select the rt-private-A route table, go ahead and select subnet associations, edit the subnet associations, and then just uncheck all of these subnets; this will return them to being associated with the main route table. Scroll down and click on save. Do the same for rt-private-B, so deselect all of these associations and click on save, and then the same for rt-private-C: select it, go to subnet associations, edit them, remove all of these subnets and click on save.

      Next, select all of these private route tables, the ones that we created in this lesson, so select them all, click on the actions dropdown, then delete route table, and confirm by clicking delete route tables.

      Go to NAT gateways on the left, and we need to select each of the NAT gateways in turn. So select A, click on actions, delete NAT gateway, type delete and click delete. Then select B and do the same process: actions, delete NAT gateway, type delete, click delete. And finally the same for C: select the C NAT gateway, click on actions and delete NAT gateway; you'll need to type delete to confirm, then click on delete. Now we're going to need all of these to be in a fully deleted state before we can continue, so hit refresh and make sure that all three NAT gateways are deleted. If yours aren't deleted, if they're still listed in a deleting state, then go ahead and pause the video and resume once all of these have changed to deleted.

      At this point all of the NAT gateways have deleted, so you can go ahead and click on elastic IPs, and we need to release each of these IPs. Select one of them, then click on actions, release elastic IP addresses, and click release, and do the same process for the other two: click on release, then finally actions, release IP, click on release.

      Once that's done, move back to the cloud formation console, select the stack which was created by the one-click deployment at the start of the lesson, click on delete and then confirm that deletion. That will remove the cloud formation stack and any resources created as part of this demo, and at that point, once that finishes deleting, the account has been returned to the same state as it was in at the start of this demo lesson.

      So I hope this demo lesson has been useful. Just to reiterate what you've done: you've created three NAT gateways for a region-resilient design, you've created three route tables, one in each availability zone, added a default IPv4 route pointing at the corresponding NAT gateway, and associated each of those route tables with the private subnets in those availability zones. So you've implemented a regionally resilient NAT gateway architecture, and that's a great job; it's a pretty complex demo, but it's functionality that will be really useful if you're using AWS in the real world or if you have to answer any exam questions on NAT gateways. With that being said, at this point you have cleaned up the account and deleted all of the resources, so go ahead and complete this video, and when you're ready, I'll see you in the next lesson.

    1. And gropes his way, finding the stairs unlit . . .

      Interestingly, throughout this entire long stanza, the night seems to become darker as the actions become darker. First, we're just in the "violet hour", then time passes throughout the stanza, and it ends with "And gropes his way, finding the stairs unlit" (Eliot, 248), after Tiresias has raped a woman. The way light and darkness are used here draws a contrast with how they're used in Fragment 149 of a Sappho poem, where she refers to "Bringing everything that shining Dawn scattered, you bring the sheep, you bring the goat, you bring the child back to its mother" (Sappho). Here, darkness and nighttime are seen as things that bring people/animals together in a pleasurable way by reuniting them, whereas in this stanza Tiresias and a woman are brought together at night, but he rapes her, thereby correlating darkness and nighttime with darker actions in "The Waste Land".

    2. I sat upon the shore

      I am interested in the indentation of “I sat upon the shore” in a section that is otherwise left-justified. The visual effect of the standalone line in the left-aligned stanza mimics a cliff / shore situation where a fisher can lower their hook. The “shore” is not supported by any words in the line directly beneath it, just as the sand underneath the waves sift under compression, shapeshifting and fluid. Extending this literal interpretation, what we find on the other end of the line is the subterranean substance beneath the shore’s surface. Our last line, “Shantih shantih shantih” is also indented, although not as much as the first line is. This is a call and response that not only sandwiches the mixture of content (similar yet unidentical as the shifting sands) with visibly-identifiable structure but also clarifies the mission the narrator has set out to achieve: finding peace.

      Nautical imagery is not limited to semantics, however. Beyond fishing on shores and London Bridge collapsing (into the River Thames), the spacing of the closing line "Shantih shantih shantih" vaguely resembles the ebb and flow of waves departing shore. I am puzzled by the alignment of these last three words, and one justification I arrived at (haha, because it's not left-justified) is that each "shantih" corresponds to a moment in time, with the subject "I" in the first line denoting the present. The first, capitalized "shantih" is for the past - a violent amalgamation of tragedies spanning centuries, mythologies, and even languages, yet the narrator still possesses the burgeoning hope to pray for peace. The second is for the present - a conflicted narrative between "Fishing", present participle, and "have shored", present perfect tense, an active search for reconciliation. And the last is for the future - nebulous with a promise of revenge, for "Hieronymo's mad againe". What strikes me aside from Eliot's refusal to spell alluded character names correctly is the simultaneous looming and absence of destiny. A final prayer for peace suggests the future may need all the divine intervention it can get. The residual aggression from the Spanish Tragedy, which in TWL, is the universal tragedy, lingers in the falling infrastructure and human decay. Yet, the future is markedly absent throughout the stanza. The subject is positioned "with the arid plain behind me". This direction acknowledges the past and deems it infertile. But what lies ahead? What of the future? It is unwritten, unpunctuated, and utterly neglected.

      Combining the Tarot-reading interpretation of the poem’s end and my earlier theory of fishing (if you were to draw a line between the “e” of “shore” and the “h” of the last “shantih”, you see a fishing line attached to a hook. If you really squint), we realize Eliot propels the reader into The Waste Land, or rather, he brings the waste land to us. We are all on our own holy grail quests, fishing for peace.

    3. Who is the third who walks always beside you?

      The vibe I got out of this line is the creepy motif of a doppelgänger and the unsettling psychological implications of the “other” that haunts the speaker. Both Eliot’s speaker here and the main character of Dracula, Jonathan Harker, confront spectral presences that embody their deepest fears and anxieties, suggesting that this “third” figure represents more than just a physical entity. It’s a shadow self, a manifestation of repressed desires, fears, and the destabilization of identity.

      In Dracula, Dracula the character functions not only as a literal antagonist but also as a projection of the unconscious fears and desires of Harker. When he is trapped in Dracula’s castle, he begins to experience a split in his sense of self, feeling his identity destabilize under the influence of the Count. He states, “I am beginning to feel this nocturnal existence tell on me. It is destroying my nerve. I start at my own shadow, and am full of all sorts of horrible imaginings” (Stoker). This vampiric presence of Dracula is both external and internal—an embodiment of everything Harker represses within himself.

      Similarly, in TWL, the “third” walking beside the speaker is neither fully acknowledged nor understood. The ambiguity of the figure’s identity—“I do not know whether a man or a woman”—reflects the same psychological dissonance present in Harker’s experiences with Dracula. The third figure, like Dracula, is elusive, undefined, and haunting, representing a part of the self that remains unrecognized yet constantly lurks at the edge of consciousness.

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Reviewer #1 (Public review):

      Summary:

      Crosslinking mass spectrometry has become an important tool in structural biology, providing information about protein complex architecture, binding sites and interfaces, and conformational changes. One key challenge of this approach represents the quantitation of crosslinking data to interrogate differential binding states and distributions of conformational states.

      Here, Luo and Ranish present a novel class of isobaric crosslinkers ("Qlinkers"), conduct proof-of-concept benchmarking experiments on known protein complexes, and show example applications on selected target proteins. The data are solid and this could well be an exciting, convincing new approach in the field if the quantitation strategy is made more comprehensive and the quantitative power of isobaric labeling is fully leveraged as outlined below. It's a promising proof-of-concept, and potentially of broad interest for structural biologists.

      Strengths:

      The authors demonstrate the synthesis, application, and quantitation of their "Q2linkers", enabling relative quantitation of two conditions against each other. In benchmarking experiments, the Q2linkers provide accurate quantitation in mixing experiments. Then the authors show applications of Q2linkers on MBP, Calmodulin, selected transcription factors, and polymerase II, investigating protein binding, complex assembly, and conformational dynamics of the respective target proteins. For known interactions, their findings are in line with previous studies, and they show some interesting data for TFIIA/TBP/TFIIB complex formation and conformational changes in pol II upon Rpb4/7 binding.

      Weaknesses:

      This is an elegant approach but the power of isobaric mass tags is not fully leveraged in the current manuscript.

      First, "only" Q2linkers are used. This means only two conditions can be compared. Theoretically, higher-plexed Qlinkers should be accessible and would also be needed to make this a competitive method against other crosslinking quantitation strategies. As it is, two conditions can still be compared relatively easily using LFQ - or stable-isotope-labeling based approaches. A "Q5linker" would be a really useful crosslinker, which would open up comprehensive quantitative XLMS studies.

      We agree that a multiplexed Qlinker approach would be very useful. The multiplexed Qlinkers are more difficult and more expensive to synthesize. We are currently working on different schemes for synthesizing multiplexed Qlinkers.

      Second, the true power of isobaric labeling, accurate quantitation across multiple samples in a single run, is not fully exploited here. The authors only show differential trends for their interaction partners or different conformational states and do not make full quantitative use of their data or conduct statistical analyses. This should be investigated in more detail, e.g. examine Qlinker quantitation of MBP incubated with different concentrations of maltose or Calmodulin incubated with different concentrations of CBPs. Does Qlinker quantitation match ratios predicted using known binding constants or conformational state populations? Is it possible to extract ratios of protein populations in different conformations, assembly, or ligand-bound states?

      With these two points addressed this approach could be an important and convincing tool for structural biologists.

      We agree that multiplexed Qlinkers would open the door to exciting avenues of investigation such as studying conformational state populations.  We plan to conduct the suggested experiments when multiplexed Qlinkers are available.

      Reviewer #2 (Public review):

      The regulation of protein function heavily relies on the dynamic changes in the shape and structure of proteins and their complexes. These changes are widespread and crucial. However, examining such alterations presents significant challenges, particularly when dealing with large protein complexes in conditions that mimic the natural cellular environment. Therefore, much emphasis has been put on developing novel methods to study protein structure, interactions, and dynamics. Crosslinking mass spectrometry (CSMS) has established itself as such a prominent tool in recent years. However, doing this in a quantitative manner to compare structural changes between conditions has proven to be challenging due to several technical difficulties during sample preparation. Luo and Ranish introduce a novel set of isobaric labeling reagents, called Qlinkers, to allow for a more straightforward and reliable way to detect structural changes between conditions by quantitative CSMS (qCSMS).

      The authors do an excellent job describing the design choices of the isobaric crosslinkers and how they have been optimized to allow for efficient intra- and inter-protein crosslinking to provide relevant structural information. Next, they do a series of experiments to provide compelling evidence that the Qlinker strategy is well suited to detect structural changes between conditions by qCSMS. First, they confirm the quantitative power of the novel-developed isobaric crosslinkers by a controlled mixing experiment. Then they show that they can indeed recover known structural changes in a set of purified proteins (complexes) - starting with single subunit proteins up to a very large 0.5 MDa multi-subunit protein complex - the polII complex.

      The authors give a very measured and fair assessment of this novel isobaric crosslinker and its potential power to contribute to the study of protein structure changes. They show that indeed their novel strategy picks up expected structural changes, changes in surface exposure of certain protein domains, changes within a single protein subunit but also changes in protein-protein interactions. However, they also point out that not all expected dynamic changes are captured and that there is still considerable room for improvement (many not limited to this crosslinker specifically but many crosslinkers used for CSMS).

      Taken together the study presents a novel set of isobaric crosslinkers that indeed open up the opportunity to provide better qCSMS data, which will enable researchers to study dynamic changes in the shape and structure of proteins and their complexes. However, in its current form, some aspects of the study should be expanded upon in order for the research community to assess the true power of these isobaric crosslinkers. Specifically:

      Although the authors do mention some of the current weaknesses of their isobaric crosslinkers and qCSMS in general, more detail would be extremely helpful. Throughout the article a few key numbers (or even discussions) that would allow one to better evaluate the sensitivity (and the applicability) of the method are missing. This includes:

      (1) Throughout all the performed experiments it would be helpful to provide information on how many peptides are identified per experiment and how many actually have a crosslinker attached.

      As the goal of the experiments is to maximize identification of crosslinked peptides which tend to have higher charge states, we targeted ions with charge states of 3+ or higher in our MS acquisition settings for CLMS, and ignored ions with 2+ charge states, which correspond to many of the normal (i.e., not crosslinked) peptides that are identified by MS. As a result, normal peptides are less likely to be identified by the MS procedure used in our CLMS experiments compared to MS settings typically used to identify normal peptides. Our settings may also fail to identify some mono-modified peptides. Like most other CLMS methods, the total number of identified crosslinked peptide spectra is usually less than 1% of the total acquired spectra and we normally expect the crosslinked species to be approximately 1% of the total peptides. 

      We added information about the number of crosslinked and monolinked peptides identified in the pol I benchmarking experiments (line 173). The number of crosslinks and monolinks identified in the pol II +/- α-amanitin experiment, the TBP/TFIIA/TFIIB experiment and the pol II experiment +/- Rpb4/7 are also provided.

      (2) Of all the potential lysines that can be modified - how many are actually modified? Do the authors have an estimate for that? It would be interesting to evaluate in a denatured sample the modification efficiency of the isobaric crosslinker (as an upper limit as here all lysines should be accessible) and then also in a native sample. For example, in the MBP experiment, the authors report the change of one mono-linked peptide in samples containing maltose relative to the one not containing maltose. The authors then give a great description of why this fits with known structural changes. What is missing here is a bit of what changes were expected overall and which ones the authors would have expected to pick up with their method and why they have not been picked up. For example, were they picked up as modified by the crosslinker but not differential? I think this is important to discuss appropriately throughout the manuscript to help the reader evaluate/estimate the potential sensitivity of the method. There are passages where the authors do an excellent job doing that - for example when they mention the missed site that they expected to see in the initial pol II experiments (lines 191 to 207). This kind of "power analysis" should be heavily discussed throughout the manuscript so that the reader is better informed of what sensitivity can be expected from applying this method.

      Regarding the Pol II complex experiment described in Figures 4 and 5, out of the 277 lysine residues in the complex, 207 were identified as monolinked residues (74.7%), and 817 crosslinked pairs out of 38,226 potential pairs (2.1%) were observed. The ability of CLMS to detect proximity/reactivity changes may be impacted by several factors including 1) the (low) abundance of crosslinked peptides in complex mixtures, 2) the presence of crosslinkable residues in close proximity with appropriate orientation, and 3) the ability to generate crosslinked peptides by enzymatic digestion that are amenable to MS analysis (i.e., the peptides have appropriate m/z’s and charge states, the peptides ionize well, the peptides produce sufficient fragment ions during MS2 analysis to allow confident identification). Future efforts to enrich crosslinked peptides prior to MS analysis may improve sensitivity.

      It is very difficult to estimate the modification efficiency of Qlinker (or many other crosslinkers) based on peptide identification results. One major reason for this is that trypsin is not able to cleave after a crosslinker-modified lysine residue.  As a result, the peptides generated after the modification reaction have different lengths, compositions, charge states, and ionization efficiencies compared to unmodified peptides. These differences make it very difficult to estimate the modification efficiencies based on the presence/absence of certain peptide ions, and/or the intensities of the modified and unmodified versions of a peptide. Also, 2+ ions which correspond to many normal (i.e., unmodified) peptides were excluded by our MS acquisition settings.

      It is also very difficult to predict which structural changes are expected and which crosslinked peptides and/or modified peptides can be observed by MS.  This is especially true when the experiment involves proteins containing unstructured regions such as the experiments involving Pol II, and TBP, TFIIA and TFIIB. Since we are at the early stages of using qCLMS to study structural changes, we are not sure which changes we can expect to observe by qCLMS. Additional applications of Qlinker-CLMS are needed to better understand the types of structural changes that can be studied using the approach.

      We hope that our discussions of some of the limitations of CLMS for detecting conformational/reactivity changes provide the reader with an understanding of the sensitivity that can be expected with the approach. At the end of the paragraph about the pol II α-amanitin experiment we say, “Unfortunately, no Q2linker-modified peptides were identified near the site where α-amanitin binds. This experiment also highlights one of the limitations of residue-specific, quantitative CLMS methods in general. Reactive residues must be available near the region of interest, and the modified peptides must be identifiable by mass spectrometry.” In the section about Rpb4/7-induced structural changes in pol II we describe the under-sampling issue. And in the last paragraph we reiterate these limitations and say, “This implies that this strategy, like all MS-based strategies, can only be used for interpretation of positively identified crosslinks or monolinks. Sensitivity and under sampling are common problems for MS analysis of complex samples.”

      (3) It would be very helpful to provide information on how much better (or not) the Qlinker approach works relative to label-free qCLMS. One is missing the reference to a potential qCLMS gold standard (data set) or if such a dataset is not readily available, maybe one of the experiments could be performed by label-free qCLMS. For example, one of the differential biosensor experiments would have been well suited.

      We agree with the reviewer that it will be very helpful to establish gold standard datasets for CLMS. As we further develop and promote this technology, we will try to establish a standardized qCLMS.

      Reviewer #1 (Recommendations for the authors):

      Only a very minor point:

      I may have missed it but it's not really clear how many independent experiments were used for the benchmarking quantitation and mixing experiments for Figure 1. What is the reproducibility across experiments on average and on a per-peptide basis?

      Otherwise, I think the approach would really benefit from at least "Q5linkers" or even "Q10linkers", if possible. And then conduct detailed quantitative studies, either using dilution series or maybe investigating the kinetics of complex formation.

      We used a sample of BSA crosslinked peptides to optimize the MS settings, establish the MS acquisition strategies and test the quantification schemes. The data in Figure 1 is based on one experiment, in which we used ~150 µg of purified pol I complexes from a 6 L culture. We added this information to the Figure 1 legend. We also provide information about the reproducibility of peptide quantification by plotting the observed and expected ratios for each monolinked and crosslinked peptide identified in all of the runs in Figure S3.

      We agree with the reviewer that the Qlinker approach would be even more attractive if multiplex Qlinker reagents were designed. The multiplexed Qlinkers are more difficult and more expensive to synthesize. We are currently working on different schemes for synthesizing multiplexed Qlinkers.

      Reviewer #2 (Recommendations for the authors):

      In addition to the public review I have the following recommendations/questions:

      (1) The first part of the results section where the synthesis of the crosslinker is explained is excellent for mass spec specialists, but problematic for general readers - either more info should be provided (e.g. b1+ ions - most readers will have no idea why that is) - or potentially it could be simplified here and the details shifted to Materials and Methods for the expert reader. The same is true below for the length of spacer arms.

      However - in general this level of detail is great - but can impact the ease of understanding for the more mass spec affine but not expert reader.

      We have added the following sentence to assist the general reader: A b1+ ion is an ion with a charge state of +1 corresponding to the first N-terminal amino acid residue after breakage of the first peptide bond (lines 126-128).

      (2) The Calmodulin experiment (lines 239 to 257) - it is a very nice result that they see the change in the crosslinked peptide between residues K78-K95, but the monolinks are not just detected as described in the text but actually go up 2-fold. This would actually have been somewhat expected: if the residues are now too far apart to still be crosslinked, the monolinks increase. In this case, this counteraction of monolinks to crosslinked sites can also potentially be used as a "selection criterion" for interesting sites that change. Is that a possible interpretation or do the authors think that upregulation of the monolinks is a coincidence and should not be interpreted?

      We agree with the reviewer that both monolinks and crosslinks can be used as potential indicators for some changes. However, it is much more difficult to interpret the abundance information from monolinks because, unlike crosslinks, there is little associated structural/proximity information with monolinks. Because it is difficult to understand the reason(s) for changes in monolink abundance, we concentrate on changes in crosslink abundances, which provide proximity/structural information about the crosslinked residues.

      (3) Lines 267 to 274: a small thing but the structural information provided is quite dense I have to say. Maybe simplify or accompany with some supplemental figures?

      We agree that the structural information is a bit dense especially for readers who are not familiar with the pol II system.  We added a reference to Figure 3c (line 177) to help the reader follow the structural information. 

      As qCLMS is still a relatively new approach for studying conformational changes, the utility of the approach for studying different types of conformational changes is still unclear. Thus, one of the goals of the experiments is to demonstrate the types of conformational changes that can be detected by Q2linkers.  We hope that the detailed descriptions will help structural biologists understand the types of conformational changes that can be detected using Qlinkers.

      (4) Line 280: explain maybe why the sample was fractionated by SCX (I guess to separate the different complexes?).

      SCX was used to reduce the complexity of the peptide mixtures. As the samples are complex and crosslinked peptides are of low abundance compared to normal peptides, SCX can separate the peptides based on their positive charges.  Larger peptides and peptides with higher charge states, such as crosslinked peptides, tend to elute at higher salt concentration during SCX chromatography.  The use of SCX to fractionate complex peptide mixtures is described in the “General crosslinking protocol and workflow optimization” section of the Methods, and we added a sentence to explain why the sample was fractionated by SCX (lines 278-279).

      (5) Lines 354 to 357: "This suggests that the inability to identity most of these crosslinked peptides in both experiments is mainly due to under-sampling during mass spectrometry analysis of the complex samples, rather than the absence of the crosslinked peptides in one of the experiments."

      This is an extremely important point for the interpretation of missing values - have the authors tried to also collect the mass spec data with DIA which is better in recovery of the same peptide signals between different samples? I realize that these are isobaric samples so DIA measurements per se are not useful as the quantification is done on the reporter channels in the MS2, but it would at least give a better idea if the missing signals were simply not picked up for MS2 as claimed by the authors or the modified peptides are just not present. Another possibility is for the authors to at least try to use a "match between the run" function as can be done in Maxquant. One of the strengths of the method is that it is quantitative and two states are analyzed together, but as can be seen in this experiment, more than two states might want to be compared. In such cases, the under-sampling issue (if that is indeed the cause) makes interpretation of many sites hard (due to missing values) and it would be interesting if for example, an analysis approach with a "match between the runs" function could recover some of the missing values.

      We agree that undersampling/missing values is an important issue that needs to be addressed more thoroughly. This also highlights the importance of qCLMS, as conclusions about structural changes based on the presence/absence of certain crosslinked species in database search results may be misleading if the absence of a species is due to under-sampling. We have not tried to collect the data with DIA since we would lose the quantitative information. It would be interesting to see if match between runs can recover some of the missing values. While this could provide evidence to support the under-sampling hypothesis, it would not recover the quantitative information.

      We recommend performing label swap experiments and focusing downstream analysis on the crosslinks/monolinks that are identified on both experiments. Future development of multiplexed Qlinker reagents should help to alleviate under-sampling issues. See response to Reviewer #1.

      (6) Lines 375 to 393 (the whole paragraph): extremely detailed and not easy to follow. Is that level of detail necessary to drive home that point or could it be visualized in enough detail to help follow the text?

      We agree that the paragraph is quite detailed, but we feel that the level of detail is necessary to describe the types of conformational changes that can be detected by the quantitative crosslinking data, and also to illustrate the challenges of interpreting the structural basis for some crosslink abundance changes even when high-resolution structural data exists.

      To make it easier to follow, we added a sentence to the legend of Figure 5b. “In the holo-pol II structure (right), Switch 5 bending pulls Rpb1:D1442 away from K15, breaking the salt bridge that is formed in the core pol II structure (left). The increase in the abundances of the Rpb1:15-Rpb6:76 and Rpb1:15-Rpb6:72 crosslinks in holo-pol II is likely attributed to the salt bridge between K15 and D1442 in core pol II which impedes the NHS ester-based reaction between the epsilon amino group of K15 and the crosslinker.”

      (7) Final paragraph in the results section - lines 397 and 398: "All of the intralinks involving Rpb4 are more abundant in holo-pol II as expected." If I understand that experiment correctly the intralinks with Rpb4 should not be present at all as Rpb4 has been deleted. Is that due to interference between the 126 and 127 channels in MS2? If so, then this also sets a bit of the upper limit of quantitative differences that can be seen. The authors should at least comment on that "limitation".

      Yes, we shouldn’t detect any Rpb4 peptides in the sample derived from the Rpb4 knockout strain. The signal from Rpb4 peptides in the ∆Rpb4 sample is likely due to co-eluting ions. To clarify, we changed the text to:

      All of the intralinks involving Rpb4 are more abundant in the holo-pol II sample (even though we don’t expect any reporter ion signal from Rpb4 peptides derived from the ∆Rpb4 pol II sample, we still observed reporter ion signals from the channel corresponding to the ∆Rpb4 sample, potentially due to the presence of low abundance, co-eluting ions) (lines 395-399).

      (8) Materials and Methods - line 690: I am probably missing something but why were two different mass additions to lysine added to the search (I would have expected only one for the crosslinker)?

      The 297 Da modification is for monolinked peptides with one end of the crosslinker hydrolyzed, where an 18 Da water molecule is added. The 279 Da modification is for crosslinks and sometimes for looplinks (crosslinks involving two lysine residues on the same tryptic peptide).

    1. One criticism that applies equally to both proposals is that the semantics of the code now depends on the presence or absence of type information – in particular type annotations, while OCaml programmers are used to consider that they are useful for clarity and debugging purposes only.

      Violation of the gradual guarantee. I'm coming around to this being fine but I think that it needs to be declared loudly in a languages design description and possibly alternative syntax for type declarations should be used so it's more obvious it's not just for clarity.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      PPARgamma is a nuclear receptor that binds to orthosteric ligands to coordinate transcriptional programs that are critical for adipocyte biogenesis and insulin sensitivity. Consequently, it is a critical therapeutic target for many diseases, but especially diabetes. The malleable nature and promiscuity of the PPARgamma orthosteric ligand binding pocket have confounded the development of improved therapeutic modulators. Covalent inhibitors have been developed but they show unanticipated mechanisms of action depending on which orthosteric ligands are present. In this work, Shang and Kojetin present a compelling and comprehensive structural, biochemical, and biophysical analysis that shows how covalent and noncovalent ligands can co-occupy the PPARgamma ligand binding pocket to elicit distinctive preferences of coactivator and corepressor proteins. Importantly, this work shows how the covalent inhibitors GW9662 and T0070907 may be unreliable tools as pan-PPARgamma inhibitors despite their widespread use.

      Strengths:

      - Highly detailed structure and functional analyses provide a comprehensive structure-based hypothesis for the relationship between PPARgamma ligand binding domain co-occupancy and allosteric mechanisms of action.

      - Multiple orthogonal approaches are used to provide high-resolution information on ligand binding poses and protein dynamics.

      - The large number of x-ray crystal structures solved for this manuscript should be applauded along with their rigorous validation and interpretation.

      Weaknesses

      - Inclusion of statistical analysis is missing in several places in the text.

      - Functional analysis beyond coregulator binding is needed.

      We added additional statistical analyses as recommended (Source Data 1, a Microsoft Excel spreadsheet).

      Related to functional analysis, we cite studies from our previous publication (Hughes et al. Nature Communications 2014 5:3571) where we demonstrated that the covalent inhibitor ligands (GW9662 and T0070907) do not block the activity of other ligands using a PPARγ transcriptional reporter assay and gene expression analysis in 3T3-L1 preadipocytes. Our study here expands on this finding and other published studies showing the structural mechanism for the lack of blocking activity by the covalent inhibitors.

      Reviewer #2 (Public Review):

      Summary:

      The flexibility of the ligand binding domain (LBD) of NRs allows various modes of ligand binding leading to various cellular outcomes. In the case of PPARγ, it's known that two ligands can co-bind to the receptor. However, whether a covalent inhibitor functions by blocking the binding of a non-covalent ligand, or co-binds in a manner that weakens the binding of a non-covalent ligand, remains unclear. In this study, the authors first used TR-FRET and NMR to demonstrate that covalent inhibitors (such as GW9662 and T0070907) weaken but do not prevent non-covalent synthetic ligands from binding, likely via an allosteric mechanism. The AF-2 helix can exchange between active and repressive conformations, and covalent inhibitors shift the conformation toward a transcriptionally repressive one to reduce the orthosteric binding of the non-covalent ligands. By co-crystal studies, the authors further reveal the structural details of various non-covalent ligand binding mechanisms in a ligand-specific manner (e.g., an alternate binding site, or a new orthosteric binding mode by altering covalent ligand binding pose).

      Strengths:

      The biochemical and biophysical evidence presented is strong and convincing.

      Weaknesses:

      However, the co-crystal studies were performed by soaking non-covalent ligands to LBD pre-crystalized with a covalent inhibitor. Since the covalent inhibitors would shift the LBD toward transcriptionally repressive conformation which reduces orthosteric binding of non-covalent ligands, if the sequence was reversed (i.e., soaking a covalent inhibitor to LBD pre-crystalized with a non-covalent ligand), would a similar conclusion be drawn? Additional discussion will broaden the implications of the conclusion.

      This is an interesting point, which we now expand upon in a new (third) paragraph of the discussion in our revised manuscript:

      “In our previous study, we observed synthetic and natural/endogenous ligand co-binding via co-crystallography where preformed crystals of PPARγ LBD bound to unsaturated fatty acids (UFAs) were soaked with a synthetic ligand, which pushed the bound UFA to an alternate site within the orthosteric ligand-binding pocket 8. In the scenario of synthetic ligand cobinding with a covalent inhibitor, it is possible that soaking a covalent inhibitor into preformed crystals where the PPARγ LBD is already bound to a non-covalent ligand may prove to be difficult. The covalent inhibitor would need to flow through solvent channels within the crystal lattice, which may not be a problem. However, upon reaching the entrance surface to the orthosteric ligand-binding pocket, it may be difficult for the covalent inhibitor to gain access to the region of the orthosteric pocket required for covalent modification as the larger non-covalent ligand could block access. This potential order of addition problem may not be a problem for studies in solution or in cells, where the non-covalent ligand can more freely exchange in and out of the orthosteric pocket and over time the covalent reaction would reach full occupancy.”

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      - IC50 or EC50 values are not reported for the coregulator interaction assays; R2 for the fit should also be reported where Ki and IC50 values are disclosed.

      We now report fitting statistics and IC50/EC50 values when possible in Figure 2B and Source Data 1 along with R2 values for the fit. We note that some data do not show complete or robust enough binding curves to faithfully fit to a dose response equation.

      -  Reporter gene or qPCR should be performed for the combinations of covalent and noncovalent ligands to show how these molecules impact transcriptional activities rather than just coregulator binding profiles.

      We previously performed a PPARγ transcriptional reporter assay and gene expression analysis in 3T3-L1 preadipocytes to demonstrate that cotreatment of a covalent inhibitor (GW9662 or T0070907) with a non-covalent ligand does not block activity of the non-covalent ligand and showed cobinding-induced activation relative to DMSO control (Hughes et al., 2014, Nature Communications). We did not specifically mention this in our original manuscript, but we now call this out in the first paragraph of the results section.

      - Inclusion of a structure figure to show the different helix 12 orientations should be included in the introduction. Likewise, how the overall structure of the LBD changes as a result of the cobinding in the discussion or a summary model would be helpful.

      Our revised manuscript includes a structure figure called out in the introduction describing the active and repressive helix 12 PPARγ LBD conformations (new Figure 1). There are no major changes to the overall structure of the LBD compared to the active conformation that crystallized, so we did not include a summary model figure but we do refer readers to our previous paper (Shang and Kojetin, Structure 2021 29(9):940-950) in the penultimate paragraph of the discussion. We also added the following sentence to the crystallography results section related to the overall LBD changes:

      “The structures show high structural similarity to the transcriptionally active LBD conformation with rmsd values ranging from 0.77–1.03Å (Supplementary Table S2)”

      A typo in paragraph 3 of the discussion says "long-live" when it should probably say "long-lived."

      We corrected this typo.

      Reviewer #2 (Recommendations For The Authors):

      It's interesting that ligand-specific binding mode of non-covalent ligands was observed. Would modifications of the chemical structure of a covalent inhibitor alter the allosteric binding behavior of non-covalent ligands in a predictive manner? If so, how can such SAR be used to guide the design of covalent inhibitors to more broadly and effectively inhibit agonists of various chemical structures? Discussion on this topic could be valuable.

      This is an interesting point, which we now discuss in the penultimate and last paragraphs of the discussion:

      “Another way to test this structural model could be through the use of covalent PPARγ inverse agonist analogs with graded activity 23, where one might posit that covalent inverse agonist analogs that shift the LBD conformational ensemble towards a fully repressive LBD conformation may better inhibit synthetic ligand cobinding.”

      “It may be possible to use the crystal structures we obtained to guide structure-informed design of covalent inhibitors that would physically block cobinding of a synthetic ligand. This could be the potential mechanism of a newer generation covalent antagonist inhibitor we developed, SR16832, which more completely inhibits alternate site ligand binding of an analog of MRL20, rosiglitazone, and the UFA docosahexaenoic acid (DHA) 21, and thus may be a better choice for the field to use as a covalent ligand inhibitor of PPARγ.”

    1. Reporter John Dickerson talking about his notebook.

      While he doesn't mention it, he's capturing the spirit of the commonplace book and the zettelkasten.

      [...] I see my job as basically helping people see and to grab ahold of what's going on.

      You can decide to do that the minute you sit down to start writing or you can just do it all the time. And by the time you get to writing you have a notebook full of stuff that can be used.

      And it's not just about the thing you're writing about at that moment or the question you're going to ask that has to do with that week's event on Face the Nation on Sunday.

      If you've been collecting all week long and wondering why a thing happens or making an observation about something and using that as a piece of color to explain the political process to somebody, then you've been doing your work before you ever sat down to do your work.

      https://player.vimeo.com/video/169725470

      Field Notes: Reporter's Notebook from Coudal Partners on Vimeo.

    1. In addition, a U.S. animation company made a cartoon (Mr. Wong) and placed at its center an extreme caricature of a Chinese “hunchbacked, yellow-skinned, squinty-eyed character who spoke with a thick accent and starred in an interactive music video titled Saturday Night Yellow Fever.”24 Again Asian American and other civil rights groups protested this anti-Asian mocking, but many whites and a few Asian Americans inside and outside the entertainment industry defended such racist cartoons as “only good humor.” Similarly, the makers of a puppet movie, Team America: World Police, portrayed a Korean political leader speaking gibberish in a mock Asian accent. One Asian American commentator noted the movie was “an hour and a half of racial mockery with an ‘if you are offended, you obviously can’t take a joke’ tacked on at the end.”25 Moreover, in an episode of the popular television series Desperate Housewives a main character, played by actor Teri Hatcher, visits a physician for a medical checkup. Shocked that the doctor suggests she may be going through menopause, she replies, “Okay, before we go any further, can I check these diplomas? Just to make sure they aren’t, like, from some med school in the Philippines.” This racialized stereotyping was protested by many in the Asian and Pacific Islander communities

      It really shows how harmful stereotypes about Asian Americans are still everywhere in media. Cartoons like "Mr. Wong" feature ridiculous, over-the-top characters that just feed into negative views, and some people think it’s just a joke, which is super frustrating. Movies like "Team America: World Police" do the same thing, piling on racial mockery and telling anyone who’s offended to lighten up. Even shows like "Desperate Housewives" join in with lines that reinforce stereotypes, like questioning a doctor’s background just because of where they’re from. It’s disappointing that this kind of stuff is still considered okay in mainstream media, and it’s awesome to see Asian and Pacific Islander communities standing up against it.

    1. For an inexpensive starter machine ($5-25) that's easy to find, easy to get parts for and has a reasonable chance of working when in "unknown" or "untested" condition, I'd recommend one of the following ubiquitous, but solid machines which show up almost daily on ShopGoodwill.com:

      They'd all make excellent starter machines for a younger kid. The black models with glass keys from the 1940s will look a bit more old school/classic while the more industrial browns and grays with plastic keys from the 1950s are still solid choices. You might also find some later 60s/70s versions of these machines (or variations), and while they may be a bit more colorful, they'll usually have a lot more cheap plastic and can potentially have cheaper builds. (My parents got me my first typewriter, a 1948 Smith-Corona Clipper, in the mid-1980s when I was 10—I have it today and it still works as well as it did then; I still also love the airplane on the hood.)

      If you want something simple with a bit of color, you can also look at the 70s/80s Brother Charger 11, which is pretty ubiquitous and inexpensive as well.

      Since you have some time, you can wait for one in better-looking cosmetic condition (and with a case), which usually means it was better taken care of, is less likely to need aggressive cleaning, and is more likely to work without needing any repairs. You can also wait to find one local that you can pick up in person (to save shipping cost and/or potential damage) or that will be cheaper to ship from nearby.

      If you don't have any experience, you might try looking at Just My Typewriter's Typewriter 101 series on YouTube: https://www.youtube.com/playlist?list=PLJtHauPh529XYHI5QNj5w9PUdi89pOXsS She covers most of the basics there.

      Cleaning a machine isn't horribly difficult and can be done pretty cheaply ($20 or less for some paint thinner/isopropyl and a small toothbrush), but if you get a machine that needs some repair work, try https://site.xavier.edu/polt/typewriters/tw-repair.html.

      If you're in an area with lots of yard sales, try shopping around and see if you find something interesting. It's at these that you'll have a chance of finding more collectible machines for pennies on the dollar, and you'll also be able to put your hands on machines to test them out and make sure they work.

      Good luck! 🎄


      reply to u/strawberystegosaurus at https://old.reddit.com/r/typewriters/comments/1g5rgi4/typewriter_for_christmas_please_help/

    1. Decades of research indicate that people around the world express emotions—particularly primary emotions such as happiness, sadness, fear, anger, surprise, and disgust—in highly similar ways

      This is very true! Especially when visiting big cities, it’s easy to feel disgust at the homelessness and drug use, and it’s hard not to make faces or want to help. In bigger cities it’s more common to hide your facial expressions and just look at the ground rather than take a more blunt approach.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Reviewer #1 (Public Review):

      Summary: The authors investigated the function of Microrchidia (MORC) proteins in the human malaria parasite Plasmodium falciparum. Recognizing MORC's implication in DNA compaction and gene silencing across diverse species, the study aimed to explore the influence of PfMORC on transcriptional regulation, life cycle progression and survival of the malaria parasite. Depletion of PfMORC leads to the collapse of heterochromatin and thus to the killing of the parasite. The potential regulatory role of PfMORC in the survival of the parasite suggests that it may be central to the development of new antimalarial strategies.

      Strengths: The application of the cutting-edge CRISPR/Cas9 genome editing tool, combined with other molecular and genomic approaches, provides a robust methodology. Comprehensive ChIP-seq experiments indicate PfMORC's interaction with sub-telomeric areas and genes tied to antigenic variation, suggesting its pivotal role in stage transition. The incorporation of Hi-C studies is noteworthy, enabling the visualization of changes in chromatin conformation in response to PfMORC knockdown.

      We greatly appreciate the overall positive feedback and cognisance of our efforts. Our application of CRISPR/Cas9 genome editing tools coupled with complementary cellular and functional approaches sheds light on the importance of _Pf_MORC in maintaining chromatin structural integrity in the parasite and highlights this protein as a promising target for novel therapeutic intervention.

      Weaknesses: Although disruption of PfMORC affects chromatin architecture and stage-specific gene expression, determining a direct cause-effect relationship requires further investigation.

      Our conclusions were made on the basis of multiple, unbiased molecular and functional genomic assays that point to the relevance of the _Pf_MORC protein in maintaining the parasite’s chromatin landscape. Although we do not claim to have precise evidence on the step-by-step pathway in which _Pf_MORC is involved, we bring forth first-hand evidence of its role in heterochromatin binding and gene regulation, and of its association with major TFs as well as chromatin remodeling and modifying enzymes. We however agree with the comment regarding the lack of direct effects of _Pf_MORC KD and have since provided additional evidence by performing ChIP-seq experiments against H3K9me3 and H3K9ac during KD. Our new results are presented in Fig. 5. We showed that the level of H3K9me3 decreased significantly during _Pf_MORC KD.

      Furthermore, while numerous interacting partners have been identified, their validation is critical and understanding their role in directing MORC to its targets or in influencing the chromatin compaction activities of MORC is essential for further clarification. In addition, the authors should adjust their conclusions in the manuscript to more accurately represent the multifaceted functions of MORC in the parasite.

      Validation of the identified interacting partners is indeed critical and essential to understanding their role in directing MORC to its targets. Our protein pull-down experiments were done using several biological replicates. Several of the interacting partners have also been identified and published by other labs and collaborators. To confirm our results, we completed a direct comparison of our work with previously published work. Results have now been incorporated into the revised manuscript to confirm the identified interacting partners and the accuracy of the data we obtained in our experiment. Molecular validation of novel proteins identified in our protein pull-down requires generation of tagged lines and may take a few more years, but will be submitted for publication in a follow-up manuscript.

      Reviewer #2 (Public Review):

      Summary: This paper, titled "Regulation of Chromatin Accessibility and Transcriptional Repression by PfMORC Protein in Plasmodium falciparum," delves into the PfMORC protein's role during the intra-erythrocytic cycle of the malaria parasite, P. falciparum. Le Roch et al. examined PfMORC's interactions with proteins, its genomic distribution in different parasite life stages (rings, trophozoites, schizonts), and the transcriptome's response to PfMORC depletion. They conducted a chromatin conformation capture on PfMORC-depleted parasites and observed significant alterations. Furthermore, they demonstrated that PfMORC depletion is lethal to the parasite.

      Strengths: This study significantly advances our understanding of PfMORC's role in establishing heterochromatin. The direct consequences of the PfMORC depletion are addressed using chromatin conformation capture.

      We appreciate the Reviewer’s comments and reflection on the importance of our work.

      Weaknesses: The study only partially addressed the direct effects of PfMORC depletion on other heterochromatin markers.

      Here again, we agree with the reviewer’s comment and have performed additional experiments to delve deeper into the multifaceted roles of _Pf_MORC. We have performed additional ChIP-sequencing analysis on _Pf_MORC depleted conditions focusing on known heterochromatin and euchromatin markers H3K9me3 and H3K9ac respectively. We hope our new results presented in figure 5 will shed light on the more direct implications of _Pf_MORC on heterochromatin and gene silencing.

      Reviewer #1 (Recommendations For The Authors):

      Suggestions for improved or additional experiments, data or analyses.

      • Why does MORC, which was used in the pull-down, seem to be only minimally enriched in the volcano plot, while a series of proteins (marked in red) and AP2 (highlighted in green) are enriched with log2 fold changes exceeding 15?

      We apologize for the confusion. MORC was detected with the highest number of peptides (97 and 113) and spectra (1041 and 1177), confirming the efficiency of our pull-down. However, considering the relatively large size of the MORC protein (295 kDa) and its weak detection in the control (5 and 7 peptides; 16 and 43 spectra), the Log2 FoldChange and Z-statistic after normalization are minimal compared to smaller proteins that were not identified in the control samples.

      Additionally, can you explain why these proteins appear to be enriched at the same fold? 

      We can postulate that these proteins form a complex with a ratio of 1:1. Two of these three proteins are described to interact with MORC in several publications, supporting a strong interaction between them.

      Variations in the interactome could result from the washing buffer's stringency.

      We agree that the IP conditions could affect the detection of the interactome as well as the parasite stage used. As indicated below, the overlap with previous publications and the presence of AP2 TFs and chromatin remodelers strongly support our results.

      It would be highly appropriate for the authors, similar to the co-submitted article (Maneesh Kumar Singh et al.), to present their mass spectrometry data in relation to previous purifications in Plasmodium (Bryant et al. 2020; Subudhi et al. 2023; Hillier et al. 2019) and also in Toxoplasma (Farhat et al. 2020). It would be good if authors could also put their results into perspective in light of the following pre-prints:

      We agree with the reviewer’s comment. In this revised manuscript, we compared our IP-MS data to previous published manuscripts. Key proteins including the AP2-P (PF3D7_1107800) and HDAC1 were indeed identified in several experiments validating our initial findings of the formation of large complexes with MORC. However, it’s important to highlight that the MORC protein was not used as the bait protein in previously published papers, and thus some discrepancies can be observed.

      Given the tendency of MORCs to form multiple complexes with AP2 factors, have you explored whether specific AP2s are conserved between Plasmodium and Toxoplasma, within the phylum?

      P. falciparum encodes for 27 putative AP2s, while T. gondii has over 60 AP2s, making direct comparison challenging. Some Plasmodium AP2s have multiple counterparts in T. gondii, and conservation is typically limited to the AP2 binding domains. Attempts to identify sequence homology among AP2s and the regions of conservation have been performed (PMID: 30959972, PMID: 16040597). Although this information would provide interesting insight, we believe exploring this topic at this time would diverge from our primary objectives. It would be more appropriate to address this in future studies.

      Could this conservation be identified either through phylogenetic means or by using tools such as AlphaFold, especially considering not just the AP2 domains but also any existing ACDC domains?

      Although this may reveal important information regarding the association between MORC proteins and AP2 domains, we believe investigating the conservation between AP2 across apicomplexan parasites may prove too challenging and is beyond the scope of this work.

      Most of the genes are depicted without their immediate surroundings (Fig. 2d and Fig S2c, d). For instance, the promoter region of AP2g is not shown (Fig. 2d). It is therefore very challenging to determine the presence or absence of MORC upstream or downstream; considering that this factor, which can create DNA loop protrusions, might bind at a distance from the genes in question.

      All gene coverage plots, including AP2-G, show 500 bp up- and downstream of the displayed gene. We have modified our figure legends to make sure that this information is provided.

      Upon examining Figure S3, it is evident that the authors have indicated a decline in PfMORC expression, represented as percentages over two unique time frames. The methodology behind this quantification remains ambiguous. It's essential for the authors to specify whether normalization was done using a loading control. As a benchmark, Singh et al. (2021) in their Figure 4 transparently used GAPDH as a loading control and included an untreated sample in their western blot analysis.

      We thank the Reviewer for bringing this to our attention. Our initial quantification was performed using ImageJ. To address the Reviewer’s comment, we have reperformed the experiment. Our quantitative analysis was performed through Bio-Rad ImageLab software using aldolase expression as a loading control (50% of the MORC loading). This information has now been incorporated into the supplementary figures (Figure S3).

      There's a striking observation that, despite significant degradation of PfMORC (as depicted in Figures S1 and S3), only the upper band in the western blot diminishes. This inconsistency needs addressing, as it can raise questions about the interpretation of the results.

      We agree with the reviewer's comment. We experienced some challenges upon performing a Western Blot on such a large protein (295 kDa). Our initial attempts required long exposure that may have highlighted non-specific signals of smaller proteins. To address the reviewer’s comment, we have performed the experiment one more time and made necessary changes to our WB protocol. Our new result better reflects the expected downregulation of _Pf_MORC. These changes have been incorporated into our manuscript and Fig S3.

      Recommendations for improving the writing and presentation.

      MORC KD quantification and consistency with previous findings (Figure S3): When comparing their results with those from another study (Singh et al. 2021), it's critical to ensure that the experimental conditions, especially the methodology for KD and the quantification of protein levels, are similar. If not, a direct comparison might be misleading.

      We greatly appreciate the suggestions and have made efforts to redesign the MORC KD quantifications according to the reviewer’s recommendations.

      While the manuscript mentions the level of KD, it does not delve into the functional consequences of such a decrease in protein levels. It would be of interest to understand how this level of KD affects the parasite's biology, especially in the context of the paper's main findings.

      We have addressed this question by looking at the changes in chromatin structure in WT versus KD parasites upon atc removal. We have also validated this initial result by designing an additional ChIP-seq experiment against histone marks in WT versus KD parasites upon atc removal. Our findings showed a significant downregulation in H3K9me3 coverage in heterochromatin regions, specifically in genes associated with antigenic variation and invasion. These findings suggest that PfMORC at least partially regulates gene silencing and chromatin arrangement. The manuscript has been edited accordingly.

      Concluding page 5, the authors present an interpretation of their findings that suggests a multi-faceted role of PfMORC in regulating stage-specific gene families, particularly the gametocyte-related genes and merozoite surface proteins. While the narrative they present is intriguing, several concerns arise:

      Over-reliance on correlation: The authors draw a direct line between the levels of PfMORC binding and the function of these genes in the parasite's life cycle. However, a mere correlation between PfMORC binding and stage-specific gene activity does not necessarily imply causation. They would need to provide experimental evidence showing that manipulation of PfMORC levels directly impacts these genes' expression.

      We agree with the reviewer's comment. We have however partially addressed this issue by comparing our ChIP-seq, RNA-seq and Hi-C experiments. We concluded that several of the transcriptional changes observed were due to an indirect effect of PfMORC KD and were most likely induced by a cell cycle arrest and partial collapse of the chromatin structure. The collapse of the heterochromatin structure was validated using our Hi-C experiment. To further address additional concerns the reviewers had, we have included additional ChIP-seq experiments targeting histone marks to confirm our initial hypothesis. Results of this additional experiment have been incorporated in the revised version of the manuscript.

      Ambiguity surrounding "low levels" and "high levels": The terms "low levels" and "high levels" of PfMORC binding are qualitative and could be subject to interpretation. Without quantification or a clear benchmark, these descriptions remain vague.

      We agree with the reviewers that the terms “low levels” and “high levels” of _Pf_MORC binding are qualitative and could be subject to interpretation. We have however quantified our change in DNA binding using normalized reads (RPKM). In trophozoite and schizont stages, most genes contain a mean of <0.5 RPKM normalized reads per nucleotide of _Pf_MORC binding within their promoter region, whereas antigenic gene families such as var and rifin contain ~1.5 and 0.5 normalized reads, respectively (Fig. 2b). Similar results are also obtained for the gametocyte-specific transcription factor AP2-G, which contains levels of _Pf_MORC binding similar to what is observed in var genes (Fig. 2c and S2c, d).
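
      For readers unfamiliar with the unit used above, the sketch below shows one conventional way to compute an RPKM-style coverage value for a promoter window from raw read counts. It is purely illustrative (the counts and window size are made-up numbers, and this is not necessarily the exact normalization pipeline used for Fig. 2b):

      ```python
      # Illustrative RPKM calculation (hypothetical numbers, not the authors' pipeline):
      # reads falling in a promoter window are normalized per million mapped reads
      # and per kilobase of window length.

      def rpkm(reads_in_region: int, region_length_bp: int, total_mapped_reads: int) -> float:
          """Reads Per Kilobase of region per Million mapped reads."""
          return reads_in_region * 1e9 / (region_length_bp * total_mapped_reads)

      # Example: a 1 kb promoter window with 40 ChIP-seq reads in a library of
      # 20 million mapped reads -> 2.0 RPKM.
      print(rpkm(reads_in_region=40, region_length_bp=1000, total_mapped_reads=20_000_000))
      ```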

      Shift in Binding Sites: The observed minor switch in PfMORC binding sites from gene bodies to intergenic and promoter regions is mentioned, but without context on how these shifts impact gene expression or any comparative analysis with other proteins showing similar shifts. The claim that this shift implicates PfMORC as an "insulator" is a leap without direct evidence.

      We apologize for the confusion. We have compared our ChIP-seq with RNA-seq results at different time points of the cell cycle and demonstrated that the shift observed has an effect on gene expression. We have edited the manuscript to clarify these results.

      Overextension of PfMORC's Role: The authors suggest that PfMORC moves to the regulatory regions around the TSS to guide RNA Polymerase and transcription factors. This is a substantial claim and would require additional experiments to validate. Simply observing binding in a region is insufficient to assign a specific functional role, especially one as critical as guiding RNA Polymerase. Historically, the MORC family has been primarily linked with gene silencing across Apicomplexan, plants, and metazoans. On page 7, the authors noted a minimal overlap between the ChIP-seq and RNA-seq signals (Fig. 4e). They also acknowledged that the pronounced gene expression shifts at schizont stages result from a combination of direct and indirect impacts of PfMORC degradation, which could cause cell cycle arrest and potential heterochromatin disintegration, rather than just decreased PfMORC binding. Therefore, the authors should adjust their conclusions in the manuscript to more accurately represent the multifaceted functions of MORC in the parasite.

      We agree with the reviewer's comment and have edited the manuscript accordingly.  

      DISCUSSION:

      The authors concluded that "Using a combination of ChIP-seq, protein knock down, RNA-seq and Hi-C experiments, we have demonstrated that the MORC protein is essential for the tight regulation of gene expression through chromatin compaction, preventing access to gene promoters from TFs and the general transcriptional machinery in a stage specific manner."

      Again, the assertion that MORC protein is essential for tight regulation of gene expression, based purely on correlational data (e.g., ChIP-seq showing binding doesn't prove functionality), assumes causality which might not be fully substantiated. The phrase "preventing access to gene promoters from TFs and the general transcriptional machinery in a stage-specific manner" needs also validation. Asserting that MORC is essential for this function might oversimplify the process and overlook other critical contributors.

      We agree with the reviewer’s comments and the conclusion has since been edited accordingly.

      The discussion is quite poor. It would be pertinent to put MORC in perspective within the broader picture of regulatory mechanisms of chromatin state at telomeres and var genes. For instance, how do SIR2 and HDAC1 (associated with MORC) divide the task of deacetylation? Or the contribution of HP1 and other non-coding RNAs.

      We agree with the reviewer’s suggestion. However, in order to put MORC in perspective within a broader picture, we would need to measure changes in localization of several molecular components regulating heterochromatin in WT versus KD condition. This will require access to several molecular tools and specific antibodies that we do not currently have. We have addressed these issues in our discussion.  

      Minor corrections to the text and figures.

      Figure 1d: Could you provide the ID for each AP2 directly on the volcano plot? While some IDs are referenced in the manuscript, visual representation in the plot would facilitate a clearer understanding of their enrichment levels.

      IDs for the unknown AP2 proteins have been added to the volcano plot.

      I recommend presenting Figure S2b as a panel within a primary figure. This change would offer readers a more quantitative understanding of the distinct differences between developmental stages. Notably, there seems to be a limited number of genes in common when considering the total, and there is an apparent lack of enrichment in the ring stage.

      This has been done.

      The captions are very minimally detailed. An effort must be made to better describe the panels as well as which statistical tests were used. 

      We have improved the figure legends and added the number of biological replicates as well as the statistical tests used in each figure legend.

      Figure 1A: The protein diagram with its domains does not take scale into account.

      The figure has been modified.

      Reviewer #2 (Recommendations For The Authors):

      (1) The study lacks a direct link between PfMORC's inferred function and the state of heterochromatin in the genome post-depletion.

      We agree with the reviewer's comment and have included additional ChIP-seq experiments to measure changes in histone marks in PfMORC depleted parasite line. We show a significant decrease in histone H3K9me3 marks in PfMORC KD condition.

      Conducting ChIP-seq on well-known heterochromatin markers such as H3K9me3, HP1, or H3K36me2/3 could shed light on the consequences of PfMORC depletion on global heterochromatin and its boundaries.

      With no access to an anti-HP1 antibody with reasonable affinity, we have not been able to study the impact of MORC KD on HP1 but have successfully observed the impact on H3K9me3 marks. These results have been added to the revised manuscript in (Fig. 5).

      (2) The authors should conduct a more comprehensive analysis of PfMORC's genomic localization, comparing it to ApiAP2 binding (interacting proteins) and histone modifications. This would provide valuable insights.

      We have performed a more comprehensive genome-wide analysis of MORC binding through ChIP-seq on WT and MORC-KD conditions. Our results show that _Pf_MORC localizes to heterochromatin with significant overlap with H3K9 trimethylation (H3K9me3) marks, at or near var gene regions. When _Pf_MORC is downregulated, H3K9me3 is detected at a lower level, validating a possible role of _Pf_MORC in gene repression. Regarding the comparison with AP2 binding, our proteomics datasets have shown extensive MORC binding with several AP2 proteins.

      (3) RNA-seq data reveals that only a few genes are affected after 24 hours of PfMORC depletion, with an equivalent number of up-regulated and down-regulated genes. The reasons behind down-regulation resulting from a heterochromatin marker depletion are not clearly established.

      We agree with the reviewer’s comment. At this stage (24 hours), _Pf_MORC depletion is limited and the effects at the transcriptional level are quite restricted. Furthermore, it is highly probable that the down-regulation of genes is an indirect effect of a cell cycle arrest. We have edited the manuscript to address this comment.

      The relationship between this data and the partial depletion of PfMORC needs further discussion.

      We agree with the reviewers and have improved our discussion in the revised version of the manuscript.

      (4) The authors did not compare their ChIP-seq data with the genes found downregulated in the RNA-seq data. Examining the correlation between these datasets would enhance the study.

      We apologize for the confusion. We have compared ChIP-seq and RNA-seq data and identified a very limited number of overlapping genes indicating that most of the changes observed in gene expression are in fact most likely indirect due to a cell cycle arrest and a collapse of the chromatin. We have edited the manuscript to clarify this issue.

      (5) The discussion section is relatively concise and does not fully address the complexity of the data, warranting further exploration.

      We have improved the discussion section in the revised version of the manuscript.

    1. Pangur Bán and I at work,

      By using this choice of words, the author makes the connection that the cat isn't just a thing that's there; it's a thing that goes where the person goes.

      The bishop must speak with the pastor in question before bringing the matter to the attention of the consistory. If the bishop considers that, because the transgression is minor, the matter does not need to be handled by the consistory, the bishop may on his own give the pastor a reprimand, after which the matter is considered settled. The consistory is informed of the reprimand at its next meeting. If the bishop considers the report made to him to be unfounded, the matter is not brought before the consistory.

      In accordance with the instruction of the Bible (1 Tim. 5:19), the bishop should be reluctant to take up reports backed by only one member of the congregation, unless the nature of the act makes it unreasonable to require more than one reporter. Only in exceptional cases may a report coming from outside the congregation be handled.

      The bishop and the consistory must ensure that, in order to protect the pastor's reputation, information about the pending matter is given only to those involved in it.

      If the bishop considers that the suspected transgression is common knowledge, the matter may be raised at a consistory meeting even before the discussion mentioned in subsection 2. Even then, the bishop must speak privately with the pastor in question before the consistory makes a decision on the matter. In such a case, too, the consistory must protect the privacy of the matter in accordance with subsection 4. If the suspicion is not considered to give grounds for the disciplinary punishment described in section 5, the bishop must agree together with the suspected pastor on the extent to which the consistory's decision is communicated.

      4 § In handling a disciplinary matter, the principles of good administration and a fair trial must be followed. During the investigation the presumption of innocence applies. The pastor has a duty to contribute to the clarification of the matter concerning him.

      This part just talks about how the investigation must be kept secret/hidden while it's going on, how there usually have to be multiple witnesses, and that the pastor is innocent until proven guilty.

      A lot of very good, protective legal text, which for a pastor is good. If a single congregant tries to smear you, it very likely won't succeed.

  8. inst-fs-iad-prod.inscloudgate.net
    1. If they want to take it, I don’t think any student should be denied access to it. Let them prove themselves. If they’re failing the class, well, they’re failing the class . . . if they can do it, let them prove it.

      I mean, I can understand how easy it is to say this, but it's way more difficult than that. We have students who haven't had the same education as others because they don't have enough money to get tutoring, get books, or maybe even go to school, so I think we must consider other things instead of just making them take a class and basing their education on this class.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Response to reviewers

      We thank the Editor and the Reviewers for their constructive review. In light of this feedback, we have made a number of changes and additions to the manuscript, which we think improve the presentation and hopefully address the majority of the reviewers’ concerns.

      Main changes:

      •   We added a new SI section (B1) with a population dynamics simulation in the high clonal interference regime and without expiring fitness (see R1: (1)).

      •   We added a new SI section (A9) with the derivation of the equilibrium state of our SIR model in the case of 𝑀 immune groups and in the limit 𝜀 → 0 (see R1: (5)).

      •   The text of the section Abstraction as “expiring” fitness advantage has been modified.

      •   We added a new SI section (A4) describing the links between parameters of the “expiring fitness” and SIR models.

      All three reviewers had concerns about the relation between our SIR model and the “expiring fitness” model, that we hope will be addressed by the last two items listed above. In particular, we would like to underline the following points:

      •   The goal of our SIR model is to give a mechanistic explanation of partial sweeps using traditional epidemiological models. While ecological models (e.g. consumer resource) can give rise to the same phenomenology, we believe that in the context of host-pathogen interaction it is relevant to explicitly show that SIR models can result in partial sweeps.

      •   The expiring fitness model is mainly an effective model: it reproduces some qualitative features of the SIR but does not quantitatively match all aspects of the frequency dynamics in SIR models.

      •   It is possible to link the parameters of the SIR (𝛼,𝛾,𝑏,𝑓) and expiring fitness (𝑠,𝑥,𝜈) models at the beginning of the invasion of the variant (new SI section A4). However, the two models also differ in significant ways (the SIR model can for example oscillate, while the effective model can not). The correspondence of quantities like the initial invasion rate and the ‘expiration rate’ of fitness effects is thus only expected to hold for some time after the emergence of a novel variant.

      Public reviews:

      Reviewer 1:

      Summary In this work, the authors study the dynamics of fast-adapting pathogens under immune pressure in a host population with prior immunity. In an immunologically diverse population, an antigenically escaping variant can perform a partial sweep, as opposed to a sweep in a homogeneous population. In a certain parameter regime, the frequency dynamics can be mapped onto a random walk with zero mean, which is reminiscent of neutral dynamics, albeit with differences in higher order moments. Next, they develop a simplified effective model of time dependent selection with expiring fitness advantage, and posit that the resulting partial sweep dynamics could explain the behaviour of influenza trajectories empirically found in earlier work (Barrat-Charlaix et al. Molecular Biology and Evolution, 2021). Finally, the authors put forward an interesting hypothesis: the mode of evolution is connected to the age of a lineage since ingression into the human population. A mode of meandering frequency trajectories and delayed fixation has indeed been observed in one of the long-established subtypes of human influenza, albeit so far only over a limited period from 2013 to 2020. The paper is overall interesting and well-written. Some aspects, detailed below, are not yet fully convincing and should be treated in a substantial revision.

      We thank the reviewer for their constructive criticism. The deep split in the A/H3N2 HA segment from 2013 to 2020 is indeed one of the more striking examples of such meandering frequency dynamics in otherwise rapidly adapting populations. But the up and down of H1N1pdm clade 5a.2a.1 in recent years might be a more recent example. We argue that such meandering dynamics might be a common contributor to seasonal influenza dynamics, even if they only span 3-6 years.

      (1) The quasi-neutral behaviour of amino acid changes above a certain frequency (reported in Fig. 3), which is the main overlap between influenza data and the authors’ model, is not a specific property of that model. Rather, it is a generic property of travelling wave models and more broadly, of evolution under clonal interference (Rice et al. Genetics 2015, Schiffels et al. Genetics 2011). The authors should discuss in more detail the relation to this broader class of models with emergent neutrality. Moreover, the authors’ simulations of the model dynamics are performed up to the onset of clonal interference 𝜌/ 𝑠0 = 1 (see Fig. 4). Additional simulations more deeply in the regime of clonal interference (e.g. 𝜌/ 𝑠0 = 5) show more clearly the behaviour in this regime.

      We agree with the reviewer that we did not discuss in detail the effects of clonal interference on quasi-neutrality and predictability. As suggested, we conducted additional simulations of our population model in the regime of high clonal interference (𝜌/ 𝑠0 ≫ 1) and without expiring fitness effects. The results are shown in a new section of the supplementary information. These simulations show, as expected, that increasing clonal interference tends to decrease predictability: the fixation probability of an adaptive mutation found at frequency 𝑥 moves closer to 𝑥 as 𝜌 increases. However, even in a case of strong interference (𝜌/ 𝑠0 = 32), 𝑝fix remains significantly different from the neutral expectation. We conclude from this that while it is true that dynamics tend to quasi-neutrality in the case of strong interference, this effect alone is unlikely to explain observations of H3N2 influenza dynamics. In our previous publication (Barrat-Charlaix et al., MBE, 2021) we have also investigated the effect of epistatic interactions between mutations, alongside strong clonal interference. We concluded that, while most of these processes make evolution less predictable and push 𝑝fix towards the diagonal, it is hard to reproduce the empirical observations with realistic parameters. The “expiring fitness” model, however, produces this quite readily.

      But there are qualitative differences between quasi-neutrality in traveling wave models and the expiring fitness model. In the traveling wave, a genotype carrying an adaptive mutation is always fitter than if it didn’t carry the mutation. Quasi-neutrality emerges from the accumulation of fitness variation at other loci and the fact that the coalescence time is not much bigger than the inverse selection coefficient of the mutation. In the expiring fitness model, the selective effect of the mutation itself goes away with time. We now discuss the literature on quasi-neutrality and cite Rice et al. 2015 and Schiffels et al. 2011.
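
      As a side note for readers: the quantity discussed above, the conditional fixation probability 𝑝fix(𝑥⋆) of a mutation once its frequency first reaches a threshold 𝑥⋆, can be illustrated with a toy single-locus Wright-Fisher simulation. This sketch is not the authors' simulation framework (it includes neither clonal interference nor expiring fitness), and all parameter values are made up; it only shows how 𝑝fix(𝑥⋆) is estimated and compared with the neutral expectation 𝑝fix(𝑥⋆) = 𝑥⋆.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      def pfix_at_threshold(N=1000, s=0.03, x_star=0.3, n_runs=2000):
          """Estimate P(fixation | the trajectory first reaches frequency x_star)
          for a single mutation in a Wright-Fisher population of size N."""
          reached = fixed = 0
          for _ in range(n_runs):
              x = 1.0 / N                            # mutation starts as a single copy
              seen_threshold = False
              while 0.0 < x < 1.0:
                  x_sel = x * (1 + s) / (1 + s * x)  # deterministic selection step
                  x = rng.binomial(N, x_sel) / N     # binomial resampling = drift
                  if not seen_threshold and x >= x_star:
                      seen_threshold = True
                      reached += 1
              if seen_threshold and x == 1.0:
                  fixed += 1
          return fixed / reached if reached else float("nan")

      # Under a constant benefit the estimate sits well above the neutral diagonal
      # p_fix(x*) = x*; for a neutral allele (s = 0) it would sit on the diagonal.
      print("estimated p_fix(0.3):", pfix_at_threshold())
      print("neutral expectation :", 0.3)
      ```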

      In this context, I also note that the modelling results of this paper, in particular the stalling of frequency increase and the decrease in the number of fixations, are very similar to established results obtained from similar dynamical assumptions in the broader context of consumer resource models; see, e.g., Good et al. PNAS 2018. The authors should place their model in this broader context.

      We thank the reviewer for pointing out the link between consumer resource models and our work. We further strengthened our discussion of the similarity of the phenomenology to models typically used in ecology and made an effort to highlight the link between consumer-resource models and ours in the introduction and in the part on the SIR model.

      (2) The main conceptual problem of this paper is the inference of generic non-predictability from the quasi-neutral behaviour of influenza changes. There is no question that new mutations limit the range of predictions, this problem being most important in lineages with diverse immune groups such as influenza A(H3N2). However, inferring generic non-predictability from quasi-neutrality is logically problematic because predictability refers to individual trajectories, while quasi-neutrality is a property obtained by averaging over many trajectories (Fig. 3). Given an SIR dynamical model for trajectories, as employed here and elsewhere in the literature, the up and down of individual trajectories may be predictable for a while even though allele frequencies do not increase on average. The authors should discuss this point more carefully.

      We agree with the reviewer that the deterministic SIR model is of course predictable. Similarly, a partial sweep is predictable. But we argue that expiring fitness makes evolution less predictable in two ways: (i) When a new adaptive mutation emerges and rises in frequency, we typically don’t know how rapidly its fitness effect is ‘expiring’. Thus even if we can measure its instantaneous growth rate accurately, we can’t predict its fate far into the future. (ii) Compared to the situation where fitness effects are not expiring, the time to fixation is longer and there are more opportunities for novel mutations to emerge and change the course of the trajectory. We have tried to make this point clearer in the manuscript.

      (3) To analyze predictability and population dynamics (section 5), the authors use a Wright-Fisher model with expiring fitness dynamics. While here the two sources of the emerging neutrality are easily tuneable (expiring fitness and clonal interference), the connection of this model to the SIR model needs to be substantiated: what is the starting selection 𝑠0 as a function of the SIR parameters (𝑓,𝑏,𝑀,𝜀), the selection decay 𝜈 = 𝜈(𝑓,𝑏,𝑀,𝜀,𝛾)? This would enable the comparison of the partial sweep timing in both models and corroborate the mapping of the SIR onto the simplified W-F model. In addition, the authors’ point would be strengthened if the SIR partial sweeps in Fig.1 and Fig.2 were obtained for a combination of parameters that results in a realistic timescale of partial sweeps.

      We added a new section to the SI (A4) that relates the parameters of the SIR and expiring fitness models. In particular, we compute the initial growth rate 𝑠0 and a proxy for the fitness expiry rate 𝜈 as a function of the SIR parameters 𝛼,𝛾,𝑓,𝑏,𝑀, at the instant where the variant is introduced. The initial growth rate depends primarily on the degree of immune escape 𝑓, while the expiration rate 𝜈 is related to incidence 𝐼wt + 𝐼𝑚. However, as both models have fundamentally different dynamics, these relations are only valid on time scales shorter than potential oscillations of the SIR model. Beyond that, the connection between the models is mostly qualitative: both rely on the fact that growth rate of a strain diminishes when the strain becomes more frequent, and give rise to partial sweeps.
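
      To give a concrete, if schematic, sense of why an expiring advantage yields partial sweeps, suppose for illustration that the variant frequency 𝑥 follows logistic growth with an exponentially decaying selection coefficient (our simplification for illustration, not necessarily the exact formulation used in the manuscript):

      $$\frac{dx}{dt} = s_0\, e^{-\nu t}\, x(1-x) \quad\Longrightarrow\quad \ln\frac{x(t)}{1-x(t)} = \ln\frac{x_0}{1-x_0} + \frac{s_0}{\nu}\left(1 - e^{-\nu t}\right),$$

      so the log-odds of the variant shift by at most 𝑠0/𝜈 before the advantage expires: when 𝑠0/𝜈 is of order one the variant stalls at an intermediate frequency (a partial sweep), whereas 𝑠0/𝜈 ≫ 1 approaches a full sweep.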

      In Figure 1, the time it takes a partial sweep to finish is roughly 100-200 generations (bottom right panel). If we consider H3N2 influenza and take one generation to be one week, this corresponds to a sweep time of 2 to 4 years, which is slightly slower but roughly in line with observations for selective sweeps. This time is harder to define if oscillatory dynamics takes place (middle right panel), but the time from the introduction of the mutant to the peak frequency is again about 4 years. The other parameters of the model correspond to a waning time of 200 weeks and immune escape on the order of 20-30% change in susceptibility.

      Reviewer 2:

      Summary

      This work addresses a puzzling finding in the viral forecasting literature: high-frequency viral variants evince signatures of neutral dynamics, despite strong evidence for adaptive antigenic evolution. The authors explicitly model interactions between the dynamics of viral adaptations and of the environment of host immune memory, making a solid theoretical and simulation-based case for the essential role of host-pathogen eco-evolutionary dynamics. While the work does not directly address improved data-driven viral forecasting, it makes a valuable conceptual contribution to the key dynamical ingredients (and perhaps intrinsic limitations) of such efforts.

      Strengths

      This paper follows up on previous work from these authors and others concerning the problem of predicting future viral variant frequency from variant trajectory (or phylogenetic tree) data, and a model of evolving fitness. This is a problem of high impact: if such predictions are reliable, they empower vaccine design and immunization strategies. A key feature of this previous work is a “traveling fitness wave” picture, in which absolute fitnesses of genotypes degrade at a fixed rate due to an advancing external field, or “degradation of the environment”. The authors have contributed to these modeling efforts, as well as to work that critically evaluates fitness prediction (references 11 and 12). A key point of that prior work was the finding that fitness metrics performed no better than a baseline neutral model estimate (Hamming distance to a consensus nucleotide sequence). Indeed, the apparent good performance of their well-adopted “local branching index” (LBI) was found to be an artifact of its tendency to function as a proxy for the neutral predictor. A commendable strength of this line of work is the scrutiny and critique the authors apply to their own previous projects. The current manuscript follows with a theory and simulation treatment of model elaborations that may explain previous difficulties, as well as point to the intrinsic hardness of the viral forecasting inference problem.

      This work abandons the mathematical expedience of traveling fitness waves in favor of explicitly coupled eco-evolutionary dynamics. The authors develop a multi-compartment susceptible/infected model of the host population, with variant cross-immunity parameters, immune waning, and infectious contact among compartments, alongside the viral growth dynamics. Studying the invasion of adaptive variants in this setting, they discover dynamics that differ qualitatively from the fitness wave setting: instead of a succession of adaptive fixations, invading variants have a characteristic “expiring fitness”: as the immune memories of the host population reconfigure in response to an adaptive variant, the fitness advantage transitions to quasi-neutral behavior. Although their minimal model is not designed for inference, the authors have shown how an elaboration of host immunity dynamics can reproduce a transition to neutral dynamics. This is a valuable contribution that clarifies previously puzzling findings and may facilitate future elaborations for fitness inference methods.

      The authors provide open access to their modeling and simulation code, facilitating future applications of their ideas or critiques of their conclusions.

      We thank the reviewer for their summary, assessment, and constructive critique.

      (1) The current modeling work does not make direct contact with data. I was hoping to see a more direct application of the model to a data-driven prediction problem. In the end, although the results are compelling as is, this disconnect leaves me wondering if the proposed model captures the phenomena in detail, beyond the qualitative phenomenology of expiring fitness. I would imagine that some data is available about cross-immunity between strains of influenza and sarscov2, so hopefully some validation of these mechanisms would be possible.

      We agree with the reviewer that quantitatively confronting our model with data would be very interesting. Unfortunately, most available serological data for influenza and SARS-CoV-2 is obtained using post-infection sera from previously naive animal models. To test our model, we would require human serology data, ideally demographically resolved, and a way to link serology to transmission dynamics. Furthermore, our model is mostly an explanation for qualitative features of variant dynamics and their apparent lack of predictability. We therefore considered that quantitative validation using data is out of scope of this work.

      (2) After developing the SIR model, the authors introduce an effective “expiring fitness” model that avoids the oscillatory behavior of the SIR model. I hoped this could be motivated more directly, perhaps as a limit of the SIR model with many immune groups. As is, the expiring fitness model seems to lose the eco-evolutionary interpretability of the SIR model, retreating to a more phenomenological approach. In particular, it’s not clear how the fitness decay parameter 𝜈 and the initial fitness advantage 𝑠0 relate to the key ecological parameters: the strain cross-immunity and immune group interaction matrices.

      The expiring fitness model emerges as a limiting case, at least qualitatively, of the SIR model when growth rate of the new variant is small compared to the waning rate and the SIR model does not oscillate. This can be readily achieved by many immune groups, which reconciles the large effect of many escape mutations and the lack of oscillation by confining the escape to some fraction of the population. Beyond that, the expiring fitness model is mainly an effective model that allows us to study the consequences of partial sweeps on predictability on long timescales. As stated in the “Main changes” section at the start of this reply, we added an SI section which links parameters of the two models. However, we underline the fact that beyond the phenomenon of partial sweeps, the dynamics of the two are different.

      Reviewer 3:

      Summary

      In this work the authors start presenting a multi-strain SIR model in which viruses circulate in an heterogeneous population with different groups characterized by different cross-immunity structures. They argue that this model can be reformulated as a random walk characterized by new variants saturating at intermediate frequencies. Then they recast their microscopic description to an effective formalism in which viral strains lose fitness independently from one another. They study several features of this process numerically and analytically, such as the average variants frequency, the probability of fixation, and the coalescent time. They compare qualitatively the dynamics of this model to variants dynamics in RNA viruses such as flu and SARS-CoV-2.

      Strengths

      The idea that a vanishing-fitness mechanism producing partial sweeps may explain important features of flu evolution is very interesting. Its simplicity and potential generality make it a powerful framework. As noted by the authors, this may have important implications for predictability of virus evolution and such a framework may be beneficial when trying to build predictive models for vaccine design. The vanishing fitness model is well analyzed and produces interesting structures in the strains coalescent. Even though the comparison with data is largely qualitative, this formalism would be helpful when developing more accurate microscopic ingredients that could reproduce viral dynamics quantitatively. This general framework has a potential to be more universal than human RNA viruses, in situations where invading mutants would saturate at intermediate frequencies.

      We thank the reviewer for their positive remarks and constructive criticism below.

      Weaknesses

      The authors build the narrative around a multi-strain SIR model in which viruses circulate in an heterogeneous population, but the connection of this model to the rest of the paper is not well supported by the analysis. When presenting the random walk coarse-grained description in section 3 of the Results, there is no quantitative relation between the random walk ingredients (importantly 𝑃(𝛽)) and the SIR model, just a qualitative reasoning that strains would initially grow exponentially and saturate at intermediate frequencies. So essentially any other microscopic description with these two features would give rise to the same random walk.

      As also highlighted in the response to other reviewers, we now discuss how the parameter of the SIR model are related to the initial growth rate and the ‘expiration’ rate of the effective model. While the phenomenology of the SIR model is of course richer, this correspondence describes its overdamped limit qualitatively well.

      Currently it’s unclear whether the specific choices for population heterogeneity and cross-immunity structure in the SIR model matter for the main results of the paper. In section 2, it seems that the main effect of these ingredients are reduced oscillations in variants frequencies and a rescaled initial growth rate. But ultimately a homogeneous population would also produce steady state coexistence between strains, and oscillation amplitude likely depends on parameters choices. Thus a homogeneous population may lead to a similar coarse-grained random walk.

      The reviewer is correct that the primary effect of using many immune groups is to slow down the rise of the novel variant, which in turn dampens the oscillations. Having multiple immune groups widens the parameter space in which partial sweeps without dramatic oscillations are observed. For slow sweeps, similar dynamics are observed in a homogeneous population.

      Similarly, it’s unclear how the SIR model relates to the vanishing fitness framework, other than on a qualitative level given by the fact that both descriptions produce variants saturating at intermediate frequencies. Other microscopic ingredients may lead to a similar description, yet with quantitative differences.

      Both of these points were also raised by other reviewers, and we agree that they are worth discussing at greater length. We now discuss how the parameters of the ‘expiring fitness’ model relate to those of the SIR model. We also discuss how other models, such as ecological models, give rise to similar coarse-grained models.

      At the same time, from the current analysis the reader cannot appreciate the impact of such a mean field approximation where strains lose fitness independently from one another, and under what conditions such assumption may be valid.

      In the SIR model, the rate at which strains lose fitness does depend on the precise state of the host population through the quantities 𝑆𝑚 and 𝑆wt, which is apparent in equation (A27) of the new SI section. The fact that a new variant shifts the equilibrium frequencies of previous strains in a proportional way is valid if the “antigenic space” is of very high dimension, as explained in section Change in frequency when adding subsequent strains of the SI. It would indeed be interesting to explore relaxations of this assumption by considering a larger class of cross-immunity matrices 𝐾. However, in the expiring fitness model, the fact that strains lose fitness independently from each other is a necessary simplification.

      In summary, the central and most thoroughly supported results in this paper refer to a vanishing fitness model for human RNA viruses. The current narrative, built around the SIR model as a general work on host-pathogen eco-evolution in the abstract, introduction, discussion and even title, does not seem to match the key results and may mislead readers. The SIR description rather seems one of the several possible models, featuring a negative frequency dependent selection, that would produce coarse-grained dynamics qualitatively similar to the vanishing fitness description analyzed here.

      We have revised the text throughout to make the connections between the different parts of the manuscript, in particular the SIR model and the expiring fitness model, clearer. We agree that the phenomenology of the expiring fitness model is more general than the case of human RNA viruses described by the SIR model, but we think this generality is an attractive feature of the coarse-graining, not a shortcoming. Indeed, other settings with negative frequency dependent selection or eco-systems that adapt on appropriate time scale generate similar dynamics.

      Recommendations for the authors:

      Reviewer 1:

      (4) Line 74: what does fitness mean?

      Many population dynamics models, including ones used for viral forecasting, attach a scalar fitness to each strain. The growth rate of each strain is then computed by subtracting the average population fitness from the strain’s fitness. In this sentence, fitness is intended in this way.

      (5) Fig. 1: The equilibrium frequency in the middle and bottom rows is hardly smaller than the equilibrium frequency in the top row for one immune group. This is surprising since for M=10, the variant escapes in only 1/10th of the population, which naively should impact the equilibrium frequency more strongly. Could the authors comment on this?

      This is indeed non-trivial, and a hand-waving argument can be made by considering the extreme case 𝜀 = 0. The variant is then completely neutral for the immune groups 𝑖 > 1, and would be at equilibrium at any frequency in these immune groups. Its equilibrium frequency is then only determined by group 1, which is the only one breaking degeneracy. For 𝜀 > 0 but small, we naturally expect a small deviation from the 𝜀 = 0 case and thus 𝛽 should only change slightly.

      A more rigorous argument with a mathematical proof in the case 𝜀 = 0 is now given in section A4 of the supplementary information.

      (6) Fig. 1: In the caption, it is stated that the simulations are performed with 𝜀 = 0.99. Is this a typo? It seems that it should be 𝜀 = 0.01, as in and just below equation (7).

      This was indeed a typo. It is now fixed.

      (7) Fig. 3: The data analysis should be improved. In order to link the average frequency trajectories to standard population genetics of conditional fixation probabilities, the focal time should always be the time where the trajectory crosses the threshold frequency for the first time. Plotting some trajectories from a later time onwards, on their downward path destined to loss, introduces a systematic bias towards negative clonal interference (for these trajectories, the time between the first and the second crossing of the threshold frequency is simply omitted). The focal time of first crossing of the threshold frequency can easily be obtained, e.g., by linear interpolation of the trajectory between subsequent time points of frequency evaluation. In light of the modified procedure, the statements on the inertia of the trajectories after crossing 𝑥⋆ (line 356) should be re-examined.

      The way we process the data is already in line with the suggestions of the reviewer. In particular, we use as focal time the first time at which a trajectory is found in the threshold frequency bin. Trajectories that are never seen in the bin because of limited time-resolution are simply ignored.

      In Fig. 3, there are no trajectories that are on their downward path at the focal time and when crossing the threshold frequency. Our other work on the predictability of flu, Barrat-Charlaix et al. (2021), has a similar figure, which may have created confusion.

      (8) Fig. 4: authors write 𝛼/ 𝑠0 in the figure, but should be 𝜈/ 𝑠0.

      Fixed.

      (9) Line 420: authors refer to the blue curve in panel B as the case with strong interference. However, strong interference is for higher 𝜌/ 𝑠0, that is panel D (see point 1).

      Fixed.

      (10) Line 477: typo “there will a variety of mutations”.

      Fixed.

      Reviewer 2:

      Should 𝛼 be 𝜈 in Figure 4 legends?

      Thank you very much for spotting this error. We fixed it.

      Equations 4-5 could be further simplified.

      We factorised the 𝐼 term in equation 4. In equation 5, we preferred to keep the 1 − 𝛿/𝛼 term as this quantity appears in different calculations concerning the model. For instance, 𝑆 = 𝛿/𝛼 at equilibrium.
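      For readers less familiar with the notation, the equilibrium value quoted here follows from the standard SIR form with infection rate 𝛼 and recovery rate 𝛿 (the manuscript's equations may contain additional terms):

      \[
      \frac{dI}{dt} = \alpha S I - \delta I = (\alpha S - \delta)\, I = 0
      \quad\Longrightarrow\quad
      S = \frac{\delta}{\alpha} \qquad (I \neq 0).
      \]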

      The sentence before equation 8 references 𝑃𝛽(𝛽), but this wasn’t previously introduced.

      We now introduce 𝑃𝛽 at the beginning of the section Ultimate fate of the variant.

      In the last paragraph of page 12, “monotonously” maybe should be “monotonically”.

      Fixed.

      For the supplement section B, you might want a more descriptive title than “other”.

      We renamed this section to Expiring fitness model and random walk.

      Reviewer 3:

      To expand on my previous comments, my main concerns regard the connection of section 2 and the SIR model with the rest of the paper.

      In the first paragraph of page 9 the authors argue that a stochastic version of the SIR model would lead to different fixation dynamics in homogeneous vs heterogeneous populations due to the oscillations. This paragraph is quite speculative, some numerical simulations would be necessary to quantitatively address to what extent these two scenarios actually differ in a stochastic setting, and how that depends on parameters.

      Likewise, the connection between the SIR model, the random walk coarse-grained description and the vanishing fitness model can be investigated through numerical simulations of a stochastic SIR given the chosen population and cross-immunity structures with i.e. 10-20 strains. This would allow for a direct comparison of individual strain dynamics rather than the frequency averages, as well as other scalar properties such as higher moments, coalescent, and fixation probability once reaching a given frequency. It would also be possible to characterize numerically the SIR P(beta) bridging the gap with the random walk description. It’s not obvious to me that the SIR P(beta) would not depend on the population size in the presence of birth-death stochasticity, potentially changing the moments scalings. I appreciate that such simulations may be computationally expensive, but similar numerical studies have been performed in previous phylodynamics works so it shouldn’t be out of reach.

      As an alternative, the authors should consider re-centering the narrative directly on the random walk of the vanishing fitness model, mentioning the SIR more briefly as a possible qualitative way to get there. Either way, the authors should comment on other ways in which this coarse-grained dynamics could arise.

      In the vanishing fitness model, where variants fitnesses are independent, is an infinite dimensional antigenic space implicitly assumed? If that’s the case, it should be explained in the main text.

      A long simulation of the SIR model would indeed be interesting, but is numerically demanding and our current simulation framework doesn’t scale well for many strains and susceptibilities. We thus refrained from adding extensive simulations.

      In Figure 2B of the main text, the simulation with 7 strains illustrates the qualitative match between the expiring fitness and the SIR model. However, it is clearly not long enough to discuss statistical properties of the corresponding random walk. Furthermore, we do not expect the individual strain dynamics of the SIR and expiring fitness models to match. The latter depends on a few parameters (𝛼, 𝑠0), while the former depends on the full state of the host population and of the previous variants.

      In the section linking the parameters of the two models, we now discuss the distribution 𝑃(𝛽) of the SIR model for two strains and a specific choice of distribution for the cross-immunity 𝑏 and 𝑓.

      Minor comments:

      There is some back and forth in the writing. For instance, when introducing the model, 𝐶𝑖𝑗 is first defined as 1/ 𝑀, then a few paragraphs later the authors introduce that in another limit 𝐶𝑖𝑖 is just much higher than any 𝐶𝑖𝑗, and finally they specify that the former is the fast mixing scenario.

      Another example is in section 2: in the first paragraph they put forward that heterogeneity and cross-immunity have different impacts on the dynamics, but the meaning attributed to these different ingredients becomes clear only a while later, after the homogeneous population analysis. Making the writing more uniform would make it easier for the reader to follow the authors' train of thought.

      We removed the paragraph below Equation (1) mentioning the 𝐶𝑖𝑗 = 1/𝑀 case, which we hope will linearize the writing.

      When mentioning geographical structure, why would geography affect how immunity sees pairs of viral strains (differences in 𝐾)?

      Geographic structure could influence cross-immunity because of exposure histories of hosts. For instance in the case of influenza, different geographical regions do not have the same dominating strains in each season, and hosts from different regions may thus build up different immunity.

      In the current narrative there are some speculations about non-scalar fitness, especially in section 2. The heterogeneity in this section does not seem so strong to produce a disordered landscape that defies the notion of scalar fitness in the same way some complex ecological systems do. A more parsimonious explanation for the coexistence dynamics observed here may be a negative frequency dependent selection.

      Our language here was not very precise, and we agree that the phenomenology we describe is related to that of frequency-dependent selection (mediated via the immunity of the host population, which integrates past frequencies). Traveling wave models typically use fitness functions that are independent of the population distribution and only account for the evolution via an increasing average fitness. We have made the discussion more accurate by stating that we consider a case where fitness depends explicitly on present and past population composition, which includes the case of negative frequency-dependent selection.
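      As a toy illustration of fitness that depends on past population composition (not the manuscript's equations; all parameter values are arbitrary), one can let a variant's advantage decay as host immunity, accumulated from its past frequency, builds up:

      # Toy model: a variant's advantage shrinks as immunity integrates its past frequency.
      s0, gamma, dt = 0.05, 0.5, 0.1   # initial advantage, immunity build-up rate, time step
      x, immunity = 0.01, 0.0          # variant frequency, accumulated host immunity

      trajectory = []
      for _ in range(2000):
          s = s0 * (1.0 - immunity)                        # advantage decays with immunity
          x += s * x * (1.0 - x) * dt                      # logistic-type frequency dynamics
          immunity = min(1.0, immunity + gamma * x * dt)   # immunity integrates past frequency
          trajectory.append(x)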

      I don’t understand the comparison with genetic drift (typo here, draft) in the last paragraph of section 3 given that there is no stochasticity in growth death dynamics.

      We compare the random walk to genetic drift because of the expression of the second moment of the step size: genetic draft has the same functional form. If one defines the effective population size as in the text, the drift due to random sampling of alleles (neutral drift) and the changes in strain frequency in our model have the same first and second moments. The stochasticity here does not come from the dynamics, which are indeed deterministic, but from the appearance of new mutations (variants) on backgrounds that are randomly sampled in the population. This latter property is shared with genetic draft.
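      To spell the analogy out in a hedged form: if the model's step variance takes the generic form A·x(1−x), with A standing in for the combination of model parameters given in the text, then matching it to neutral drift defines the effective population size,

      \[
      \operatorname{Var}(\Delta x)_{\text{drift}} = \frac{x(1-x)}{N_e},
      \qquad
      \operatorname{Var}(\Delta x)_{\text{model}} = A\,x(1-x)
      \quad\Longrightarrow\quad
      N_e \equiv \frac{1}{A},
      \]

      with the first moments vanishing in both cases.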

      In the vanishing fitness model, I think the reader would benefit from having 𝑃(𝑠) in the main text, and it should be made more clear what simulations assume what different choice of 𝑃(𝑠).

      We added the expression of 𝑃(𝑠) in the main text. Simulations use the value 𝑠0 = 0.03, which we added in the caption of Figure 4.

      When comparing the model and data, is the point that COVID is not reproduced due to clonal interference? It seems from the plot that flu has clonal interference as well though. Why is that negligible?

      A similar point has been raised by the first reviewer (see R1-(1)). Clonal interference is not negligible, but we find it to be insufficient to explain the observations made for H3N2 influenza, namely the lack of inertia of frequency trajectories or the probability of fixation. This is shown in the new section (B1) of the SI. Both SARS-CoV-2 and H3N2 influenza experience clonal interference, but the former is more predictable than the latter. Our point is that expiring fitness effects should be stronger in influenza because of the higher immune heterogeneity of the host population, making it less predictable than SARS-CoV-2.

      Does the fixation probability as a function of frequency threshold match the flu data for some parameters sets?

      For H3N2 influenza, the fixation probability is found to be equal to the threshold frequency (see Barrat-Charlaix MBE 2021, also indirectly visible from Fig. 3). In Figure 4, we obtain that either a high expiry rate or intermediate expiry rates and clonal interference regimes match this observation.
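      The comparison can be sketched as follows (names are illustrative, and the fixation criterion at the final time point is an assumption of the sketch, not our exact pipeline):

      # Empirical fixation probability after first crossing a threshold x_star,
      # to be compared against the neutral-like expectation P_fix = x_star.
      import numpy as np

      def fixation_probability(trajectories, x_star):
          """trajectories: list of 1-D arrays of frequencies over time."""
          crossed, fixed = 0, 0
          for traj in trajectories:
              above = np.nonzero(traj >= x_star)[0]
              if above.size == 0:
                  continue                 # never reaches the threshold
              crossed += 1
              if traj[-1] >= 0.95:         # treat ~1 at the final time point as fixed
                  fixed += 1
          return fixed / crossed if crossed else float("nan")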

      It would be instructive to see examples of the individual variant dynamics of the vanishing fitness model compared to the presented data.

      We added an extra SI figure (S7) showing 10 randomly selected trajectories of individual variants in the case of H3N2/HA influenza and for the expiring fitness model with different parameter choices.

      Figure 4E has no colorbar label. The reader shouldn’t have to look for what that means in the bottom of the SIs. In panels A and B the label should be 𝜈, not 𝛼. Same thing in most equations of page 42.

      We added the colorbar label to the figure and also updated the caption: a darker color corresponds to a higher probability of sweeps to overlap. We fixed the 𝜈 – 𝛼 confusion in the SI and in the caption of the figure.

    1. 7.4. Responding to trolls?# One of the traditional pieces of advice for dealing with trolls is “Don’t feed the trolls,” which means that if you don’t respond to trolls, they will get bored and stop trolling. We can see this advice as well in the trolling community’s own “Rules of the Internet”: Do not argue with trolls - it means that they win But the essayist Film Crit Hulk argues against this in Don’t feed the trolls, and other hideous lies. That piece argues that the “don’t feed the trolls” strategy doesn’t stop trolls from harassing: Ask anyone who has dealt with persistent harassment online, especially women: [trolls stopping because they are ignored] is not usually what happens. Instead, the harasser keeps pushing and pushing to get the reaction they want with even more tenacity and intensity. It’s the same pattern on display in the litany of abusers and stalkers, both online and off, who escalate to more dangerous and threatening behavior when they feel like they are being ignored. Film Crit Hulk goes on to say that the “don’t feed the trolls” advice puts the burden on victims of abuse to stop being abused, giving all the power to trolls. Instead, Film Crit Hulk suggests giving power to the victims and using “skilled moderation and the willingness to kick people off platforms for violating rules about abuse”

      The "don't feed the trolls" advice seems out of date now, especially given how frequent internet harassment is. I agree with Film Crit Hulk that ignoring trolls would just make them more aggressive. Instead of blaming victims, platforms should step up and implement stricter moderation to prevent trolls from worsening situations.

    2. Do not argue with trolls - it means that they win

      I agree with this statement because if you continue to respond to someone who is trolling, you give them what they want - they are doing this for fun, to get an emotional reaction out of someone. I think it's best to just ignore them rather than let them feel good about what they're doing.

    1. Trolling is a method of disrupting the way things are, including group structure and practices. Like these group-forming practices, disruptive trolling can be deployed in just or unjust ways. (We will come back to that.) These disruptive tactics can also be engaged with different moods, ranging from playful (like some flashmobs), to demonstrative (like activism and protests), to hostile, to warring, to genocidal. You may have heard people say that the difference between a coup and a revolution is whether it succeeds and gets to later tell the story, or gets quashed. You may have also heard that the difference between a traitor and a hero depends on who is telling the story.

      I feel that is a very fine line that people walk on the internet. These types of actions create Karens with self-righteous behavior. Without proper research, they just come across as jerks. It's not everyone's job to police and give their opinions.

    1. For me, music is my cultural artifact; it is a mirror to society's values, struggles, and successes. Everybody listens to music at some point, if it's in the car, while you're studying, or just want to have a good time. Music allows everyone to express their emotions and experiences in a way that words alone cannot. Frank Ocean is one of my favorite artists to listen to; he is amazing at showing how personal and thoughtful lyrics can resonate with listeners.

      Music is a bit too broad to be thought of as one "artifact," but the artist Frank Ocean could certainly count.


    1. Sunday Morning: Fake news, social media, and "The Death of Truth," by Ted Koppel. Updated on: September 8, 2024 / 10:24 AM EDT / CBS News. We live in an age of alternate facts. More and more Americans are getting their information almost entirely from outlets that echo their own political point of view. And then, of course, there's social media, where there are few (if any) filters between users and a wide world of misinformation. For example: On July 13 a sniper came within inches of assassinating Donald Trump as he addressed an outdoor rally in Pennsylvania. Within minutes, social media was alive with uninformed speculation. One woman posted, "Who did it? I bet you it was the government themselves. They're all on the same side." Koppel said, "We have no idea who she is, she has no particular credibility. Why should I even care that she is out there?" "Because she could potentially have an audience," said journalist and author Steven Brill. "If the algorithm gives it steam, that could be seen by millions of people." And then on X (formerly Twitter), this message: "You're telling me the Secret Service let a guy climb up on a roof with a rifle only 150 yards from Trump? Inside job." That message has seven million views and counting. Brill said, "We're at a point where nobody believes anything. Truth as a concept is really in trouble. It's suspect." The cumulative impact of the lies and distortions just keeps growing, such that Brill titled his new book "The Death of Truth." "There are facts," he said, "and it used to be in this world that people could at least agree on the same set of facts and then they could debate what to do about those facts."

      I think the "Death of Truth" starts with the communist-thinking maniacs being displayed all over national TV, telling us what we need to believe and how we need to make decisions. At the end of the day, I'm making my decisions based on what benefits me as an American citizen, not just settling for Bidenomics and a bad economy, because I don't necessarily agree with EVERYTHING Donald Trump says or does.

    1. The Ukrainians, unfortunately, can only hold out for so much longer before even those Western-provided air defenses are destroyed and Ukrainian positions along the front are overrun

      Both sides are losing air defences, but arguably Russia is more affected. It is not just about numbers; it's about the supply of air interceptors, replacing lost systems, how much area the systems need to protect, and how effective the systems are.

      The most important Ukrainian air defences, the Patriots, are very difficult for Russia to destroy, and the other systems are highly mobile too. So far Russia has only destroyed two launchers for one Patriot air defence system (https://postlmg.cc/Th15s6RY) and two launchers for a NASAMS air defence system, according to Oryx. For most types of system, Russia is losing more than Ukraine, according to Oryx.

      However Russia has a big disadvantage here.

      1. It has a far larger area to cover - especially now that Ukraine can hit targets as far away as the Arctic Circle.

      2. Russia has no external supplier but has to replace any destroyed air defence systems itself. So as time goes on Russia is getting less protected, but with help from its allies, Ukraine is getting better protected.

      3. Russia's air defence systems are not as good. The Patriot air defence system is very effective if supplied with enough interceptors; the main problem Ukraine has is that it doesn't have enough of them. But Russian air defences, even the most modern S-400, seem to be rather easy for ATACMS to penetrate. They can't protect even the Kerch bridge in Crimea: Ukraine has frequently hit targets right next to the Kerch bridge, which is a priority for Russian air defences. Here is an example, hitting the Crimean side of the Kerch ferry crossing:

      QUOTE STARTS

      On the night of May 30, 2024, the Ukrainian Defense Forces successfully struck a Russian ferry crossing in the temporarily occupied Crimea.

      The strike was reportedly carried out with ATACMS ballistic missiles. They targeted the Kerch ferry, which was actively used by the enemy for its troops.

      The General Staff noted that the ferry was covered by modern Russian air defense systems – Pantsir-S1, Tor, and S-400 Triumph, but despite this, American-made missiles successfully hit the target. https://mil.in.ua/en/news/atacms-ballistic-missiles-target-kerch-ferry-crossing-general-staff-reveals/

      This is another example: Ukraine destroyed a strategic Russian roll-on/roll-off railway ferry while it was docked in the port of Kavkaz, across the Kerch Strait from Crimea. https://maritime-executive.com/article/video-russian-kerch-strait-ferry-destroyed-by-fire-after-ukrainian-attack

      This is 12.5 km from the bridge.

      https://www.google.com/maps/place/Port+Kavkaz/@45.27063,36.4302071,20173m/data=!3m1!1e3!4m6!3m5!1s0x40ee95e045949fa7:0xf39c8a7c88a3fcaa!8m2!3d45.341195!4d36.6737069!16zL20vMDJ0emw3?entry=ttu&g_ep=EgoyMDI0MTAxMy4wIKXMDSoASAFQAw%3D%3D

      And the Ukrainians continue to destroy many Russian air defence systems. For most types of system, the Ukrainians destroy more than the Russians do.

      The Oryx site lists the following figures:

      RUSSIA:

      Anti-Aircraft Guns (55, of which destroyed: 36, damaged: 1, captured: 18)

      Self-Propelled Anti-Aircraft Guns (27, of which destroyed: 16, damaged: 2, abandoned: 2, captured: 7)

      Jammers And Deception Systems (84, of which destroyed: 63, damaged: 12, captured: 9)

      Surface-To-Air Missile Systems (280, of which destroyed: 206, damaged: 46, abandoned: 4, captured: 24)

      Radars (81, of which destroyed: 49, damaged: 22, captured: 10)

      https://www.oryxspioenkop.com/2022/02/attack-on-europe-documenting-equipment.html

      UKRAINE:

      Anti-Aircraft Guns (4, of which captured: 4) [captured 18 Russian ones and lost 4, so Ukraine gained 14; Russia: +4-55 = -51 - the net figures in brackets are computed as sketched after these lists]

      Self-Propelled Anti-Aircraft Guns (34, of which destroyed: 23, damaged: 5, abandoned: 1, captured: 5) [Ukraine: 9-34 = -25. Russia: 5-27 = -22]

      Jammers And Deception Systems (7, of which destroyed: 4, damaged: 2, captured: 1) [Ukraine: 9-7 = +2. Russia: 1-84 = -83]

      Surface-To-Air Missile Systems (166, of which destroyed: 140, damaged: 19, abandoned: 1, captured: 6) [Ukraine: 24-166 = -142, Russia: 6-280 = -274]

      Radars And Communications Equipment (131, of which destroyed: 97, damaged: 20, abandoned: 1, captured: 13) [Ukraine: 10-131 = -121, Russia: 13-81 = -68]

      https://www.oryxspioenkop.com/2022/02/attack-on-europe-documenting-ukrainian.html
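      For transparency, here is a minimal sketch of the bookkeeping behind the bracketed net figures, using the surface-to-air missile system counts quoted above as the example (it is just subtraction, shown explicitly for clarity):

      # Bookkeeping behind the bracketed net figures (Oryx SAM system counts above).
      russia_total_lost = 280     # Russian SAM systems destroyed/damaged/abandoned/captured
      captured_by_ukraine = 24    # of those Russian losses, captured by Ukraine
      ukraine_total_lost = 166    # Ukrainian SAM systems destroyed/damaged/abandoned/captured
      captured_by_russia = 6      # of those Ukrainian losses, captured by Russia

      net_ukraine = captured_by_ukraine - ukraine_total_lost   # 24 - 166 = -142
      net_russia = captured_by_russia - russia_total_lost      # 6 - 280  = -274
      print(net_ukraine, net_russia)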

    2. The Russians can continue churning out Soviet-era airframes like sausage

      Russia can sustain the current rate of fighter jet losses for a long time, so long as the rate of attrition doesn't go up very much. But its air force is shrinking with time, and it is losing some irreplaceable aircraft such as its airborne command units.

      The airframes are also getting battered by the war. That means more maintenance, more chance of planes crashing, and aircraft that are not quite at peak efficiency.

      Russia is also losing its pilots and other skilled crew - partly through direct losses, and partly because Russia often sends skilled people to the front line as ordinary infantrymen, a role they are not trained for, and loses them.

      So far its production rate has not yet been able to keep up with the losses. It did start the war with a huge numerical advantage and it is going to keep that advantage through the next few years unless the attrition ratio goes way up - but it's not going to improve on it any time soon.

      The Russian air force is shrinking and is not likely to turn that around and expand, though Russia can reduce the rate at which it shrinks.

      Meanwhile the Ukrainian air force is growing with the F-16s, which its allies can easily supply because they are surplus aircraft being replaced with F-35s. The West produces around 260 fighter jets a year. Russia produced maybe 50 in 2024, fewer in 2022 and 2023, and it has quality issues because of the sanctions. It is also losing some irreplaceable aircraft that it can no longer build, such as airborne command and control aircraft.

      The Western production rate is far greater - there is no comparison. The West also has large numbers of decommissioned F-16s. The ones that Ukraine gets from its allies are being replaced by the F-35, which is far more capable than anything Russia has; even the new Su-57 is not nearly as stealthy as the F-35.

      So Ukraine is mainly getting surplus jets that its suppliers would otherwise send to scrap or try to find a buyer somewhere else in the world.

      This is not denting the numbers of fighter jets available in the West at all. Indeed, even if Ukraine gets all the promised F-16s and more, this doesn't impact the Western air forces.

      The bottleneck for Ukraine is training its pilots not the supply of F-16s.

      Its allies could remove this bottleneck by permitting Ukraine to employ retired fighter jet pilots from any of the many countries that now fly F-16s, but so far they have not considered giving this permission. Those pilots would already be combat trained and experienced. It would not make the countries of origin participants in the war; it would just make the pilots foreign fighters, like the many other volunteer foreign fighters already in Ukraine.

      QUOTE STARTS

      • Russia produced 22-26 combat jets in 2022 and 29-50 in 2023, focusing on fighter jets over civilian aircraft.
      • Russian Air Force has been ineffective in gaining air superiority against a resourceful Ukraine.
      • Russia lost 121 military aircraft in the Ukraine war but can sustain these losses for a long time. ... While international sanctions brought civilian Russian aircraft production to a shuddering halt, Russia continues producing fighter jets. Due to Russia running a war economy, fighter jets are likely to be prioritized over the production of civilian airliners and other aircraft.

      While sanctions may be unable to stop Russian fighter jet production, they work to make production more difficult, create supply bottlenecks, slow production down, sometimes force compromises on production quality, and make the jets more expensive. ... According to data gathered by the military analysis channel Binkov's Battlegrounds, Russia produced 22 to 26 combat jets (including one Tu-160M strategic bomber) in 2022 and 29 to 50 new combat jets in 2023 (with one Tu-160M strategic bomber).

      Binkov concludes that Russia's production is likely to be sufficient to sustain the level of Su-27 Flanker (and derivatives) losses seen in this war so far. ... By contrast, Lockheed Martin alone produces around 156 fifth-generation fighter jets a year (including many for export). Western fighter jets in production include F-35s, F-16s, F-15EXs, Dassault Rafales, Typhoon Eurofighters, and Swedish Gripens. The combined Western fighter jet production is within the ballpark of 260 fighter jets. ...

      "The high utilisation rates of some tactical aircraft types are also increasing demand on maintenance and support, placing further stress on industry."- IISS

      https://simpleflying.com/russian-combat-aircraft-production-rates/

    3. even though its force is more advanced, better equipped, and far more numerous than the opposing Ukrainian Air Force.

      This is a remarkable thing about the war: Ukraine, with only 72 fighters, holds off 809. The numbers speak for themselves: at a ratio of 11 Russian fighters to every 1 Ukrainian fighter, even higher in 2022, Russia has never been able to take over the Ukrainian air space beyond the occupied region.

      These numbers show that Ukraine MUST have far far better pilots than Russia. It would be impossible for one Mig-29 to fight off 11 Russian fighter jets, many of them far more advanced than the Mig-29.

      Early in 2022 they just had the Stinger shoulder mounted ground to air missiles. Later on they got S-300 systems from Slovakia which forced the Russians to fly close to the ground.

      This is not because of one brave and extraordinary "Ghost of Kyiv". People make up explanations for Ukraine being able to hold back the vastly superior Russian air force, and this was a popular fiction to explain it - such stories are common in war; the same happened in WW2. But it's not the real reason.

      It is because the Ukrainian air force has had training with NATO, has focused on changing how it does things since 2014, and is a modern air force that uses modern ideas. It is still somewhat stuck in Soviet doctrine, but it is far more modern than Russia's.

      It is not so much that the Ukrainians are superior, though they have also done a lot of innovation on top of what NATO taught them, improvising for the war - for example, learning how to fly very close to the ground, and the way they distracted the Russian air defences with a simple drone in order to sink the Moskva with a Neptune.

      But the reason Ukraine could hold off Russia is because the Russians are so very weak in the air.

      It is because of endemic issues in the Russian air force. Their pilots are not permitted to take much initiative but have to obey the orders of the general.

      If the general says "Fly from here to there and bomb that target" that is what they have to do.

      They mostly do point to point missions with a single fighter jet on a mission as in WW2.

      They are dependent on mobile air commands in the air, large expensive aircraft that fly far behind the front line because they can be shot down easily.

      The generals and the air command don't have a good idea of the situation.

      But most of all Russia clearly has not trained in combined operations where large groups of pilots work together to achieve an objective. All they can do is to do these point to point missions under the command of a general.

      Russian fighter pilots work on their own. They are not used to working with other pilots just to working with generals that tell them what to do.

      The details would be more complex but you can understand the basics with simple maths.

      100 fighter jets working together could surely easily overpower 10 Mig29s working together.

      But even 100 fighter jets coming one at a time on separate missions can surely be held back by 10 Mig-29s working together using modern methods; indeed they wouldn't even try, as each engagement would be a massacre with a local 10-to-1 advantage for Ukraine.

      This is not theoretical. It happened all through 2022 before Ukraine got its advanced air defences.

      So that is the reason that experts give. This was a huge surprise to most Western analysts: they had no idea how poor the training was for Russian pilots, and given the huge ratio of numbers they expected Russia to take over the Ukrainian air space in the first few days. It never happened.

      It is partly also that Putin didn't prioritize it.

      The experts expected that if Russia invaded, it would first spend a couple of days destroying the Ukrainian air force before any tanks entered Ukraine, and Ukraine would have had far fewer aircraft left if he'd done that. Instead Putin only did this for a few hours, which warned the Ukrainians. A Mig-29 can fly off a short section of highway, so the pilots got into their remaining planes and dispersed all over Ukraine, and Ukraine then rapidly built lots of secret runways hidden in woods and elsewhere, and Russia lost the opportunity to destroy them.

      But it is also partly because the Russian air force just doesn't have the training. Even with an 11-to-1 ratio and only a few dozen fighter jets defending Ukraine, they should have been able to take over the Ukrainian air space very quickly - especially in the first few weeks, when Ukraine didn't even have the S-300 for air defences and the Russian pilots could fly too high to be hit by Stingers.

      But they didn't and they haven't been able to learn since then and still do these point to point missions.

      Things like this can't be fixed quickly because of the many years of training needed for a top quality pilot. After the war is over perhaps Russia can change. But changing it in the middle of an active war would be confusing with the pilots not knowing what to do as it would go against all their training for many years.

      Professor Phillips P. O'Brien talks about this issue here:

      https://web.archive.org/web/20220509173612/https://www.theatlantic.com/ideas/archive/2022/05/russian-military-air-force-failure-Ukraine/629803/

      The article was later updated and the title changed and is now behind a paywall but the original version wasn't paywalled

      SUMMARY:

      This article by Phillips Payson O'Brien and Edward Stringer, writing for The Atlantic, makes the following points:

      • Airpower should have been one of Russia’s greatest advantages over Ukraine, with almost 4,000 combat aircraft and extensive experience.
      • More than two months into the war, Russia’s air force is still fighting for control of the skies.
      • The failure of the Russian air force is the most important, but least discussed, story of the conflict so far.
      • The recent modernization of the Russian air force was mostly for show.
      • Money was wasted and the Russian air force continues to suffer from flawed logistics and lack of regular training.

      https://runway.airforce.gov.au/resources/link-article/overlooked-reason-russia-s-invasion-floundering

      Updated article behind a paywall, which as far as I know just has the title changed: https://www.theatlantic.com/ideas/archive/2022/05/russian-military-air-force-failure-Ukraine/629803/

      As to why Putin didn't want to spend even two days destroying the air force: this is a guess, but it may well be because he was persuaded by false information from his spies that he would be able to take over the Ukrainian government in a couple of days, and so he didn't bother to do a proper military operation.

      He didn't even make sure the tanks had enough fuel to get from Belarus to Kyiv on the ground which is why the tanks kept running out of fuel in the first week or two.

      From intelligence information leaked since then, it was all just a distraction for the main operation, which was to establish an air bridge to Hostomel airport, send in an elite group of tanks, soldiers, etc., and rapidly advance into Kyiv before the Ukrainians were able to defend themselves. Which of course failed.

      So perhaps he didn't want to spend 2 days destroying the planes because by 2 days of bombing he'd have lost the element of surprise which was what he was counting on for the Hostomel air bridge. Even though the air bridge would have been far easier to establish after those 2 days.

      The Ukrainians did have training from 2014 to 2022; this is not in any way secret, it is public and there are lots of stories about it. The Ukrainians also did joint training with NATO, and as recently as 2021 F-16 fighter jets landed in Ukraine as part of those exercises. But NATO did not give them any offensive equipment; it just trained them. This was NOT, and very CLEARLY NOT, with the intent to attack Russia in any way, just to train them to defend themselves, which became a priority after Russia took over Crimea.

      With the pilots, the results speak for themselves. If the Russian pilots were as good as the Ukrainian ones, then 72 Ukrainian fighter jets would have no chance against 814 Russian ones. It is then a question of why that is.

      I didn't say it was because of corruption, though that may be a factor. It is mainly that the Russians still use WW2 tactics, where each fighter pilot is given his own separate mission and the pilots are not able to work with each other in the field.

      At least that is what the Western analysts that I follow say. There may be other reasons, but what is absolutely certain is that the Ukrainians are far better pilots than the Russians. As to why that is, you can of course form your own theories.

      According to Global Firepower, Ukraine has 72 fighter jets as of 2024 and Russia has 809, so Russia has more than 10 times as many. When you look at total aircraft it's an even bigger ratio.

      Ukraine will eventually be getting 85 F-16s promised by the Netherlands, Denmark and Norway. Russia will still have many more fighter jets than Ukraine. Also, the Ukrainians have only had a year to learn how to fly their jets, and it takes a lot longer to really master them, though they'd quite quickly be able to fly them like a Mig-29 with more stealth.

      Biden gave countries permission to send them to Ukraine in August 2023. So it is not new; all that's new is that they may arrive in Ukraine soon. Other countries gave Ukraine Mig-29 fighter jets starting in March 2023, and Ukraine has had about 50 fighter jets since soon after the war started. It had probably 98 when the war started. Russia destroyed about half of those in the first few days, but it made only a short, half-hearted attempt at destroying them, so Ukraine was able to save half of them.

      Ever since then it has been flying them off remote airfields hidden away in forests, and from roads.

      So Russia has 10 military aircraft for every Ukrainian aircraft. Also, the Ukrainian ones are ancient Soviet-era aircraft, mainly a legacy from when Ukraine split off from the Soviet Union. Russia has far more modern aircraft that Ukraine doesn't have, which can spot Mig-29s from far beyond the range at which a Mig-29 can see them and fire air-to-air missiles at them, with the Mig-29 not able to do anything back except hide by flying too low for the radar to spot.

      Western analysts expected Russia to take over Ukraine's air space quickly with waves of fighter jets. But it turned out that Russian pilots have never learnt how to do that; all they know is how to fly to a point set in advance by a commander, drop a bomb there, and quickly fly back again. Russia is simply unable to win battles in the air even with an advantage of 10 to 1. The only explanation that makes sense is that the Russian pilots are simply not trained to do this. By NATO standards they are very badly trained, and that can't be changed in the middle of a war, not easily. They have made some adaptations in how they drop bombs, e.g. flying low, tossing glide bombs into the air at the last minute, and quickly turning back. But the Russian commanders are not prepared to give the pilots the initiative to make decisions by themselves in a quickly changing air battle; the Russian approach is very hierarchical, with pilots not trained to take any initiative themselves, just to do what the commanders tell them to do. They also can't work effectively with ground forces, often making mistakes, and are not trained in combined operations.

      Ukraine quickly got the ability to stop them dropping bombs easily on most of Ukraine, and it kept control of the air space over most of the country through to spring 2023, when NATO countries started giving it advanced air defences to protect itself.

      So - NATO countries are going to give Ukraine a few dozen F-16 fighter jets. These are ancient technology for NATO as they are destined for scrap otherwise. NATO has far too many F-16s because they are replacing them by F-35s which are vastly superior to anything Russia has. But the F-16s are equivalent to the most modern Russian fighter jets.

      Russia still has many more modern fighter jets than the F-16s NATO is giving to Ukraine. It will still have a 5-to-1 advantage in fighter jets, many of them modern.

      So this donation would be of very little use if Russia was able to fight in the air like NATO. That's partly why NATO countries think this will hardly make any difference in the war.

      But Ukraine thinks it will make a big difference and they are the ones who have experience fighting Russian pilots in the air. If it does make a big difference this will be another confirmation that the Russian pilots are just not very well trained.

      So we'll see who was right. They are not magic weapons, and to start with the Ukrainians will be very inexperienced at using them in combat, so they won't make a big difference on day 1. However, by the end of the war Ukraine will be the only country in the world with experience fighting Russian fighter jets with F-16s.

      To start with the F-16s will fly far from the front line just shooting down drones and cruise missiles which they are able to do with air to air missiles. That will help protect the cities. The F-16s in turn would be protected by the Patriot air defences and shoot down missiles that get through.

      Later they may be able to fly closer to the front line and shoot down the bombers that fire glide bombs at Ukraine.

      Then as they get more experienced they will be able to fly along the front line and support any Ukrainian counteroffensives. A counteroffensive supported by the Mig-29s along with a dozen or so F-16s will be much safer than one that has to fight with Russian military jets flying overhead until air defences can be set up.

      So - the F-16s may make a big difference. But nothing like if NATO was to give them F-35s.

      And Putin is not going to attack NATO; that makes no sense. If he is so bothered by F-16s that he worries they will mean he loses the war against Ukraine quickly, it makes no sense to then attack NATO, with its F-35s, which are supersonic yet have a radar cross-section about the size of a baked potato and are effectively invisible to Russian radar, and with its Tomahawk cruise missiles and other missiles with a range of 2,400 km instead of the ATACMS with a similar payload and a range of 300 km, etc.

      An F-35 test pilot said that with a few F-35s Ukraine could quickly take over all the occupied air space and shoot out the radar systems from the air before Russia could see them and get total air control over the occupied regions of Ukraine quickly.

      But NATO is very, very cautious. Its aim is to give Ukraine enough equipment so that it can win, but not enough capability to win dramatically by, e.g., sinking the entire Black Sea fleet in a few hours or taking over the air space over occupied Ukraine in a few hours, like a NATO country could do. Ukraine isn't asking for that capability either.

      So that is not going to happen. But Ukraine CAN do major counteroffensives by blocking off the supply routes, because Russia's war depends on a very few vulnerable supply routes such as the Azov coast road. As we saw with Kherson city in the fall of 2022, if Ukraine can cut off the supply route - in that case the Antonovsky bridge across the Dnipro river - then Russian soldiers at the front line run out of fuel, shells and missiles, and their air defences run out of air interceptors. With no way to supply them, they have to retreat.

      So - Ukraine has opportunities to do that by cutting through the Azov sea coast road, the bridges from Crimea to Kherson oblast, and the Kerch bridge. That would liberate half of the currently occupied Ukraine and put Crimea at risk. It would then be very hard for Russia to supply Crimea once Ukraine has control of Kherson oblast and part of Zaporizhzhia oblast and perhaps has regained Mariupol.

      It is not impossible Ukraine gets that far even this year, but most likely in 2025. Then once that happens Putin is likely to be more in a mood for treaty negotiations.

      BLOG: Why F-16s will make such a difference to Ukraine - can fly from Ukraine - ancient technology by NATO standards - roughly equal in capability to Russia’s best fighter jets which currently dominate the air space over front lines https://debunkingdoomsday.quora.com/Why-F-16s-will-make-such-a-difference-to-Ukraine-can-fly-from-Ukraine-ancient-technology-by-NATO-standards-roughly

    1. Epiphany:

      Migration has proved to be a powerful force for development, improving the lives of hundreds of millions of migrants, their families, and the societies in which they live across the world.

      Migration is more than just individual movement; it’s a significant driver of global development. Migrants contribute to their destination countries by filling labor gaps, bringing diverse skills, and adding cultural richness, which can enhance societal growth. Recognizing migration’s broader benefits highlights the importance of policies that support and maximize these developmental gains for both migrants and host countries. However, the real barrier isn’t the physical act of migration; it’s the legal and social constraints migrants face once they arrive. This lack of citizenship, and the rights it brings, reveals why so many migrants struggle despite reaching safer or more prosperous countries. Access to rights, rather than mere location, is key to improving the migrant experience and enabling them to truly thrive in their new environments.

      This is connected to this article: https://www.un.org/en/global-issues/migration

    1. using a Twitter poll as an excuse, but how many of the votes were bots?

      I thought this was interesting because, given all the misinformation that led Trump's Twitter account to get banned in the first place, bots could've been the reason for reinstating him. It's just an interesting thought about how little on the internet can be trusted. Even as previously stated with the Android and iPhone example, it's hard to tell what's true and what's not.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews:

      Reviewer #1 (Public Review):

      Summary:

      The authors investigated how the presence of interspecific introgressions in the genome affects the recombination landscape. This research was intended to inform about genetic phenomena influencing the evolution of introgressed regions, although it should be noted that the research itself is based on examining only one generation, which limits the possibility of drawing far-reaching evolutionary conclusions. In this work, yeast hybrids with large (from several to several dozen percent of the chromosome length) introgressions from another yeast species were crossed. Then, the products of meiosis were isolated and sequenced, and on this basis, the genome-wide distribution of both crossovers (COs) and noncrossovers (NCOs) was examined. Carrying out the analysis at different levels of resolution, it was found that in the regions of introduction, there is a very significant reduction in the frequency of COs and a simultaneous increase in the frequency of NCOs. Moreover, it was confirmed that introgressions significantly limit the local shuffling of genetic information, and NCOs are only able to slightly contribute to the shuffling, thus they do not compensate for the loss of CO recombination.

      Strengths:

      - Previously, experiments examining the impact of SNP polymorphism on meiotic recombination were conducted either on the scale of single hotspots or the entire hybrid genome, but the impact of large introgressed regions from another species was not examined. Therefore, the strength of this work is its interesting research setup, which allows for providing data from a different perspective.

      - Good quality genome-wide data on the distribution of CO and NCO were obtained, which could be related to local changes in the level of polymorphism.

      Weaknesses:

      (1)  The research is based on examining only one generation, which limits the possibility of drawing far-reaching evolutionary conclusions. Moreover, meiosis is stimulated in hybrids in which introgressions occur in a heterozygous state, which is a very unlikely situation in nature. Therefore, I see the main value of the work in providing information on the CO/NCO decision in regions with high sequence diversification, but not in the context of evolution.

      While we are indeed only examining recombination in a single generation, we respectfully disagree that our results aren't relevant to evolutionary processes. The broad goals of our study are to compare recombination landscapes between closely related strains, and we highlight dramatic differences between recombination landscapes. These results add to a body of literature that seeks to understand the existence of variation in traits like recombination rate, and how recombination rate can evolve between populations and species. We show here that the presence of introgression can contribute to changes in recombination rate measured in different individuals or populations, which has not been previously appreciated. We furthermore show that introgression can reduce shuffling between alleles on a chromosome, which is recognized as one of the most important determinants for the existence and persistence of sexual reproduction across all organisms. As we describe in our introduction and conclusion, we see our experimental exploration of the impacts of introgression on the recombination landscape as complementary to studies inferring recombination and introgression from population sequencing data and simulations. There are benefits and challenges to each approach, but both can help us better understand these processes. In regards to the utility of exploring heterozygous introgression, we point out that introgression is often found in a heterozygous state (including in modern humans with Neanderthal and/or Denisovan ancestry). Introgression will always be heterozygous immediately after hybridization, and depending on the frequency of gene flow into the population, the level of inbreeding, selection against introgression, etc., introgression will typically be found as heterozygous.

      - The work requires greater care in preparing informative figures and, more importantly, re-analysis of some of the data (see comments below).

      More specific comments:

      (1) The authors themselves admit that the detection of NCO, due to the short size of conversion tracts, depends on the density of SNPs in a given region. Consequently, more NCOs will be detected in introgressed regions with a high density of polymorphisms compared to the rest of the genome. To investigate what impact this has on the analysis, the authors should demonstrate that the efficiency of detecting NCOs in introgressed regions is not significantly higher than the efficiency of detecting NCOs in the rest of the genome. If it turns out that this impact is significant, analyses should be presented proving that it does not entirely explain the increase in the frequency of NCOs in introgressed regions.

      We conducted a deeper exploration of the effect of marker resolution on NCO detection by randomly removing different proportions of markers from introgressed regions of the fermentation cross in order to simulate different marker resolutions from non-introgressed regions. We chose proportions of markers that would simulate different quantiles of the resolution of non-introgressed regions and repeated our standard pipeline in order to compare our NCO detection at the chosen marker densities. More details of this analysis have been added to the manuscript (lines 188-199, 525-538). We confirmed the effect of marker resolution on NCO detection (as reported in the updated manuscript and new supplementary figures S2-S10, new Table S10) and decided to repeat our analyses on the original data with a more stringent correction. For this we chose our observed average tract size for NCOs in introgressed regions (550bp), which leads to a far more conservative estimate of NCO counts (As seen in the updated Figure 2 and Table 2). This better accounts for the increased resolution in introgressed regions, and while it's possible to be more stringent with our corrections, we believe that further stringency would be unreasonable. We also see promising signs that the correction is sufficient when counting our CO and NCO events in both crosses, as described in our response to comment 39 (response to reviewer #3).
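      To make the down-sampling idea concrete, here is a minimal sketch of the kind of procedure described above (function and variable names, the random seed, and the example marker spacing are illustrative, not the exact pipeline):

      # Sketch: randomly retain a fraction of markers in an introgressed region so that
      # its marker density matches a chosen quantile of non-introgressed regions,
      # before re-running NCO detection on the thinned markers.
      import numpy as np

      rng = np.random.default_rng(1)

      def downsample_markers(marker_positions, keep_fraction):
          marker_positions = np.asarray(marker_positions)
          n_keep = int(round(keep_fraction * marker_positions.size))
          kept = rng.choice(marker_positions.size, size=n_keep, replace=False)
          return np.sort(marker_positions[kept])

      # Example: thin an introgressed region's markers to 20% of their original density.
      sparse_markers = downsample_markers(np.arange(0, 50_000, 75), keep_fraction=0.2)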

      (2) CO and NCO analyses performed separately for individual regions rarely show statistical significance (Figures 3 and 4). I think that the authors, after dividing the introgressed regions into non-overlapping windows of 100 bp (I suggest also trying 200 bp, 500 bp, and 1kb windows), should combine the data for all regions and perform correlations to SNP density in each window for the whole set of data. Such an analysis has a greater chance of demonstrating statistically significant relationships. This could replace the analysis presented in Figure 3 (which can be moved to Supplement). Moreover, the analysis should also take into account indels.

      We're uncertain of what is being requested here. If the comment refers to the effect of marker density on NCO detection, we hope the response to comment 2 will help resolve this comment as well. Otherwise, we ask for some clarification so that we may correct or revise as appropriate.

      (3) In Arabidopsis, it has been shown that crossover is stimulated in heterozygous regions that are adjacent to homozygous regions on the same chromosome (http://dx.doi.org/10.7554/eLife.03708.001, https://doi.org/10.1038/s41467-022-35722-3).

      This effect applies only to class I crossovers, and is reversed for class II crossovers (https://doi.org/10.15252/embj.2020104858, https://doi.org/10.1038/s41467-023-42511-z). This research system is very similar to the system used by the authors, although it likely differs in the level of DNA sequence divergence. The authors could discuss their work in this context.

      We thank the reviewer for sharing these references. We have added a discussion of our work in the context of these findings in the Discussion, lines 367-376.

      Reviewer #2 (Public Review):

      Summary:

      Schwartzkopf et al characterized the meiotic recombination impact of highly heterozygous introgressed regions within the budding yeast Saccharomyces uvarum, a close relative of the canonical model Saccharomyces cerevisiae. To do so, they took advantage of the naturally occurring Saccharomyces bayanus introgressions specifically within fermentation isolates of S. uvarum and compared their behavior to the syntenic regions of a cross between natural isolates that do not contain such introgressions. Analysis of crossover (CO) and noncrossover (NCO) recombination events shows both a depletion in CO frequency within highly heterozygous introgressed regions and an increase in NCO frequency. These results strongly support the hypothesis that DNA sequence polymorphism inhibits CO formation, and has no or much weaker effects on NCO formation. Eventually, the authors show that the presence of introgressions negatively impacts "r", the parameter that reflects the probability that a randomly chosen pair of loci shuffles their alleles in a gamete.

      The authors chose a sound experimental setup that allowed them to directly compare recombination properties of orthologous syntenic regions in an otherwise intra-specific genetic background. The way the analyses have been performed looks right, although this reviewer is unable to judge the relevance of the statistical tests used. Eventually, most of their results which are elegant and of interest to the community are present in Figure 2.

      Strengths:

      Analysis of crossover (CO) and noncrossover (NCO) recombination events is compelling in showing both a depletion in CO frequency within highly heterozygous introgressed regions and an increase in NCO frequency.

      Weaknesses:

      The main weaknesses refer to a few text issues and a lack of discussion about the mechanistic implications of the present findings.

      - Introduction

      (1) The introduction is rather long. I suggest specifically referring to "meiotic" recombination (line 71) and to "meiotic" DSBs (line 73) since recombination can occur outside of meiosis (i.e. somatic cells).

      We agree and have condensed the introduction to be more focused. We also made the suggested edits to include “meiotic” when referring to recombination and DSBs.

      (2) From lines 79 to 87: the description of recombination is unnecessarily complex and confusing. I suggest the authors simply remind that DSB repair through homologous recombination is inherently associated with a gene conversion tract (primarily as a result of the repair of heteroduplex DNA by the mismatch repair (MMR) machinery) that can be associated or not to a crossover. The former recombination product is a crossover (CO), the latter product is a noncrossover (NCO) or gene conversion. Limited markers may prevent the detection of gene conversions, which erase NCO but do not affect CO detection.

      We changed the language in this section to reflect the reviewer’s suggestions.

      (3) In addition, "resolution" in the recombination field refers to the processing of a double Holliday junction containing intermediates by structure-specific nucleases. To avoid any confusion, I suggest avoiding using "resolution" and simply sticking with "DSB repair" all along the text.

      We made the suggested correction throughout the paper.

      (4) Note that there are several studies about S. cerevisiae meiotic recombination landscapes using different hybrids that show different CO counts. In the introduction, the authors refer to Mancera et al 2008, a reference paper in the field. In this paper, the hybrid used showed ca. 90 CO per meiosis, while their reference to Liu et al 2018 in Figure 2 shows less than 80 COs per meiosis for S. cerevisiae. This shows that it is not easy to come up with a definitive CO count per meiosis in a given species. This needs to be taken into account for the result section line 315-321.

      This is an excellent point. We added this context in the results (lines 180-187).

      (5) In line 104, the authors refer to S. paradoxus and mention that its recombination rate is significantly different from that of S. cerevisiae. This is inaccurate since this paper claims that the CO landscape is even more conserved than the DSB landscape between these two species, and they even identify a strong role played by the subtelomeric regions. So, the discussion about this paper cannot stand as it is.

      We agree with the reviewer's point. We also found that the entire paragraph was unnecessary, so it and the sentence in question have been removed.

      (6) Line 150, when the authors refer to the anti-recombinogenic activity of the MMR, I suggest referring to the published work from Martini et al 2011 rather than the not-yet-published work from Cooper et al 2021, or both, if needed.

      Added the suggested citation.

      Results

      (7) The clear depletion in COs and the concomitant increase in NCOs within the introgressed regions strongly suggest that DNA sequence polymorphism triggers CO inhibition but does not affect NCOs, or does so to a much lesser extent. Because most COs likely arise from the ZMM pathway (the CO interference pathway mainly relying on Zip1, 2, 3, 4, Spo16, Msh4, 5, and Mer3) in S. uvarum as in S. cerevisiae, and because the effect of sequence polymorphism is likely mediated by the MMR machinery, this would imply that MMR specifically inhibits the ZMM pathway at some point in S. uvarum. The weak or potentially absent effect of sequence polymorphism on NCO formation suggests that heteroduplex DNA tracts, at least the way they form during NCO formation, escape the anti-recombinogenic effect of MMR in S. uvarum. A few comments about this could be added.

      We have added discussion and citations regarding the biased repair of DSB to NCO in introgression, lines 380-386.

      (8) The same applies to the fact that the CO number is lower in the natural cross compared to the fermentation cross, while the NCO number is the same. This suggests that under similar initiating Spo11-DSB numbers in both crosses, the decrease in CO is likely compensated by a similar increase in inter-sister recombination.

      Thank you to the reviewer for this observation. We agree that this could explain some differences between the crosses.

      (9) Introgressions represent only 10% of the genome, while the decrease in COs is at least 20%. This is a bit surprising, especially in light of CO regulation mechanisms such as CO homeostasis, which tends to keep CO numbers constant. Could the authors comment on that?

      We interpret these results to reflect two underlying mechanisms. First, the presence of heterozygous introgression does reduce the number of COs. Second, we believe the difference in COs reflects variation in recombination rate between strains. We note that CO homeostasis need not apply across different genetic backgrounds. Indeed, recombination rate is known to differ significantly between strains of S. cerevisiae (Raffoux et al. 2018), and recombination rate variation has been observed between strains/lines/populations in many different species including Drosophila, mice, humans, Arabidopsis, maize, etc. We reference S. cerevisiae strain variability in the Introduction (lines 128-130), and have added context in the Results (lines 180-187) and Discussion (lines 343-350).

      (10) Finally, the frequency of NCOs in introgressed regions is about twice the frequency of CO in non-introgressed regions. Both CO and NCO result from Spo11-initiating DSBs.

      This suggests that more Spo11-DSBs are formed within introgressed regions and that such DSBs specifically give rise to NCOs. Could this be related to the lack of homolog engagement, which in turn shuts down Spo11-DSB formation, as observed in ZMM mutants by the Keeney lab? Could this simply result from better detection of NCOs in introgressed regions related to the increased marker density, although the authors claim that NCO counts are corrected for marker resolution?

      The effect noted by the reviewer remains despite the more conservative correction for marker density applied to NCO counts (as described in the response to Reviewer 1, comment #2). Given that CO+NCO counts in introgressed regions are not statistically different between crosses, it is likely that these regions are simply predisposed to a higher rate of DSBs than the rest of the genome. This is an interesting observation, however, and one that we would like to further explore in future work.

      (11) What could be the explanation for chromosome 12 to have more shuffling in the natural cross compared to the fermentation cross which is deprived of the introgressed region?

      We added this text to the Results, lines 323-327, "While it is unclear what potential mechanism is mediating the difference in shuffling on chromosome 12, we note that the rDNA locus on chromosome 12 is known to differ dramatically in repeat content across strains of S. cerevisiae (22–227 copies) (Sharma et al. 2022), and we speculate that differences in rDNA copy number between strains in our crosses could impact shuffling."

      Technical points:

      (12) In line 248, the authors removed NCO with fewer than three associated markers.

      What is the rationale for this? Is the genotyping strategy not reliable enough to consider events with only one or two markers? NCO events can be rather small and even escape detection due to low local marker density.

      We trust the genotyping strategy we used, but chose to be conservative in our detection of NCOs to account for potential sequencing biases.

      (13) Line 270: The way homology is calculated looks odd to this reviewer, especially the meaning of 0.5 homology. A site is either identical (1 homology) or not (0 homology).

      We've changed the language to better reflect what we are calculating (diploid sequence similarity; see comment #28). Essentially, the metric is a probability that two randomly selected chromatids--one from each parent--will share the same nucleotide at a given locus (akin to calculating the probability of homozygous offspring at a single locus). We average it along a segment of the genome to establish an expected sequence similarity if/when recombination occurs in that segment.
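
      As an illustration of this calculation, here is a hypothetical sketch (not the authors' pipeline): the allele representation and the window parameters are assumptions, with the 101 bp window and ~50 bp overlap taken from a later response in this document.

      ```python
      # Hypothetical sketch of a per-site "diploid sequence similarity" and its window
      # average. Each parent is represented by the allele(s) it carries at a site;
      # similarity is the probability that one allele drawn from each parent matches,
      # so a site that is heterozygous in one parent can score 0.5.
      from collections import Counter

      def site_similarity(parent1_alleles, parent2_alleles):
          """Each argument is a tuple of alleles, e.g. ('A', 'T') or ('A', 'A')."""
          counts1, counts2 = Counter(parent1_alleles), Counter(parent2_alleles)
          total = len(parent1_alleles) * len(parent2_alleles)
          return sum(counts1[a] * counts2.get(a, 0) for a in counts1) / total

      def windowed_similarity(per_site, window=101, step=51):
          """Average per-site similarity in sliding windows; the step is chosen here so
          consecutive 101 bp windows overlap by ~50 bp (an assumption)."""
          out = []
          for start in range(0, max(len(per_site) - window + 1, 1), step):
              chunk = per_site[start:start + window]
              out.append(sum(chunk) / len(chunk))
          return out

      print(site_similarity(('A', 'T'), ('A', 'A')))  # 0.5, the intermediate value discussed above
      print(site_similarity(('G', 'G'), ('G', 'G')))  # 1.0 for identical homozygous sites
      ```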

      (14) Line 365: beware that the estimates are for mitotic mismatch repair (MMR). Meiotic MMR may work differently.

      We removed the citation that refers exclusively to mitotic recombination. The statement regarding meiotic recombination otherwise remains consistent with the results from Chen & Jinks-Robertson.

      (15) Figure 1: there is no mention of potential 4:0 segregations. Did the authors find no such pattern? If they did, how were such events handled?

      The program we used to call COs and NCOs (ReCombine's CrossOver program) can detect such patterns, but none were detected in our data.

      Reviewer #3 (Public Review):

      When members of two related but diverged species mate, the resulting hybrids can produce offspring where parts of one species' genome replace those of the other. These "introgressions" often create regions with a much greater density of sequence differences than are normally found between members of the same species. Previous studies have shown that increased sequence differences, when heterozygous, can reduce recombination during meiosis specifically in the region of increased difference. However, most of these studies have focused on crossover recombination, and have not measured noncrossovers. The current study uses a pair of Saccharomyces uvarum crosses: one between two natural isolates that, while exhibiting some divergence, do not contain introgressions; the other is between two fermentation strains that, when combined, are heterozygous for 9 large regions of introgression that have much greater divergence than the rest of the genome. The authors wished to determine if introgressions differently affected crossovers and noncrossovers, and, if so, what impact that would have on the gene shuffling that occurs during meiosis.

      (1) While both crossovers and noncrossovers were measured, assessing the true impact of increased heterology (inherent in heterozygous introgressions) is complicated by the fact that the increased marker density in heterozygous introgressions also increases the ability to detect noncrossovers. The authors used a relatively simple correction aimed at compensating for this difference, and based on that correction, conclude that, while as expected crossovers are decreased by increased sequence heterology, counter to expectations noncrossovers are substantially increased. They then show that, despite this, genetic shuffling overall is substantially reduced in regions of heterozygous introgression. However, it is likely that the correction used to compensate for the effect of increased sequence density is defective, and has not fully compensated for the ascertainment bias due to greater marker density. The simplest indication of this potential artifact is that, when crossover frequencies and "corrected" noncrossover frequencies are taken together, regions of introgression often appear to have greater levels of total recombination than flanking regions with much lower levels of heterology. This concern seriously undercuts virtually all of the novel conclusions of the study. Until this methodological concern is addressed, the work will not be a useful contribution to the field.

      We appreciate this concern. Please see responses to comments #2 and #38. We further note that our results depicted in Figures 3 and 4 are not reliant on any correction or comparison with non-introgressed regions, and thus we consider our results regarding sequence similarity, its effect on the repair of DSBs, and the amount of genetic shuffling with/without introgression to be novel and important observations for the field.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):

      (1) Line 149 - this sentence refers to a mixture of papers reporting somatic or meiotic recombination, and as these processes are based on different crossover pathways, they should not be mixed. For example, it is known that in Arabidopsis MSH2 has a pro-crossover function during meiotic recombination.

      Corrected

      (2) What is unclear to me is how the crosses were planned. Line 308 shows that there were only two crosses (one "natural" and one "fermentation"), but I understand that this is a shorthand and in fact several (four?) different strains were used for the "fermentation cross". At least that's what I concluded from Fig. 1B and its figure caption. This needs to be further explained. Were different strains used for each fermentation cross, or was one strain repeated in several crosses? In Figure 1, it would be worth showing, next to the panel showing the "fermentation cross", a diagram of how the "natural cross" was performed, because as I understand it, panel A illustrates the procedure common to both types of crosses, not the "natural cross" specifically.

      We thank the reviewer for drawing our attention to confusion about how our crosses were created. We performed two crosses, as depicted in Figure 1A. The fermentation cross is a single cross from two strains isolated from fermentation environments. The natural cross is a single cross from two strains isolated from a tree and insect. Table S1 and the methods section "Strain and library construction" describe the strains used in more detail. We modified Figure 1 and the figure legend to help clarify this. See also response to comment #37.

      (3) The authors should provide a more detailed characterization of the genetic differences between chromosomes in their hybrids. What is the level of polymorphism along the S. uvarum chromosomes used in the experiments? Is this polymorphism evenly distributed? What are the differences in the level of polymorphism for individual introgressions? Theoretically, this data should be visible in Figure 2D, but this figure is practically illegible in the present form (see next comment).

      As suggested, we remade Figure 2D to include only chromosomes with an introgression present, and moved the remaining chromosomes to the supplement (Figure S11). The patterns of markers (which are fixed differences between the strains in the focal cross) should be clearer now. As we detail in the Methods (lines 507-508), we utilized a total of 24,574 markers for the natural cross and 74,619 markers for the fermentation cross (the higher number in the fermentation cross being due to more fixed differences in regions of introgression).

      (4) Figure 2D should be prepared more clearly, I would suggest stretching the chromosomes, otherwise, it is difficult to see what is happening in the introgression regions for CO and NCO (data for SNPs are more readable). Maybe leave only the chromosomes with introgressions and transfer the rest to the supplement?

      See previous comment.

      (5) How are the Y scales defined for Figure 2D?

      Figure 2D now includes units for the y-axis.

      (6) Are increases in CO levels in the fermentation cross observed at the borders of introgressions? This would indicate local compensation for recombination loss in the introgressed regions, similar to that often observed for chromosomal inversions.

      We see no evidence of an increase in CO levels at the borders of introgressions, either through visual inspection or by comparing the average CO rate in all fermentation windows to that of windows at the edges of introgressions. This is included in the Discussion, lines 360-366: "While we are limited in our interpretations by only comparing two crosses (one cross with heterozygous introgression and one without introgression), these results are in line with findings in inversions, where heterozygotes show sharp decreases in COs, but the presence of NCOs in the inverted region (Crown et al., 2018; Korunes & Noor, 2019). However, unlike heterozygous inversions where an increase in COs is observed on freely recombining chromosomes (the inter-chromosomal effect), we do not see an increase in COs on the borders flanking introgression or on chromosomes without introgression."

      (7) Line 336 - "We find positive correlations between CO counts..." - you should indicate here that this is between the fermentation and natural crosses; it was quite hard for me to understand what you calculated.

      We corrected the language as suggested.

      (8) The term "homology" usually means "having a common evolutionary origin" and does not specify the level of similarity between sequences, thus it cannot be measured. It is used incorrectly throughout the manuscript (also in the intro). I would use the term "similarity" to indicate the degree of similarity between two sequences.

      We corrected the language as suggested throughout the document.

      (9) Paragraph 360 and Figure 3 - was the "sliding window" overlapping or non-overlapping?

      We added clarifying language to the text in both places. We use a 101bp sliding window with 50bp overlaps.

      (10) Line 369 - what is "...the proportion of bases that are expected to match between the two parent strains..."?

      We clarified the language in this location, and hopefully the changes associated with the comment about sequence similarity will make this passage even clearer in context.

      (11) Line 378 - should it refer to Figure S1 and not Figure 4?

      Corrected.

      (12) Line 399 - should refer to Figure 4, not Figure 5.

      Corrected

      (13) Line 444-449 - the analysis of loss of shuffling in the context of the location of introgression on the chromosome should be presented in the result section.

      We shifted the core of the analysis to the results, while leaving a brief summary in the discussion.

      (14) The authors should also take into account the presence of indels in their analyses, and they should be marked in the figures, if possible.

      We filtered out indels in our variant calling. However, we did analyze our crosses for the presence of large insertions and deletions (Table S2), which can obscure true recombination rates, and found that they were not an issue in our dataset.

      Reviewer #2 (Recommendations For The Authors):

      This reviewer suggests that the authors address the different points raised in the public review.

      (1) This reviewer would like to challenge the relevance of the r-parameter in light of chromosome 12, which has no introgression yet still shows a strong depletion in r in the fermentation cross.

      We added this text to the Results, lines 377-381, "While it is unclear what potential mechanism is mediating the difference in shuffling on chromosome 12, we note that the rDNA locus on chromosome 12 is known to differ dramatically in repeat content across strains of S. cerevisiae (22–227 copies) (Sharma et al. 2022), and we speculate that differences in rDNA copy number between strains in our crosses could impact shuffling."

      (2) This reviewer insists on making sure that NCO detection is unaffected by the marker density, notably in the highly polymorphic regions, to unambiguously support Figure 1C.

      We've changed our correction for resolution to be more aggressive (see response to comment #2), and believe we have now adequately adjusted for marker density (see response to comment #38).

      Reviewer #3 (Recommendations For The Authors):

      I regret using such harsh language in the public review, but in my opinion, there has been a serious error in how marker densities are corrected for, and, since the manuscript is now public, it seems important to make it clear in public that I think that the conclusions of the paper are likely to be incorrect. I regret the distress that the public airing of this may cause. Below are my major concerns:

      (1) The paper is written in a way that makes it difficult to figure out just what the sequence differences are within the crosses. Part of this is, to be frank, the unusual way that the crosses were done, between more than one segregant each from two diploids in both natural and fermentation cases. I gather, from the homology calculations description, that each of these four diploids, while largely homozygous, contained a substantial number of heterozygosities, so individual diploids had different patterns of heterology. Is this correct? And if so, why was this strategy chosen? Why not start with a single diploid where all of the heterologies are known? Why choose to insert this additional complication into the mix? It seems to me that this strategy might have the perverse effect of having the heterology due to the polymorphisms present in one diploid affect (by correction) the impact of a noncrossover that occurs in a diploid that lacks the additional heterology. If polymorphic markers are a small fraction of total markers, then this isn't such a great concern, but I could not find the information anywhere in the manuscript. As a courtesy to the reader, please consider providing at the beginning some basic details about the starting strains: what is the average level of heterology between natural A and natural B, and what fraction of markers are polymorphic; what is the average level of heterology between fermentation A and fermentation B in non-introgressed regions and in introgressed regions, and what fraction of markers are polymorphic? How do these levels of heterology compare to what has been examined before in whole-genome hybrid strains? It also might be worth looking at some of the old literature describing S. cerevisiae/S. carlsbergensis hybrids.

      We thank the reviewer for drawing our attention to confusion about the cross construction. These crosses were conducted as is typical for yeast genetic crosses: we crossed 2 genetically distinct haploid parents to create a heterozygous diploid, then collected the haploid products of meiosis from the same F1 diploid. Because the crosses were made with haploid parents, it is not possible for other genetic differences to be segregating in the crosses. We have revised Figure 1 and its caption to clarify this. Further details regarding the crosses are in the Methods section "Strain and library construction" and in Supplemental Table S1. We only utilized genetic markers that are fixed differences between our parental strains to call CO and NCO. As we detail in the Methods line 507-508, we utilized a total of 24,574 markers for the natural cross and 74,619 markers for the fermentation cross (the higher number in the fermentation cross being due to more fixed differences in regions of introgression). We additionally revised Figure 2D (and Figure S11) to help readers better visualize differences between the crosses.

      (2) There are serious concerns about the methods used to identify noncrossovers and to normalize their levels, which are probably resulting in an artifactually high level of calculated noncrossovers in Figure 2. As a primary indication of this, it appears in Figure 2 that the total frequency of events (crossovers + noncrossovers) in heterozygous introgressed regions is substantially greater than that in the same regions in non-introgressed strains, while just shifting crossovers to noncrossovers would result in no net increase. The simplest explanation for this is that noncrossovers are being undercounted in non-introgressed relative to introgressed heterozygous regions. There are two possible reasons for this: i. The exclusion of all noncrossover events spanning fewer than three markers means that many more noncrossovers will be retained in introgressed heterozygous regions than in non-introgressed regions. Assuming that average non-homology is 5% in the former and 1% in the latter, the average 3-marker event will be 60 nt in introgressed regions and 300 nt in non-introgressed regions - so many more noncrossovers will be counted in introgressed regions. A way to check on this - look at the number of crossover-associated markers that undergo gene conversion; use the fraction that involves < 3 markers to adjust noncrossover levels (this is the strategy used by Mancera et al.). ii. The distance used for noncrossover level adjustment (2kb) is considerably greater than the measured average noncrossover lengths in other studies. The effect of using a too-long distance is to differentially under-correct for noncrossovers in non-introgressed regions, while virtually all noncrossovers in heterozygous introgressed regions will be detected. This can be illustrated by simulations that reduce the density of scored markers in heterozygous introgressed regions to the density seen in non-introgressed regions. Because these concerns go to the heart of the conclusions of the paper, they must be addressed quantitatively - if not, the main conclusions of the paper are invalid.
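
      To make the arithmetic in point (i) above concrete, here is a minimal illustration of the reviewer's reasoning (the 5% and 1% figures are taken from the comment itself; this is not a statement about the authors' pipeline):

      ```python
      # Illustration of the marker-density reasoning above: the expected physical span
      # covering a fixed number of markers scales inversely with marker density, so a
      # three-marker minimum censors far more short NCO tracts where markers are sparse.
      def expected_span_nt(n_markers, marker_density):
          """Approximate span (nt) needed to contain n_markers at the given per-base density."""
          return n_markers / marker_density

      print(expected_span_nt(3, 0.05))  # ~60 nt in a highly heterozygous introgression
      print(expected_span_nt(3, 0.01))  # ~300 nt in the less divergent background
      ```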

      We adjusted the correction factor (See also response to comment #2) and compared the average number of CO and NCO events in introgressed and non-introgressed regions between crosses (two comparisons: introgression CO+NCO in natural cross vs introgression CO+NCO in fermentation cross; non-introgression CO+NCO in natural cross vs non-introgression CO+NCO in fermentation cross). We found no significant differences between the crosses in either of the comparisons. This indicates that the distribution of total events is replicated in both crosses once we correct for resolution.

      (3) It is important to distinguish the landscape of double-strand breaks from the landscape of recombination frequencies. Double-strand breaks, as measured by uncalibrated levels of Spo11-linked oligos, are a relative measure - not an absolute frequency. So it is possible that two species could have a similar break landscape in terms of topography but have absolute levels higher in one species than in the other.

      We agree with this statement; however, we have removed the relevant text to streamline our introduction.

      (4) Lines 123-125. Just meiosis will produce mosaic genomes in the progeny of the F1; further backcrossing will reduce mosaicism to the level of isolated regions of introgression.

      Adjusted the language to be more specific.

      (5) Please provide actual units for the Y axes in Figure 2D.

      We have corrected the units on the axes.

      (6) Tables (general). Are the significance measures corrected for multiple comparisons?

      In Table 3, the cutoff was chosen to be more conservative than a Bonferroni-corrected alpha of 0.01 with 9 comparisons (0.0011). In the text, any result referred to as significant has an associated hypothesis test with a p-value less than its corresponding Bonferroni-corrected alpha of 0.05. This has been clarified in the caption for Table 3 and in the text where relevant.
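
      For reference, a minimal sketch of the Bonferroni arithmetic described in this response (the alpha value and comparison count are taken directly from the text above):

      ```python
      # Bonferroni correction as described above: the per-test threshold is the
      # nominal alpha divided by the number of comparisons.
      def bonferroni_threshold(alpha, n_comparisons):
          return alpha / n_comparisons

      print(bonferroni_threshold(0.01, 9))  # ≈ 0.0011, the reference point for the Table 3 cutoff
      ```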

    1. Reviewer #3 (Public review):

      Summary:

      The authors provide an interesting and novel approach, RCSP, to determining what they call the "root causal genes" for a disease, i.e. the most upstream, initial causes of disease. RCSP leverages perturbation (e.g. Perturb-seq) and observational RNA-seq data, the latter from patients. They show using both theory and simulations that if their assumptions hold then the method performs remarkably well, compared to both simple and available state-of-the-art baselines. Whether the required assumptions hold for real diseases is questionable. They show superficially reasonable results on AMD and MS.

      Strengths:

      The idea of integrating perturbation and observational RNA-seq datasets to better understand the causal basis of disease is powerful and timely. We are just beginning to see genome-wide perturbation assays, albeit in limited cell types currently. For many diseases, research cohorts have at least bulk observational RNA-seq from a/the disease-relevant tissue(s). Given this, RCSP's strategy of learning the required causal structure from perturbations and applying this knowledge in the observational context is pragmatic and will likely become widely applicable as Perturb-seq data in more cell types/contexts become available.

      The causal inference reasoning is another strength. A more obvious approach would be to attempt to learn the causal network structure from the perturbation data and leverage this in the observational data. However, structure learning in high dimensions is notoriously difficult, despite recent innovations such as differentiable approaches. The authors notice that to estimate the root causal effect for a gene X, one only needs access to a (superset of) the causal ancestors of X: much easier relationships to detect than the full network.

      The applications are also reasonably well chosen, being some of the few cases where genome-scale perturb-seq is available in a roughly appropriate (see below) cell-type, and observational RNA-seq is available at a reasonable sample size.

      Weaknesses:

      Several assumptions of the method are problematic. The most concerning is that the observational expression changes are all causally upstream of disease. There is work using Mendelian randomization (MR) showing that the _opposite_ is more likely to be true: most differential expression in disease cohorts is a consequence rather than a cause of disease (https://www.nature.com/articles/s41467-021-25805-y). Indeed, the oxidative stress of AMD has known cellular responses including the upregulation of p53. The authors need to think carefully about how this impacts their framework. Can the theory say anything in this light? Simulations could also be designed to address robustness.

      A closely related issue is the DAG assumption of no cycles. This assumption is brought to bear because it is required for much of the classical causal machinery, but it is unrealistic in biology, where feedback is pervasive. How robust is RCSP to (mild) violations of this assumption? Simulations would be a straightforward way to address this.

      The authors spend considerable effort arguing that technical sampling noise in X can effectively be ignored (at least in bulk). While the mathematical arguments here are reasonable, they miss the bigger-picture point that the measured gene expression X can only ever be a noisy/biased proxy for the expression changes that caused disease: 1) Those events happened before the disease manifested, possibly early in development for some conditions like neurodevelopmental disorders. 2) Bulk RNA-seq gives only an average across cell types, whereas specific cell types are likely "causal". 3) Only a small sample, at a single time point, is typically available. Expression in other parts of the tissue and at different times will be variable.

      My remaining concerns are more minor.

      While there are connections to the omnigenic model, the latter is somewhat misrepresented. 1) The authors refer to the "core genes" of the omnigenic model as being at the end (longitudinally) of pathogenesis. The omnigenic model makes no statements about temporal ordering: in causal inference terminology, the core genes are simply the direct causes of disease. 2) "Complex diseases often have an overwhelming number of causes, but the root causal genes may only represent a small subset implicating a more omnigenic than polygenic model" A key observation underlying the omnigenic model is that genetic heritability is spread throughout the genome (and somewhat concentrated near genes expressed in disease-relevant cell types). This implies that (almost) all expressed genes, or their associated (e)SNPs, are "root causes".

      The claim that root causal genes would be good therapeutic targets feels unfounded. If these are highly variable across individuals then the choice of treatment becomes challenging. By contrast the causal effects may converge on core genes before impacting disease, so that intervening on the core genes might be preferable. The jury is still out on these questions, so the claim should at least be made hypothetical.

      The closest thing to a gold standard I believe we have for "root causal genes" is integration of molecular QTLs and GWAS, specifically coloc/MR. Here the "E" of RCSP are explicitly represented as SNPs. I don't know if there is good data for AMD but there certainly is for MS. The authors should assess the overlap with their results. Another orthogonal avenue would be to check whether the root causal genes change early in disease progression.

      The available perturb-seq datasets have limitations beyond on the control of the authors. 1) The set of genes that are perturbed. The authors address this by simply sub-setting their analysis to the intersection of genes represented in the perturbation and observational data. However, this may mean that a true ancestor of X is not modeled/perturbed, limiting the formal claims that can be made. Additionally, some proportion of genes that are nominally perturbed show little to no actual perturbation effect (for example, due to poor guide RNA choice) which will also lead to missing ancestors.

      The authors provide no mechanism for statistical inference/significance for their results at either the individual or aggregated level. While I am a proponent of using effect sizes more than p-values, there is still value in understanding how much signal is present relative to a reasonable null.

      I agree with the authors that age coming out of a "root cause" is potentially encouraging. However, it is also quite different in nature to expression, including being "measured" exactly. Will RCSP be biased towards variables that have lower measurement error?

      Finally, it's a stretch to call K562 cells "lymphoblasts". They are more myeloid than lymphoid.

  9. inst-fs-iad-prod.inscloudgate.net inst-fs-iad-prod.inscloudgate.net
    1. The first involves a process of "de-Mex.icanization," or subtracting students' culture and language

      Wow, it's surprising that schools admit that they take this step now; back then they protested that they ever did this. I truly think this is so evil. Taking someone's culture just for them to learn a language or a new subject is horrible. Instead of using these kids' prior knowledge or their culture to help them learn, they take it from them and get them in trouble when these students try to speak their first language or when they say something wrong.

    1. That's why it's vital that I make the most of my influence now. By the time they are 11, the message will have been well and truly drummed into them: Clever, ambitious children make the very best of friends.

      The best friends are not just good at studying. Furthermore, it is bad to "drum" something into your children if it clearly has no obvious impact on their childhood, or maybe even negative impacts.

    1. India-Canada relations hit a new low Monday, as each expelled the other’s diplomats in an escalating dispute over the 2023 assassination of a Canadian Sikh separatist. Ottawa accused six Indian diplomats, including New Delhi’s most senior envoy, of being involved in a violent campaign against Indian dissidents in Canada that included the murder of Hardeep Singh Nijjar last year. Denying the allegations, India expelled six Canadian diplomats in a tit-for-tat move that “conveys the full force of New Delhi’s anger with Canada” an analyst said. India has accused Canadian Prime Minister Justin Trudeau of trying to woo Sikh separatist voters ahead of next year’s election. If he loses, Ottawa will be saddled with “unwanted diplomatic baggage” with India, affecting business and trade, The Economic Times wrote.

      silence is causing "all religion" to appear to be extremism

      it's not just here, and it's not just in the middle east; or in "switzerland" (of all places not to choose sides) .. the tacit fact that "silence and not speaking about the truth" has caused "freedom of what we wear" to become the front line of "how to define ..."

      just simply not understanding what it looks like to watch civilization collapse, "over silence."

      over silence.

    1. Author response:

      The following is the authors’ response to the previous reviews.

      Public Reviews:  

      Reviewer #1 (Public Review):  

      Summary:  

      The authors have presented data showing that there is a greater amount of spontaneous differentiation in human pluripotent cells cultured in suspension versus static conditions and have used PKCβ and Wnt signaling pathway inhibitors to decrease the amount of differentiation in suspension culture.

      Strengths:  

      This is a very comprehensive study that uses a number of different reactor designs and scales in addition to a number of unbiased outcomes to determine how suspension impacts the behaviour of the cells and, in turn, how the addition of inhibitors counteracts this effect. Furthermore, the authors were also able to derive new hiPSC lines in suspension with this adapted protocol.

      Weaknesses:  

      The main weakness of this study is the lack of optimization with each bioreactor change. It has been shown multiple times in the literature that the expansion and behaviour of pluripotent cells can be dramatically impacted by impeller shape, RPM, reactor design, and multiple other factors. It remains unclear to me to what extent the results the authors observed (e.g., increased spontaneous differentiation) were due to not having an optimized bioreactor protocol in place (per bioreactor vessel type). For instance - were the starting seeding density, RPM, impeller shape, feeding schedule, and/or any other aspects optimized for any of the reactors used in the study, and if not, how were the values used in the study determined?

      Thank you for your thoughtful comments. According to your comments, we have performed several experiments to optimize the bioreactor conditions in the revised manuscript. We tested several cell seeding densities and several stirring speeds with or without WNT/PKCβ inhibitors (Figure 6—figure supplement 1). We found that seeding densities of 1 - 2 x 10^5 cells/mL and stirring speeds of 50 - 150 rpm were applicable for the proliferation of these cells. Also, PKCβ and Wnt inhibitors suppressed spontaneous differentiation in bioreactor conditions regardless of stirring speed. As for the impeller shape and reactor design, we used the commonly used ABLE bioreactor for the 30 mL scale and Eppendorf bioreactors for the 320 mL scale, which had been designed and used for human pluripotent stem cell culture in previous studies (Matsumoto et al., 2022 (doi: 10.3390/bioengineering9110613); Kropp et al., 2016 (doi: 10.5966/sctm.2015-0253)). We cited these previous studies in the Results and Materials and Methods sections. We believe that these additional data and explanations are sufficient to address your concerns on the optimization of the bioreactor experiments.

      Reviewer #2 (Public Review):  

      This study by Matsuo-Takasaki et al. reported the development of a novel suspension culture system for hiPSC maintenance using Wnt/PKC inhibitors. The authors showed elegantly that inhibition of the Wnt and PKC signaling pathways would repress spontaneous differentiation into neuroectoderm and mesendoderm in hiPSCs, thereby maintaining cell pluripotency in suspension culture. This is a solid study with substantial data to demonstrate the quality of the hiPSC maintained in the suspension culture system, including long-term maintenance in >10 passages, robust effect in multiple hiPSC lines, and a panel of conventional hiPSC QC assays. Notably, large-scale expansion of a clinical grade hiPSC using a bioreactor was also demonstrated, which highlighted the translational value of the findings here. In addition, the author demonstrated a wide range of applications for the IWR1+LY suspension culture system, including support for freezing/thawing and PBMC-iPSC generation in suspension culture format. The novel suspension culture system reported here is exciting, with significant implications in simplifying the current culture method of iPSC and upscaling iPSC manufacturing.  

      Another potential advantage that perhaps wasn't well discussed in the manuscript is that the reported suspension culture system does not require additional ECM to provide biophysical support for iPSCs, which differentiates it from previous studies using hydrogels, and this should further simplify the hiPSC culture protocol.

      Interestingly, although several hiPSC suspension media are currently available commercially, the content of these suspension media remained proprietary, as such the signaling that represses differentiation/maintains pluripotency in hiPSC suspension culture remained unclear. This study provided clear evidence that inhibition of the Wnt/PKC pathways is critical to repress spontaneous differentiation in hiPSC suspension culture.  

      I have several concerns that the authors should address. In particular, it is important to benchmark the reported suspension system against the current conventional culture system (e.g., adherent feeder-free culture), which will be important to evaluate the usefulness of the reported suspension system.

      Thank you for this insightful suggestion. In this revised manuscript, we have performed additional experiments using a conventional medium, mTeSR1 (Stem Cell Technologies, Vancouver, Canada), comparing the suspension system with the adherent feeder-free culture system in four different hiPSC lines simultaneously. Compared to the adherent conditions, the suspension conditions without chemical treatment decreased the expression of self-renewal marker genes/proteins and increased the expression levels of SOX17, T, and PAX6 (Figure 4 - figure supplement 2). Importantly, the treatment with LY333531 and IWR-1-endo in mTeSR1 medium reversed the decreased expression of these undifferentiated markers and suppressed the increased expression of differentiation markers in suspension culture conditions, reaching levels comparable to the adherent culture conditions. These results indicate that these chemical treatments in suspension culture are beneficial even when using a conventional culture medium.

      Also, the manuscript lacks a clear description of a consistent robust effect in hiPSC maintenance across multiple cell lines.  

      Thank you for this insightful suggestion. We have performed additional experiments on hiPSC maintenance across 5 hiPSC lines in suspension culture using StemFit AK02N medium simultaneously (Figure 3C - E). Overall, the treatment with LY333531 and IWR-1-endo in StemFit AK02N medium reversed the decreased expression of these undifferentiated markers and suppressed the increased expression of differentiation markers in suspension culture conditions. Also, as above, we have added results using a conventional medium, mTeSR1, in comparison to the adherent feeder-free culture system in four different hiPSC lines simultaneously. These results show that this chemical treatment consistently produced robust effects in hiPSC maintenance across multiple cell lines using multiple conventional media.

      There are also several minor comments that should be addressed to improve readability, including some modifications to the wording to better reflect the results and conclusions.  

      In the revised manuscript, we have added and corrected the descriptions to improve readability, including some modifications to the wording to better reflect the results and conclusions. 

      Reviewer #3 (Public Review):  

      In the current manuscript, Matsuo-Takasaki et al. have demonstrated that the addition of PKCβ and WNT signaling pathway inhibitors to the suspension cultures of iPSCs suppresses spontaneous differentiation. These conditions are suitable for large-scale expansion of iPSCs. The authors have shown that they can perform single-cell cloning, direct cryopreservation, and iPSC derivation from PBMCs in these conditions. Moreover, the authors have performed a thorough characterization of iPSCs cultured in these conditions, including an assessment of undifferentiated stem cell markers and genetic stability. The authors have elegantly shown that iPSCs cultured in these conditions can be differentiated into derivatives of three germ layers. By differentiating iPSCs into dopaminergic neural progenitors, cardiomyocytes, and hepatocytes they have shown that differentiation is comparable to adherent cultures.

      This new method of expanding iPSCs will benefit the clinical applications of iPSCs.  

      Recently, multiple protocols have been optimized for culturing human pluripotent stem cells in suspension conditions and their expansion. Additionally, a variety of commercially available media for suspension cultures are also accessible. However, the authors have not adequately justified why their conditions are superior to previously published protocols (indicated in Table 1) and commercially available media. They have not conducted direct comparisons.  

      Thank you for this careful suggestion. In this revised manuscript, we have added results using a conventional medium, mTeSR1 (Stem Cell Technologies), which has been used for suspension culture in several studies. Compared to the adherent conditions using mTeSR1 medium, the suspension conditions with the same medium decreased the ratio of TRA1-60/SSEA4-positive cells and OCT4-positive cells and the expression levels of OCT4 and NANOG, and increased the expression levels of SOX17, T, and PAX6 in 4 different hiPSC lines simultaneously (Figure 4 - figure supplement 2). Importantly, the treatment with LY333531 and IWR-1-endo in mTeSR1 medium reversed the decreased expression of these undifferentiated markers. With these direct comparisons, we were able to justify why our conditions are superior to previously published protocols using commercially available media.

      Additionally, the authors have not adequately addressed the observed variability among iPSC lines. While they claim in the Materials and Methods section to have tested multiple pluripotent stem cell lines, they do not clarify in the Results section which line they used for specific experiments and the rationale behind their choices. There is a lack of comparison among the different cell lines. It would also be beneficial to include testing with human embryonic stem cell lines.  

      Thank you for this insightful suggestion. In this revised manuscript, we have added results on 5 different hiPSC lines examined at the same time (Figure 3C - E). Unfortunately, it is difficult for us to use human embryonic stem cell lines for this study due to ethical restrictions under Japanese governmental regulations. The treatment with LY333531 and IWR-1-endo increased the expression of self-renewal marker genes/proteins and decreased the expression levels of SOX17, T, and PAX6 in these hiPSC lines in general. These results indicate that these chemical treatments in suspension culture are robust in general while addressing the observed variability among iPSC lines.

      Additionally, there is a lack of information regarding the specific role of the two small molecules in these conditions.  

      In this revised manuscript, we have added data and discussion regarding the specific role of the two small molecules in these conditions in the Results and Discussion sections. Regarding the WNT signaling inhibitor, we hypothesized that adding Wnt signaling inhibitors may inhibit the spontaneous differentiation of hiPSCs into mesendoderm, because exogenous Wnt signaling induces the differentiation of human pluripotent stem cells into mesendoderm lineages (Nakanishi et al, 2009; Sumi et al, 2008; Tran et al, 2009; Vijayaragavan et al, 2009; Woll et al, 2008). Also, endogenous expression and activation of Wnt signaling in pluripotent stem cells are involved in the regulation of mesendoderm differentiation potentials (Dziedzicka et al, 2021). Regarding the PKC inhibitor, "To identify molecules with inhibitory activity on neuroectodermal differentiation, hiPSCs were treated with candidate molecules in suspension conditions. We selected these candidate molecules based on previous studies related to signaling pathways or epigenetic regulations in neuroectodermal development (reviewed in (Giacoman-Lozano et al, 2022; Imaizumi & Okano, 2021; Sasai et al, 2021; Stern, 2024)) or in pluripotency safeguards (reviewed in (Hackett & Surani, 2014; Li & Belmonte, 2017; Takahashi & Yamanaka, 2016; Yagi et al, 2017))."

      We also found that the expression of naïve pluripotency markers, KLF2, KLF4, KLF5, and DPPA3, were up-regulated in the suspension conditions treated with LY333531 and IWR-1-endo while the expression of OCT4 and NANOG was at the same levels (Figure 5—figure supplement 2). Combined with RT-qPCR analysis data on 5 different hiPSC lines (Figure 3E), these results suggest that IWRLY conditions may drive hiPSCs in suspension conditions to shift toward naïve pluripotent states.

      The authors have not attempted to elucidate the underlying mechanism other than RNA expression analysis.  

      Regarding the underlying mechanisms, we have added results and discussion in the revised manuscript. For Wnt activation in human pluripotent stem cells, several studies have reported that some WNT agonists are expressed in undifferentiated human pluripotent stem cells (Dziedzicka et al., 2021; Jiang et al, 2013; Konze et al, 2014). In suspension culture, cell aggregation causes tight cell-cell interaction. The paracrine effect of WNT agonists within the cell aggregates may strongly affect neighboring cells and induce spontaneous differentiation into mesendodermal cells. Thus, we think that the inhibition of WNT signaling is effective in suppressing the spontaneous differentiation into mesendodermal lineages in suspension culture.

      For PKC beta activation in human pluripotent stem cells, we have shown by western blotting that phosphorylated PKC beta protein expression is up-regulated in suspension culture compared to adherent culture (Figure 3 - figure supplement 1). Treatment with the PKCβ inhibitor is effective in suppressing spontaneous differentiation into neuroectodermal lineages. For future perspectives, it would be interesting to examine (1) how and why PKCβ is activated (or phosphorylated), especially in suspension culture conditions, and (2) how and why PKCβ inhibition can suppress neuroectodermal differentiation. Conversely, it is also interesting to examine how and why PKCβ activation is related to neuroectodermal differentiation.

      For these reasons some aspects of the manuscript need to be extended:  

      (1) It is crucial for the authors to specify the culture media used for suspension cultures. In the Materials and Methods section, the authors mentioned that cells in suspension were cultured in either StemFit AK02N medium, StemFit AK03N (Cat# AK03N, Ajinomoto, Co., Ltd., Tokyo, Japan), or StemScale PSC suspension medium (A4965001, Thermo Fisher Scientific, MA, USA). The authors should clarify in the text which medium was used for suspension cultures and whether they observed any differences among these media.

      Sorry for this confusion. Basically, in this study, we use StemFit AK02N medium (Figures 1-5, 7-9). For the bioreactor experiments (Figure 6), we use StemFit AK03N medium, which is free of human- and animal-derived components and is GMP grade. To confirm the effect of the IWRLY chemical treatment, we use StemScale suspension medium (Figure 4 - figure supplement 1) and mTeSR1 medium (Figure 4 - figure supplement 2 and Figure 8 - figure supplement 1). In the revised manuscript, we clarified which medium was used for suspension cultures in the Results and Materials and Methods sections.

      Although we have not directly compared these media in suspension culture (which is primarily outside the focus of this study), we have observed some differences in maintaining self-renewal characteristics, in preventing spontaneous differentiation (including tendencies to differentiate into specific lineages), and in stability or variation among different experimental runs in suspension culture conditions. Overcoming this heterogeneity caused by different media, the IWRLY chemical treatment stably maintains hiPSC self-renewal in general. We have added this issue in the Discussion section.

      (2) In the Materials and Methods section, the authors mentioned that they used multiple cell lines for this study. However, it is not clear in the text which cell lines were used for various experiments. Since there is considerable variation among iPSC lines, I suggest that the authors simultaneously compare 2 to 3 pluripotent stem cell lines for expansion, differentiation, etc.  

      Thank you for this careful suggestion. We have added more results on the simultaneous comparison using StemFit AK02N medium in 5 different hiPSC lines (Figure 3 C-E) and using mTeSR1 medium in 4 different hiPSC lines (Figure 4 - figure supplement 2). From both results, we have shown that the treatment of LY333531 and IWR-1-endo was beneficial in maintaining the self-renewal of hiPSCs while suppressing spontaneous differentiation.

      (3) Single-cell sorting can be confusing. Can iPSCs grown in suspensions be single-cell sorted?

      Additionally, what was the cloning efficiency? The cloning efficiency should be compared with adherent cultures.  

      Sorry for this confusion. With our method, iPSCs grown in IWRLY suspension conditions can be single-cell sorted. We have improved the clarity of the schematics (Figure 7A). Also, we added data on the cloning efficiency, compared with adherent cultures (Figure 7B). The cloning efficiency of adherent cultures was around 30%. While the cloning efficiency of suspension cultures without any chemical treatment was less than 10%, the IWR-1-endo treatment in the suspension cultures increased the efficiency to more than 20%. However, the treatment with LY333531 decreased the efficiency. These results indicate that the IWR-1-endo treatment is beneficial for single-cell cloning in suspension culture.

      (4) The authors have not addressed the naïve pluripotent state in their suspension cultures, even though PKC inhibition has been shown to drive cells toward this state. I suggest the authors measure the expression of a few naïve pluripotent state markers and compare them with adherent cultures.

      Thank you for this insightful comment. In the revised manuscript, we have added RT-qPCR data for 5 different hiPSC lines and specific gene expression from RNA-seq for naïve pluripotent state markers (Figure 3E and Figure 5 - figure supplement 2), respectively. Interestingly, the expression of KLF2, KLF4, KLF5, and DPPA3 is significantly up-regulated in IWRLY conditions. These results suggest that IWRLY suspension conditions drive hiPSCs toward a naïve pluripotent state.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors):  

      Overall, I feel that this study is very interesting and comprehensive, but has significant weaknesses in the bioprocessing aspects. More optimization data is required for the suspension culture to truly show that the differentiation they are observing is not an artifact of a non-optimized protocol.  

      Thank you for your thoughtful comments. Following your comments, we have performed several experiments to optimize the bioreactor conditions in the revised manuscript. We tested several cell seeding densities and several stirring speeds with or without WNT/PKCβ inhibitors (Figure 6—figure supplement 1). From these optimization experiments, we found that seeding densities of 1 - 2 x 10^5 cells/mL and stirring speeds of 50 - 150 rpm were applicable for the proliferation of these cells. Also, PKCβ and Wnt inhibitors suppressed spontaneous differentiation in bioreactor conditions regardless of stirring speed within this acceptable range. As for the impeller shape and reactor design, we used the commonly used ABLE bioreactor for the 30 mL scale and Eppendorf bioreactors for the 320 mL scale, which had been designed and used for human pluripotent stem cell culture in previous studies (Matsumoto et al., 2022 (doi: 10.3390/bioengineering9110613); Kropp et al., 2016 (doi: 10.5966/sctm.2015-0253)). We cited these previous studies in the Results section. We believe that these additional data and explanations are sufficient to address your concerns on the optimization of the bioreactor experiments.

      Reviewer #2 (Recommendations For The Authors):  

      The following comments should be addressed by the authors to improve the manuscript:  

      (1) Abstract: '...a scalable culture system that can precisely control the cell status for hiPSCs is not developed yet.' There were previous reports for a scalable iPSC culture system so I would suggest toning down/rephrasing this point: eg that improvement in a scalable iPSC culture system is needed.  

      Thank you for this careful suggestion. Following this suggestion, we have changed the sentence to "the improvement in a scalable culture system that can precisely control the cell status for hiPSCs is needed."

      (2) Line 71: please specify what media was used as a 'conventional medium' for suspension culture, was it Stemscale?  

      As suggested, we specified the media as StemFit AK02N used for this experiment. 

      (3) Fig 1E: It's not easy to see gating in the FACS plots as the threshold line is very faint, please fix this issue.  

      As suggested, we used thicker lines for the gating in the FACS plots (Figure 1E).

      (4) Fig 1G-J, Fig 2D-H: The RNAseq figures appeared pixelated and the resolution of these figures should be improved. The x-axis label for Fig 1H is missing.  

      We have improved these figures in their resolution and clarity. Also, we have added the x-axis label as "enrichment distribution" for gene set enrichment analysis (GSEA) in Figure 1H, Figure 5F, and Figure 5—figure supplement 1B.

      (5) Line 103-107: 'Since Wnt signaling induces the differentiation of human pluripotent stem cells into mesendoderm lineages, and is endogenously involved in the regulation of mesendoderm differentiation of pluripotent stem cells.....'. The two points seem the same and should be clarified.  

      Sorry for this unclear description. We have changed this description to "Exogenous Wnt signaling induces the differentiation of human pluripotent stem cells into mesendoderm lineages (Nakanishi et al, 2009; Sumi et al, 2008; Tran et al, 2009; Vijayaragavan et al, 2009; Woll et al, 2008). Also, endogenous expression and activation of WNT signaling in pluripotent stem cells are involved in the regulation of mesendoderm differentiation potentials (Dziedzicka et al, 2021; Jiang et al, 2013)." With this description, we hope that the difference between the two points is clear.

      (6) Line 113: 'In samples treated with inhibitors' should be 'In samples treated with Wnt inhibitors'.  

      Thank you for this careful suggestion. We have corrected this. 

      (7) Line 115: '....there was no reduction in PAX6 expression.' That's not entirely correct, there was a reduction in PAX6 in IWR-1 endo treatment compared to control suspension culture (is this significant?), but not consistently for IWP-2 treatment. Please rephrase to more accurately describe the results.  

      Sorry for this inaccurate description. As recommended, we have corrected this phrase to "there was only a small reduction in PAX6 expression in the IWR-1-endo-treated condition and no reduction in the IWP-2-treated condition."

      (8) It's critical to show that the effect of the suspension culture system developed here can maintain an undifferentiated state for multiple hiPSC lines. I think the author did test this in multiple cell lines, but the results are scattered and not easy to extract. I would recommend adding info for the hiPSC line used for the results in the legend, eg WTC11 line was used for Figure 3, 201B7 line was used for Figure 2. I would suggest compiling a figure that confirms the developed suspension system (IWR-1 +LY) can support the maintenance of multiple hiPSC lines.  

      Thank you for this insightful suggestion. We have added data on hiPSC maintenance across 5 hiPSC lines cultured simultaneously in suspension using StemFit AK02N medium (Figure 3C - E) and across 4 hiPSC lines cultured simultaneously in suspension using mTeSR1 medium (Figure 4—figure supplement 2). In both media, treatment with LY333531 and IWR-1-endo reversed the decreased expression of undifferentiated-state markers and suppressed the increased expression of differentiation markers in suspension culture conditions. These results show that the chemical treatment produced a consistent, robust effect on hiPSC maintenance across multiple cell lines.

      (9) Line 166: Please use the correct gene nomenclature format for a human gene (italicised uppercase) throughout the manuscript. Also, list the full gene name rather than PAX2,3,5.  

      Sorry for the incorrect gene nomenclature. We have corrected the gene names throughout the manuscript.

      (10) Please improve the resolution for Figure 4D.  

      We have provided clearer images of Figure 4D.

      (11) In the first part of the study, the control condition was referred to as 'suspension culture' with spontaneous differentiation, but in the later parts sometimes the term 'suspension culture' was used to describe the IWR1+LY condition (ie lines 271-272). I would suggest the authors carefully go through the manuscript to avoid misinterpretation on this issue.  

      Thank you for this careful suggestion. To avoid misinterpretation, we now use 'suspension culture' only for cultures in the conventional medium and 'LYIWR suspension culture' for cultures in medium supplemented with LY333531 and IWR-1-endo throughout the manuscript.

      (12) Figure 5: It is impressive to demonstrate that the IWR1+LY suspension culture enables large-scale expansion of a clinical-grade hiPSC line using a bioreactor, yielding 300 vials/passage. Can the author add some information regarding cell yield using a conventional adherent culture system in this cell line? This will provide a comparison of the performance of the IWR1+LY suspension culture system to the conventional method.  

      Thank you for this valuable suggestion. We have provided information regarding cell yield using a conventional adherent culture system for this cell line in the Results as follows: "Since the population doubling time (PDT) of this hiPSC line in adherent culture conditions was 21.8 - 32.9 hours at the time of its production (https://www.cira-foundation.or.jp/e/assets/file/provision-of-ips-cells/QHJI14s04_en.pdf), the proliferation rate in this large-scale suspension culture is comparable to adherent culture conditions."
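      For readers unfamiliar with the metric, the population doubling time is conventionally computed from cell counts at the start and end of a culture interval, assuming exponential growth (a standard formula, not one specific to this manuscript):

      $$\mathrm{PDT} = \frac{t \cdot \ln 2}{\ln\left(N_t / N_0\right)}$$

      where $N_0$ and $N_t$ are the cell counts at the beginning and end of an interval of length $t$. For example, a culture that expands 4-fold over 48 hours has a PDT of $48 \cdot \ln 2 / \ln 4 = 24$ hours.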

      (13) Line 273: For testing the feasibility of using IWR1+LY media to support the freeze and thaw process, the author described the cell number and TRA160+/OCT4+ cell %. How is this compared to conventional media (eg E8)? It would be nice to see a head-to-head comparison with conventional media, quantification of cell count or survival would be helpful to determine this.  

      For this issue, we attempted a direct freeze and thaw process using conventional media, StemFit AK02N for the 201B7 line (Figure 8) or mTeSR1 for 4 different hiPSC lines (Figure 8—figure supplement 1), with or without IWR1+LY. However, since hiPSCs cultured in suspension conditions without IWR1+LY quickly lost their self-renewal ability, these frozen cells could not be recovered or counted under those conditions. Our results indicate that the addition of IWR1+LY during the thawing process supports successful recovery in suspension conditions.

      (14) More details of the passaging method should be added in the method section. Do you do cell count following accutase dissociation and replate a defined density (eg 1x10^5/ml)?  

      Yes. We counted the cells at every passage in suspension culture conditions. We have added more detail to the Materials and Methods as below.

      "The dissociated cells were counted with an automatic cell counter (Model R1, Olympus) with Trypan Blue staining to detect live/dead cells. The cell-containing medium was spun down at 200 rpm for 3 minutes, and the supernatant was aspirated. The cell pellet was re-suspended with a new culture medium at an appropriate cell concentration and used for the next suspension culture."

      (15) The IWR1+LY suspension culture system requires passage every 3-5 days. Is there still spontaneous differentiation if the hiPSC aggregate grows too big?  

      Thank you for this insightful question.

      Yes. As previous studies have shown, the size of hiPSC aggregates is critical for maintaining self-renewal in our method. Stirring speed is one key factor in keeping aggregates at the proper size in suspension culture, and the culture period between passages is another, since it prevents aggregates from growing beyond that size. Thus, we generally keep the stirring speed at 90 rpm (135 rpm for bioreactor conditions) and passage every 3 - 5 days in suspension culture conditions.

      (16) Several previous studies have described the development of hiPSC suspension culture systems using hydrogel encapsulation to provide biophysical modulation (reviewed in PMID: 32117992). In comparison, it seems that the IWR1+LY suspension system described here does not require ECM addition, which further simplifies the culture system for iPSCs. It would be good to add more discussion of this topic in the manuscript, such as the potential role of E-cadherin in mediating this effect (as RNAseq results indicated that CDH1 was upregulated in the IWR1+LY condition).

      Thank you for this valuable suggestion. We have added more discussion on this topic in the Discussion section as below.

      "Thus, our findings show that suspension culture conditions with Wnt and PKCβ inhibitors (IWRLY suspension conditions) can precisely control cell conditions and are comparable to conventional adhesion cultures regarding cellular function and proliferation. Many previous 3D culture methods intended for mass expansion used hydrogel-based encapsulation or microcarrier-based methods to provide scaffolds and biophysical modulation (Chan et al, 2020). These methods are useful in that they enable mass culture while maintaining scaffold dependence. However, the need for special materials and equipment and the labor and cost involved are concerns toward industrial mass culture. On the other hand, our IWRLY suspension conditions do not require special materials such as hydrogels, microcarriers, or dialysis bags, and have the advantage that common bioreactors can be used. "

      "On the other hand, it is interesting to see whether and how the properties of hiPSCs cultured in IWRLY suspension culture conditions are altered from the adherent conditions. Our transcriptome results in comparison to adherent conditions show that gene expression associated with cell-to-cell attachment, including E-cadherin (CDH1), is more activated. This may be due to the status that these hiPSCs are more dependent on cell-to-cell adhesion where there is no exogenous cell-to-substrate attachment in the three-dimensional culture. Previous studies have shown that cell-to-cell adhesion by E-cadherin positively regulates the survival, proliferation, and self-renewal of human pluripotent stem cells (Aban et al, 2021; Li et al, 2012; Ohgushi et al, 2010). Furthermore, studies have shown that human pluripotent stem cells can be cultured using an artificial substrate consisting of recombinant E-cadherin protein alone without any ECM proteins (Nagaoka et al, 2010). Also, cell-to-cell adhesion through gap junctions regulates the survival and proliferation of human pluripotent stem cells (Wong et al, 2006; Wong et al, 2004). These findings raise the possibility that the cell-to-cell adhesion, such as E-cadherin and gap junctions, are compensatory activated and support hiPSC self-renewal in situations where there are no exogenous ECM components and its downstream integrin and focal adhesion signals are not forcedly activated in suspension culture conditions. It will be interesting to elucidate these molecular mechanisms related to E-cadherin in the hiPSC survival and self-renewal in IWRLY suspension conditions in the future."

      Reviewer #3 (Recommendations For The Authors):  

      (1) I am a bit confused about the passage of adherent cultures. The authors claim that they used EDTA for passaging and plated cells at a density of 2500 cells/cm2. My understanding is that EDTA is typically used for clump passaging rather than single-cell passaging.  

      Sorry about this confusion. We routinely use an automatic cell counter (model R1, Olympus) that can accurately count cells even in small clumps. This is why we report cell numbers for the passaging of adherent hiPSCs.

      (2) Figure 2D- The authors have not directly compared IWR-1-endo with IWR-1-endo+Go6983 for the expression of T and SOX17, a simultaneous comparison would be an interesting data.  

      As recommended, we have added data directly comparing IWR-1-endo with IWR-1-endo+Go6983 for the expression of T and SOX17 in Figure 2D. The addition of IWR-1-endo alone decreased the expression of T and SOX17, but not PAX6, similar to the data in Figure 2C.

      (3) Oxygen levels play a crucial role in pluripotency maintenance. Could the authors please specify the oxygen levels used for culturing cells in suspension?  

      Sorry for not mentioning the oxygen levels used in this study. We used atmospheric oxygen levels (i.e., 21% O2) in suspension culture conditions and have stated this in the Materials and Methods section.

      (4) Figure supplement 1 (G and H): In the images, it is difficult to determine whether the green (PAX6 and SOX17) overlaps with tdT tomato. For better visualization, I suggest that the authors provide separate images for the green and red colors, as well as an overlay.  

      Sorry for these unclear images. We have provided separate images for the green and red channels, as well as an overlay, in Figure 1—figure supplement 1G and H.

      (5) The authors have only compared quantitatively the expression of TRA-1-60 for most of the figures. I suggest that the authors quantitatively measure the expression of other markers of undifferentiated stem cells, such as NANOG, OCT4, SSEA4, TRA-1-81, etc.  

      We have added quantitative data on the expression of markers of undifferentiated hiPSCs, including NANOG, OCT4, SSEA4, and TRA-1-60, for 5 different hiPSC lines in Figure 3C-E.

      (6) In Figure 2D, the authors have tested various small molecules but the rationale behind testing those molecules is missing in the text.  

      These molecules were chosen because they putatively affect neuroectodermal induction from the pluripotent state.

      We have added the rationale with appropriate references in the Results section as below.

      "We have chosen these candidate molecules based on previous studies related to signaling pathways or epigenetic regulations in neuroectodermal development (reviewed in (Giacoman-Lozano et al, 2022; Imaizumi & Okano, 2021; Sasai et al, 2021; Stern, 2024) ) or in pluripotency safeguards (reviewed in (Hackett & Surani, 2014; Li & Belmonte, 2017; Takahashi & Yamanaka, 2016; Yagi et al, 2017)) (Figure 2A; listed in Supplementary Table 1). "

      (7) In the beginning authors used Go6983 but later they switched to LY333531, the reasoning behind the switch is not explained well.  

      To explain the reasons for switching from Go6983 to LY333531 more clearly, we reorganized the order of the results and figures. In short, we found that the suppression of PAX6 expression in hiPSCs cultured in suspension was observed with many PKC inhibitors, all of which possessed PKCβ-inhibitory activity (Figure 2—figure supplement 2B-D). Also, elevated expression of PKCβ in suspension-cultured hiPSCs could contribute to the spontaneous differentiation (Figure 3—figure supplement 1A-C). To further explore the possibility that inhibition of PKCβ is critical for the maintenance of hiPSC self-renewal in suspension culture, we evaluated the effect of LY333531, a PKCβ-specific inhibitor. The maintenance of suspension-cultured hiPSCs was specifically facilitated by the combination of PKCβ and Wnt signaling inhibition (Figure 3A and B; Figure 2—figure supplement 1). Last, we performed long-term culture for 10 passages in suspension conditions and compared hiPSC growth in the presence of LY333531 or Go6983. LY333531 was superior in terms of proliferation rate and maintenance of OCT4 protein expression in long-term culture (Figure 4). Thus, we used IWR-1-endo and LY333531 for the rest of this study.

      (8) I suggest the authors measure cell death after the treatment with LY+IWR-1-endo.  

      Thank you for this valuable suggestion. We measured cell death after treatment with LY+IWR-1-endo and found that the chemical combination had little or no effect on cell death. We have added the data in Figure 3—figure supplement 2 and the following description in the Results section: "We also examined whether the combination of PKCβ and Wnt signaling inhibition affects cell survival in suspension conditions. In this experiment, we used another PKC inhibitor, Staurosporine (Omura et al, 1977), which has a strong cytotoxic effect, as a positive control for cell death in suspension conditions. The addition of IWR-1-endo and LY333531 for 10 days had no effect on apoptosis, whereas the addition of Staurosporine for 2 hours induced Annexin-V-positive apoptotic cells (Figure 3—figure supplement 2). These results indicate that the combination of PKCβ and Wnt signaling inhibition has little or no effect on cell survival in suspension conditions."

      (9) The authors have performed reprogramming using episomal vectors and using Sendai viruses. In both the protocols authors have added small molecules at different time points, for episomal vector protocol at day 3 and Sendai virus protocol at day 23. Why is this different?  

      Thank you for this insightful question. The difference reflects the expected duration of reprogramming factor expression from the two vector systems. Expression of the reprogramming factors from these vectors should suppress spontaneous differentiation in reprogramming cells, and expression from Sendai viral vectors should last longer than from episomal plasmid vectors. We therefore added these chemical inhibitors from the early phase of reprogramming for the episomal plasmid vector conditions and from the late phase for the Sendai viral vector conditions. In future work, we may need to further optimize the timing of adding these molecules.

      (10) The protocol for three germ layer differentiation using a specific differentiation medium requires further elaboration. For instance, the authors mentioned that suspension cultures were transferred to differentiation media but did not emphasize the cell number and culture conditions before moving the cultures to the differentiation media.  

      Sorry for this unclear description. We have added an explanation of the cell number and culture conditions used before transferring the cultures to the differentiation media in the Materials and Methods section as below.

      "As in the maintenance conditions, 4 × 105 hiPSC were seeded in one well of a low-attachment 6-well plate with 4 mL of StemFit AK02N medium supplemented with 10 µM Y-27632. This plate was placed onto the plate shaker in the CO2 incubator. Next day, the medium was changed to the germ layer specific differentiation medium."

    1. Reviewer #3 (Public review):

      Summary:

      In this paper, the authors measured neural activity (using MEG) and eye gaze while individuals listened to speech from either one or two speakers, which sometimes contained semantic incongruencies.

      The stated aim is to replicate two previous findings by this group: (1) that there is "ocular speech tracking" (that eye movements track the audio of the speech), and (2) that individual differences in the neural response to tones that are predictable vs. not predictable in their pitch are linked to the neural response to speech. In addition, here they try to link the above two effects to each other, and to link "attention, prediction, and active sensing".

      Strengths:

      This is an ambitious project that tackles an important issue and combines different sources of data (neural data, eye movements, individual differences in another task) in order to obtain a comprehensive "model" of the involvement of eye movements in sensory processing.

      The authors use many adequate methods and sophisticated data-analysis tools (including MEG source analysis and multivariate statistical models) in order to achieve this.

      Weaknesses:

      Although I sympathize with the goal of the paper and agree that this is an interesting and important theoretical avenue to pursue, I am unfortunately not convinced by the results and find that many of the claims are very weakly substantiated in the actual data.

      Since most of the analyses presented here are derivations of statistical models and very little actual data is presented, I found it very difficult to assess the reliability and validity of the results, as they currently stand. I would be happy to see a thoroughly revised version, where much more of the data is presented, as well as control analyses and rigorous and well-documented statistical testing (including addressing multiple comparisons).

      These are the main points of concern that I have regarding the paper, in its current format.

      (1) Prediction tendencies - assessed by listening to sequences of rhythmic tones, where the pitch was either "predictable" (i.e., followed a fixed pattern, with 25% repetition) or "unpredictable" (no particular order to the sounds). This is a very specific type of prediction, whereas prediction is a general term that can operate along many different dimensions. Why was this specific design selected? Is there theoretical reason to believe that this type of prediction is also relevant to "semantic" predictions or other predictive aspects of speech processing?

      (2) On the same point - I was disappointed that the results of "prediction tendencies" were not reported in full, but only used later on to assess correlations with other metrics. Even though this is a "replication" of previous work, one would like to fully understand the results from this independent study. On that note, I would also appreciate a more detailed explanation of the method used to derive the "prediction tendency" metric (e.g., what portion of the MEG signal is used? Why use a pre-stimulus and not a post-stimulus time window? How is the response affected by the 3 Hz steady-state response that it is riding on? How are signals integrated across channels? Can we get a sense of what this "tendency" looks like in the actual neural signal, rather than just a single number derived per participant (an illustration is provided in Figure 1, but it would be nice to see the actual data)? How is this measure verified statistically? What is its distribution across the sample? Ideally, we would want enough information for others to be able to replicate this finding).

      (3) Semantic violations - half the nouns ending sentences were replaced to create incongruent endings. Can you provide more detail about this - e.g., how were the words selected? How were the recordings matched (e.g., could they be detected due to audio editing?)? What are the "lexically identical controls" that are mentioned? Also, is there any behavioral data indicating how this affected listeners? Having so many incongruent sentences might be annoying/change the nature of listening. Were participants told in advance about these?

      (4) TRF in multi-speaker condition: was a univariate or multivariate model used? Since the single-speaker condition only contains one speech stimulus - can we know if univariate and multivariate models are directly comparable (in terms of variance explained)? Was any comparison to permutations done for this analysis to assess noise/chance levels?

      (5) TRF analysis at the word level: from my experience, 2-second segments are insufficient for deriving meaningful TRFs (see, for example, the recent work by Mesik & Wojtczak). Can you please give further details about how the analysis of the response to semantic violations was conducted? What was the model trained on (the full speech or just the 2-second-long segments)? Is there a particular advantage to TRFs here, relative - say - to ERPs (one would expect a relatively nice N400 response, no)? In general, it would be nice to see the TRF results on their own (and not just the modulation effects).

      (6) Another related point that I did not quite understand - is the dependent measure used for the regression model "neural speech envelope tracking" the r-value derived just from the 2-second-long epochs, or from the entire speech stimulus? The text mentions the "effect of neural speech tracking" - but it's not clear if this refers to the single-speaker vs. two-speaker conditions or to the prediction manipulation. Or is it different in the different analyses? Please spell out exactly what metric was used in each analysis.

    1. Woz and I started Apple in my parents’ garage when I was 20. We worked hard, and in 10 years Apple had grown from just the two of us in a garage into a $2 billion company with over 4,000 employees.

      It's crazy that they started in a garage and it's now one of the biggest companies ever.

    1. I've picked up about 20 of the typewriters in my collection from ShopGoodwill.

      Only two were impeccably/properly packaged and shipped and one of these was a special machine that I emailed them after purchase with written details and links to videos about how to pack and ship it just to be on the safe side.

      Three were dreadful disasters: one was a 40 pound standard that was dropped and the frame bent drastically (it had almost no padding materials inside the box), two were shoved into cases (one upside down and the other right side up, but neither locked into their cases properly nor with their carriage locks engaged so they both bounced around for the entire trip) and put into boxes with almost no packing material. All three refunded portions of the price and/or all the shipping costs.

      Most of the remainder (all portables with cases) were packaged with a modicum of care (some packing material in the case and some outside the case with reasonable boxes) and showed up in reasonable condition.

      Two of the machines were local enough that I did a local pick up to ensure better care.

      Generally, it's a crapshoot, but this is also the reason why I don't spend more than $20 on any machine I get from them (except one reasonably rare German typewriter in the US and a Royal with a Vogue typeface that still came out at less than $100 because only one other person noticed its rarity in the photos).

      Only one of the machines was clean as a whistle and ready to type on day one. All the remainder required serious cleanings at a minimum. Two were missing internal pieces, two had repairable drawband issues, one had dramatically bad escapement issues, and one had a destroyed mainspring that I need to replace.

      Only one of the group had a platen with any life left in it. One had a completely unusable platen, but it was also relatively obvious in the photos. Most of the rest were hard, but usable.

      I live in the US and typically only bid on machines that are in the top 20% of their class cosmetically.

      I'll echo the thought of others that I wouldn't have a machine from them shipped directly to someone as a present unless I knew they were a tinkerer and had the mechanical ability, the facilities/tools, and desire to clean and service their own machine. Otherwise, I'd do that myself and ship it to them directly.


      reply to u/Tico_Typer at https://old.reddit.com/r/typewriters/comments/1g28v6z/i_am_curious_about_the_shopping_goodwill_websites/

    1. Why is this all happening? This is devastating. This is heartbreaking. You know, I've tuned in on the future many times, and I do see like, of course, there is going to be a lot more catastrophes, but on the other side of that, they always show me that the light is going to win, like the digital age is approaching. So it's really just how we kind of look at that, because, like, the first level is awakening to the systems, and the second level is anchoring in your own system. Faith is like our birthright. It's just that we've wired in fear so much we think that's our natural state of being. I like to welcome to the show Ella Ringrose. How you doing Ella? I'm super well. Thank you for having me. Thank you so much for coming on the show. I'm looking really looking forward to talking to you about your unique journey into where you are getting to this place in your life. So before we start talking about your more psychic and mystical abilities, what was your life like prior to you learning about your psych abilities, or at least coming out of the closet, if you will, with your psychic abilities. Well, I became aware that I was psychic quite young, young, but for most of my teenage hood, I really struggled with my sensitivity. So I guess I was hiding in a sensitive closet of always feeling like there was something deeply wrong with me, and I really struggled to fit in in school. I was failing everything in school as well. I was diagnosed with dyslexia and dyspraxia, and so sitting in class, I couldn't retain information. It was like my mind would shut off. And I always found myself being extremely sensitive to other people, other people's emotions, you know, people who were quite strong. I was very sensitive to a lot of stuff, so I grew up very much masking myself and and who I really was to fit in. But it got to a point where I just felt like I was gonna crack like, you know, when you have like, like, a lid over a boiling water and it just starts bubbling over. It just got to this point where I just couldn't continue pretending to be just like a normal person. And so when I was 17 years old, I was sitting in the back of math class, and I heard this very strong voice. Now I know it's the voice of Spirit, telling me to drop out of school. And I was in the back of math class, and I remember just making that decision in that moment. It was like every part of my body, every cell knew that that was going to be my last day. And so I went home and I told my mom, and they were not obviously happy about it, but I knew that this was what I had to do. And so shortly after that, my brother was on his own self development journey, and he bought hundreds of self development books and spiritual books and filled our bookshelf in our living room up. And so one day, he handed me the specific book called feel the fear and do it anyway. Before I remember that book. Yeah, I was in college when I read that that, book. Yeah, it was before. Then I was just depressed and I was so super anxious. So when I read that book, my 17 year old mind was like, fear isn't real, like, why has no one told me this? Like, it infatuated me. And so I'd been wanting to do YouTube since I was 12 years old. And so I ran home from reading that book on the train, and I started my YouTube channel, even though I was petrified. What year was that? What year was that? I don't know. I'm 25 now, so it was nearly eight years ago. Yeah. So we're looking at oh gosh, 2012 early on. It wasn't when YouTube wasn't popping just yet. 
It wasn't Oh, Mr. Beast. Mr. Beast wasn't around yet. No, not at all. He probably was, but he wasn't known. But I've been watching YouTube, because the only thing that kept me going when I would go home from school and cry every day was YouTube. It was the only thing that made me feel I could relate to other people who were on the other side of the screen showing things in their lives. Because I wanted that normality, and so I found that book, and I just became infatuated, and I just went around down a rabbit hole, and was studying and studying and reading and learning, and one day, our family, we lost our home overnight, like we were told that we had to leave. So I couldn't bring anything, I couldn't bring my clothes, I couldn't bring my furniture, because it's a long story, but I had to leave everything overnight because there was a mold infestation as well. So all my products and things were destroyed. We were all quite sick, and so I flew to Canada, and that is when the spiritual journey really started accelerating. It was almost as if angels and guides and spirit were coming to me, and I couldn't ignore the guidance that was moving through and the guidance they were showing me. It all started with me when I was walking into a bookstore, and this book was a book by Gabby Bernstein. It was called Super attractor, but it had my face on the cover. And at this time, I was still somewhat of an atheist. I was very into like energy or emotions and mindset, but I was still very closed off to that realm. And this book had my face on it. And. I remember just staring at it, looking around like, is anyone seeing what I'm seeing? What is going on? That was my first kind of like experience where I was physically seeing things with my eyes. And I went home and read that book, and it was all about angels. And then within the next few days, the voices just came in. The connection just clicked. It was like reading that book overnight. My body just knew that this was real and I recognized it. It was as if my soul was remembering a part of itself that was ready to be activated. And that was kind of the beginning of my, my spiritual journey. So when you first started to feel these psychic the voice, I hate the voices, the voice, the things coming through, I always like asking this, did you think you were losing your mind? Did you? Because that's a normal normal thing is like, Hey, I hear voices. That's when they used to send people to the loony bin with that stuff in the in the padded sense. So I always ask channelers, and I always ask psychics this, because it's the first question I would ask if I heard a booming voice in my head, and yeah, and it did with was it just a voice, or was there an energy or a feeling with the voice that calmed it down, which I hear that happens as well? Yeah, to answer your question, no, it was actually, I mean, of course, later in my spiritual journey, I did start to think I was losing it like the more I started diving deep, of course. But when I did receive that guidance, it was actually a moment I had never felt the amount of peace that I had, because I finally didn't feel alone. I was like, there is more here than meets the eye that I was craving and seeking this whole time I was on earth, you know. So it felt very peaceful. And how my gifts work is I don't see them physically with my eye. Although I did see the Gabby book, I see it through my third eye. 
So like, it's like a, I see, I call it like a projector, like, you know, like a movie projector screen, like, puts it out into the wall. It's as if my third eye can can show me it in the physical room. So I was being able to see it through my third eye, but not my physical eyes, if that makes sense. Of course, yeah, I was scared of angels at night time when I was in bed, and I was like, Oh, my God, are there like, these beings around my bed, on all of that. But no, it didn't. It wasn't scary to me. Like, cellularly, I feel like it was my soul remembering as I dive deeper. It was just an awareness of like, oh no. This has been a part of my path for many lifetimes. You know? It just felt natural. It felt normal. Yeah. It was like you said, a remembering, because if you were an atheist, then past lifetimes was probably not a thing that you really thought about, or even thought was real when you decided to come out of the spiritual closet start your YouTube channel. I'm assuming your YouTube channel was in this this space at that time, even when you started talking about so you're talking about this stuff in public eight years ago, which you know, to be fair, eight years ago, the consciousness of the planet wasn't near where it is today. It wasn't as open. There weren't these kind of conversations happening freely as many as they are now, what did the people around you say, your friends, your family, and how did you deal with what they came at you with, because I have to imagine, it wasn't all Kumbaya. They were worried for sure. Yeah, concerned. I have a lot of joy. And from from my perspective, it was exciting me so much, I just wanted to share it, you know. So in my head, it was like, Oh, this is literally transforming my life. This is incredible. Like, this giddiness in me was like, let me share all of this. So I was, like, spewing this online, making videos every day. But in regards to like, family and friends at the time, I had actually kind of cleared all my friendships, so I was very much kind of in my own journey. I didn't have a lot of friends around me at the time. But in regards to family, it was very much like a concern. It was kind of like, I don't know what Ella's doing. Is she getting into a cult, you know? So that was, that was a strong thing, yeah, and especially when I was diving deep and healing a lot, you know, as well, was concern of like, do I need to go to a psych ward? There was definitely some parts of that. But at the same time, my family aren't like a normal family either, in the sense that we've always been very like loving and open and expressive with our words and like from a very young age, my mom and my brother and I, living together, we were all so into mindset and self development. So we were all quite like, expanded in our minds and open to possibilities and ideas, and as the path moved on. It's kind of comical, because my mother is extremely psychic, and my stepmom was always believing in this stuff. She had a million Angel books in her home. So there was actually a lot of people surrounding me that were in that realm that I wasn't aware of until I was able to see it to myself. You know, Now was there a moment where you used your gifts to do a reading or help somebody that not only changed their life but surprised the heck out of you. Oh my gosh. 
I feel like that's every reading, Alex, every reading, Your first your first one, the very first time you did it, like I imagine the first time you did a reading for somebody, you were like, Oh man, that worked kind of thing. I actually remember it. I remember it. I was living in the Canary Islands at the time, and my psychic gifts started accentuating very strongly, and I heard spirit being like, just go give it to strangers on the beach. We are in a time of great change, and humanity is awakening more and more every day. Mankind needs insights on what is happening to all of us. That is why I'm inviting you to Wisdom from Beyond a six day virtual summit designed to awaken your soul. Experience over nine hours of soul expanding channeling sessions led by six of the world's most esteemed channelers, connect with the divine, receive sacred insights and transform your journey by asking questions directly to the channelers themselves. This is more than just a summit. It is your gateway to understanding the profound shifts happening within and around all of us, plus, when you sign up, you receive exclusive bonus content to deepen your spiritual exploration, join us and step into the extraordinary. So I went up to someone, and I just said it. I was, I was literally just like, Can I can I do this? They were like, Sure. And I knew that they had lost their job. I knew that they were suffering and they were struggling. I felt their insecurity. I felt so many different things, and I was expressing it. And he was like, Who the hell are you? Like, this is weird, you know. So I was kind of like, oh, that validated it, that it's correct. And I just kept on going and doing it with other people and friends, and started to know a lot of stuff that, of course, I wouldn't have known myself until I tuned in. And that's when spirit was like, you're going to have to start offering readings. And so I was living in Lapland at the time, and that's when I started going full time giving readings. And I think I've done over 1000 now, and they've all been deeply transformational. But I always find that each reading I've done has given me more than than what I give them as well, because I'm learning so much about each person's soul, and I'm learning so much about giving ourselves permission to have joy, because whenever I tune into people's guides, it's nothing but unconditional love for that person sitting right in front of me, like their guides just want the best for them. They just want love for them. And seeing that like common thread that is played out in every single reading, it's like, oh, the meaning of life is actually very simple. It's very simple. And it's it's giving ourselves permission to experience that. So being in the space that you're in, and even being in the space that I'm in, there's criticisms that come towards you. You know, obviously, let's not even talk about the YouTube comments, but but in let's not, let's not go down that dark rabbit hole. But have you dealt with that kind of energy coming towards you about your gift. Because, again, this is it's much more accepting now than he was even a decade ago, and is becoming more and more accepted as shows like mine and others are kind of putting the word out for things and people's consciousness are raising. But how do you deal with that kind of negative energy that comes towards you? Because I have to believe that you have had it at one point or another in your journey. Yeah, yeah. 
I mean, what's quite interesting about that question is it doesn't really bother me for the reason that I dove so deep into heart, awakening a long time ago, and connecting to my heart, that I feel just genuinely compassion. Because I find when people think of this as kind of weird or not real, I have like, this sadness, feeling like, on some level, they're missing out. Because it's so joyfully infectious in my life that I kind of just see it as like, okay, it's just not their time yet, and it's very accepting. And also, from doing so many psychic readings, I really feel I have one foot in the physical and one foot, like, in the higher realm. And so I see everything from a higher perspective, always, rather than, like a grounded, like, reactive state of like, why is this happening to me? I always see it from like, a soul level of being like, okay, it's not their time. I see their perception. And because I can see through people's emotional bodies, their spiritual bodies, whenever I see this kind of criticism, I always see the reflection within themselves. So it just gives me a higher grace of compassion, not to say that I'm a human and I don't get triggered, but it's like something that I've just learned over time and and I think also just of the miracles that it's created my own life and seeing in my friend's life, my loved ones lives, like it's just kind of for me, like it's so real. It's like, it's my soul, it's, it's everything to me that I just, I don't mind because I just am like, well, it's, it's such a blessing that I appreciate it, regardless if someone else doesn't believe in that or think that's crazy. How do you balance living a human life with the amount of knowledge and connection you have to the other side? And this is a problem that I know near death experiencers have, and channelers have, and psychic mediums have, because they live a lot of times more time on the other side than they do in reality. So how do you build relationships? How do you you know, if you want to have a loving relationship, you know a romantic relationship. How does that work? How do you deal with other. People that might not be at the same place that you are, and you're like, Ah, why do I have to deal with this stuff, this lower energy stuff, when I know what's happening on the other side, I know where we're all going to be going, like that, knowledge has to weigh heavy on you, to be to balance that just normal living life day to day. I do. I think that it's kind of comical, because I've made a career out of it, so most of my life is surrounded by that type of energy anyway, but I understand where you're coming from, and it's been a journey, you know, like there was a few years where I was literally sitting in my apartment talking to angels more than humans, you know, and that that wasn't normal either. That's a problem. It was a problem. And at the time, I didn't see that, and I was connecting to angels. I was connecting to more on that side than literally anything, and I didn't have many relationships. And it took kind of like this moment of me surrendering literally on my knees and praying and being like, I allow you to take over, because I feel like Spirit is the one that moves through me and guides me. And so what started to happen was I just started being guided to the right places and the right people that I brought people into my life who were extremely grounded, who were extremely like, into their body, or into, like healthy eating, or like a specific way of living. 
And I found I've traveled all over the world for the past five years, living with multiple different people who reflect and get, like, have so much codes to offer. Just for example, like I was living in Costa Rica a couple months ago, and I was living with a beautiful like, sister of mine, and she is, like a primal, ancestral eater, and she's very grounded in her body. And like, living with her impacted my life so much that, like, I eat so primarily now and organically and like good, that it's almost like I do my psychic reading, and then once that's finished, I'm not thinking about spirit. I'm in my body. I'm in my life. I'm in my experience. But in regards to it being a challenge, because I can understand a lot of people listening who are just in a hometown and they feel like they're the only one who's kind of awake to that stuff, I really resonate with that pain, and I do understand that that is a very challenging and difficult thing, and it was something that I was tuning into before coming on here that I really wanted to like address, which is, I really believe that it is so vital, like essential is to have your soul tribe. It is to have people that literally inspire you and expand you and uplift you. Because I've been on the other side, where I've been around people where they didn't really understand my way of being. And truthfully, it feels like my soul is suffocating to some degree. And of course, there's a lesson, there's there's growth there. But I also find that it's really important that you find people that you're like are your tribe that can inspire you and influence you. And whenever I used to tune into that and call those people in I kept getting visions of like Earth grids all over the world, like people, like, even if you are alone in your hometown, you're connected to 1000s of other people who are on your frequency on Earth right now. So you're always connected. So what I started to do was, like, connect to that frequency of having support and having people. And it went from I remember like crying to my mom being like, I've literally no friends to like, I don't really want any more friends because I have too much, if I'm being brutally honest, because I've called in so many and it came from like really connecting and believing those people were out there and then going out to meet them, because I've been on that side where you feel like you just don't have anyone who understands you. And I do know how painful that can be, and I really want to honor people who may feel that or go through that. But what I've come to learn is it doesn't have to be that way. Of course, we learn stuff from people who aren't like that, but you can find so many people who are on your wavelength, who are on your path, that are here to guide you and to expand you in a friendship, in a relationship, in whatever way that wants to come Yeah, we always joke around. Like, as you get older, you start running around when people come into your life and try to become friends after you get to a certain age, like we're all friends. Like, we're all friended up here. We're good, yeah, we don't need any I'm not like that, but I could understand, no, we're good. Thanks. I don't have the energy or time to build a new relationship. I have enough. Thank you. You're overflowing. We're overflowing with blessings. We're good. Thank you. It's very, very interesting. 
Now, one thing is, I want to, and I would love to hear what your spirit, your guides, are saying about this is that we're going through such a difficult time right now, these last four years, the decade so far, has been a journey, to say the least. It's the roughest decade I've ever been a part of. I have been on this earth a couple years longer than you, just a couple, and it seems like we are going through a major, major, not only shift in consciousness, but a shift in general, for so many people who are like, Oh, my God, the world's coming to an end. This is everything's burning, all this, all this negative stuff. Why, from your spirit guides point of view, why is this happening to us right now, and where are we going to be going over the next Well, this year we'll see where we we still got a heck of a year left over here, but the next decade or so, where are we? Where are we going? Why is this happening? Yeah, this is something I have really like argued with my guides and confused, because the human heart, the compassion is like, why is this all happening? This is devastating. This is heartbreaking. But what I've come to understand, and what my guides have shown me so many times, is that a lot of the darkness we see today has always existed, not to say on this entire time on Earth, but because there is such an influx of light and a frequency of people awakening, and so much information nowadays that people's consciousness is accelerating at such a rapid rate, we're just being revealed what was already there. And so I see it as like they always say to me, Ella, this is like a spiritual warfare of dark and light, but it's all essentially happening so that we can remember who we are. And whenever I would tune into this, it was, it was just a really hard, hard thing for me to tune into, because I am very conscious of my guides would show me a lot of things that were happening, happening in Hollywood and with the music industry, the film industry, things that like I logically didn't seek out like my guides show me all the time, things that are happening in the world that, like, are just horrific, and something that I just freaks me out. But they're always showing me like there is a density on this planet, because Earth is, like, one of the only, or if the only planet in the galaxy that has this ability for us to be eat the most, like, like animalistic, primal to Avatar consciousness. Because if you think of like a dog or like a cat, they can't, like, ASCEND their consciousness, they just are at that level. Whereas humans have the option of, like, going from such a density of pain or of trauma, of all these deepness, all the way to like, higher vibrational frequencies, like we can become whoever we want. So with the state of the world, it's kind of like showing me that it's all just being lifted because there are more people on Earth right now than ever that are awakening, that are holding the light, because a long time ago, there was a darkness that took over and tried to place these fear paradigms on the earth that we have all been controlled and constricted to live and embody every day. And so we're waking up to expand that and to remember our light. So the more that we see these terms play out, unfortunately, that is a reflection of how much we're then remembering who we are, because we're being asked to look within ourselves and to remember the light, which is kind of the purpose of this earth. 
And you know, I've tuned in on the future many times, and I do see like, of course, there is going to be a lot more catastrophes, but on the other side of that, they always show me that the light is going to win. I have been shown like, I don't want to get too into it, because they always say, like, it's not for most people to know, but there are going to be earthly disasters. I've been shown that a lot, but the reasoning for that is of a higher level again, and it's something that just doing my work as a psychic and seeing the higher level in everything. It allows me to hold that higher vision, again, of understanding, because I see it as like on a human level, we're very reactive, we're emotional, we feel, but on a higher level, the soul is like just breath. It's just like a heartbeat. It's so neutral about everything. So when we can hold a higher perspective and understand that this is all happening for a higher reason, for people to remember of who we are and to take back our power. That's kind of the higher scheme of it. So like they're showing me like a pyramid right now. It's like remembering the top of the pyramid the higher mind and like understanding and holding the light of that, because we come here to remember who we are, and the more people wake up to that, the more it's going to shatter those fear paradigms that we have been under illusion for for centuries. So how can we maintain spiritual balance during this insane time? Because it's one thing to go up to Tibetan, to Tibetan monastery up in the Himalayas. You know, we just eat pure food all day and sit down and meditate for eight or nine hours. Very easy to become, not very easy, but easier to have spiritual enlightenment in that scenario. But the rest of us don't live in that world. Some of us are parents. So I always said to yogis, I'm like, where is there a yogi that had kids? And there's only one that I found, but it's very difficult to have enlightenment when you have to deal with real world events, just normal life, but then now dealing with this turmoil and the wars and the economic stuff and the political stuff and the and everything that's happening to us, how can you maintain spiritual balance in the middle of that kind of hurricane? Yeah, and what's interesting is I had a dream about this a while ago, that spirit answered that question, because I was very much battling between the two worlds, and they showed me that everything that is happening, I think this understanding that, like spirituality is something outside of ourselves, or it is like something we need to transcend and move into a different realm, like the earth experience is the spiritual experience, because everything is spiritual matter. So I see everything in this world as the spiritual experience. And it went from me, you know, going and sitting in circle and ceremony and retreats and traveling all over the world to these events and doing what you were saying, of, kind of like moving up the scale to the mountains and to these spaces of enlightenment, to come to this point where I am now. Of, I have no desire to do any of that, because it's not about. Me finding these height and spiritual experiences. It's getting dirty in the game of life and the reality of this. So I see everything as kind of like a spiritual experience. And that is what's like. We're working towards an understanding. 
So this paradigm that in order to be spiritual, we have to meditate and have crystals and pray and do all of these things, I really believe, is dramatically incorrect, because everything in this world is is just energy. Everything in this world is a spiritual experience and spiritual game. And I've had that discussion with a lot of my friends who are like coming back to life, back to the world, and seeing that that's the real game, and that's where it really stretches us and gives us that grit. So I don't see the two as separate anymore. Of course, I used to, but I see them as one of the same. So I kind of see it all as part of the game. I see this whole world is just like a game. If there is, you know, if Jesus was here today, or Buddha or Yogananda, or any of these great avatars, you know what I mean, if they were physically here in matter, don't be a smart butt. Okay, see, so if any of these avatars were here today, they would have YouTube channels, wouldn't they? I actually laugh about that so much. I'm like, Jesus was an influencer. Like Jesus was literally like, I was just my ultimate the ultimate influence, ultimate influencer. I was like, thinking this, like a few months ago, I was like, imagining him, just like, have a millions of followers on Instagram. Just like preaching and just like putting up the peace sign and being like, here with Mary Magdalene, like it's it's true. You know, they were all just influential. And I really believe that that awareness of you see, I think Jesus came here to remember, to reflect to us who, to remember who we are, not to praise him as a god or not, to see him as like, worshiping something outside of ourselves. It's the understanding that we are all part of the Prime Creator, and I think that's what we're really starting to understand. So everyone's starting to wake up to that sovereignty, that we are all one and we are all part of that. I mean, I went on like a Bob Marley kick. I love Bob Marley so much. My mom actually hitchhiked across Europe to see him, and I was so jealous. But one love, I literally just listened to that song every day. And I'm like, That is the message. You know, it's like a weaved within a soul. I always see it as this vision spirit shows me of like this green chord, or like a white chord that interconnects us with everything and everyone, like that, a piece of source is in with all within all of us, and we have the ability to connect to anyone and anything, no matter how far it is in the galaxy, because we are all just energy, and we are all connected. And I think that's the real awakening that we're coming here to learn. And I also, too, like Bob Marley, a lot, that concept of one love, and it just it's remarkable. I love to hear what your guides have to say about the shift that's happening between the old systems and the new systems you were speaking of Jesus, His teachings have been slightly, not often, slightly changed since his original just a little bit has been manipulated just a slight bit since he originally was preaching them. But you know that kind of truth of those original teachings, of all the great avatars and all the great masters, you're starting to see cracks in these institutions that were absolutely infallible. I mean, you come from an Irish background. I come from a Latin background, a Latino background, the Catholic Church. You could, oh, my, it was this omnipotent, powerful, just it was the Rock of Gibraltar, like it was unmovable. Never questioned today, not so much. 
And it seems that I'm using that as an example, as one of those systems that seems to you starting to see the cracks. People are going, No thank you, though, that's not what we really want, and it's happening in every world, from media, Hollywood and the music industry. Is a big shift in politics, there's a big shift in economics, there's a big shift in health. Is a big shift all that stuff. So what are their take on this old system, new system paradigm that we're going through. Yeah, and I love that you said it's a paradigm, because Spirit have shown me the old and new a million times. I've spoken about it in so many YouTube videos as well. And what they're kind of showing me at this point is like. They kind of use the analogy of like, that we have the information is the light. So if we are aware, that is the light. So I'll give the example of like, if we're in a dark, pitch black room and we hear these creepy noises, we're going to be freaked out. We're going to be scared. But if we turn on the light and we see where that noise is coming from, we feel a bit calmer knowing where it originates from. So when we have that awareness, and we have that understanding that in itself, is enough to really start to enhance like, what is happening, but what I've come to learn, and what my guides are starting to continue to tell me now, is, like, it's not about us waiting on the side and just like, waiting for these systems to change because, like, of course, we believe that they are going to change eventually, because we're all kind of waking up to that, but they're still very much concreted in their own way. So it's not about because I've had so many. People and Coles who are just waiting for, like, everyone to just wake up one day, and that's it, and it's just and my guides are like, Ella, that's just not the case. It's just not going to be that way. And they always show me a set of like, spiritual laws, which I can email you, by the way, that they channeled for me, and they were like, what they're really wanting to usher in is a paradigm that we can anchor and hold, whilst these systems are like simultaneously still existing, because it's not about us waiting and sitting on the sideline or, of course, we can fight and do whatever we want, but it's about us anchoring in our own systems. And that's what they keep showing me. So it's like living and breathing in the embodiment of your own systems, regardless if you're working in like a nine to five or you're in the midst of, like, the most like matrixy thing, and you're super awake to it. It's living in your own system. So I can email that to some of the laws that they've shown me, because what they're wanting to do, and they're even showing this now, is like, it's about us anchoring in the new systems, instead of because, like, the first level is awakening to the systems, and the second level is anchoring in your own system while simultaneously. And the more people that remember that, because it's sovereignty, the more collectively it's going to start to shift.

      Own systems sovereignty

    1. introverts will stop belittling themselves.

      This article angers me far beyond what it should. It is supposed to be a call to action for society to stop treating introverts like they're inferior, but as an introvert myself I feel belittled reading it, like I'm the victim and being an introvert is a negative trait, while according to the article it's supposed to be a positive one (oh look, I'm introverted and that makes me a GREAT leader). All it does is list the author's problems with being an introvert, but what she lists is barely a trait of being an introvert; it's a trait of being a coward with no self-confidence and a victim complex. Being an introvert or an extrovert is neither good nor bad, it's just what it is.

    1. Hello there, folks.

      Thanks once again for joining.

      Now that we've got a little bit of an understanding of what problem cloud is solving, let's actually go ahead and define it.

      So what we'll talk about is technology on tap, a common phrase that you might have heard about when talking about cloud.

      What is it and why would we say that?

      Then what we're actually going to do is walk through the NIST definition of cloud.

      So there are five key properties that the National Institute of Standards and Technology does use to determine whether or not something is cloud.

      So we'll walk through that.

      So we've got a good understanding of what cloud is and what cloud is not.

      So first things first, technology on tap.

      Why would we refer to cloud as technology on tap?

      Well, let's have a think about the taps we do know about.

      When you want access to water, if you're lucky enough to have access to a nice and easy supply of water, all you really need to do is turn on your tap and get access to as little or as much water as you want.

      You can turn that on and off as you require.

      Now, we know that that's easy for us.

      All we have to worry about is the tap and paying the bill for the amount of water that we consume.

      But what we don't really have to worry about is everything that goes in behind the scenes.

      So the treatment of the water to bring it up to drinking standards, the actual storage of that treated water, and then the transportation of that through the piping network to actually get to our tap.

      All of that is managed for us.

      We don't need to really worry about what happens behind the scenes.

      All we do is focus on that tap.

      We turn it on if we want more.

      We turn it off when we are finished.

      We only pay for what we consume.

      So you might be able to see where I'm going with this.

      This is exactly what we are talking about with cloud.

      With cloud, however, it's not water that we're getting access to, it is technology.

      So if we want access to technology, we use the cloud.

      We push some buttons, we click on an interface, we use whatever tool we require, and we get access to those servers, that storage, that database, whatever it might be that we require in the cloud.

      Now again, behind the scenes, we don't have to worry about the data centers that host all of this technology, all of these services that we want access to.

      We don't worry about the physical infrastructure, the hosting infrastructure, the storage, all the different bits and pieces that actually get that technology to us, we don't need to worry about.

      And how does it get to us?

      How is it available all across the globe?

      Well, we don't need to worry about that connectivity and delivery as well.

      All of this behind the scenes when we use cloud is managed for us.

      All we have to worry about is turning on or off services as we require.

      And this is why you can hear cloud being referred to as technology on tap, because it is very similar to the water utility service.

      Utility service is another name you might hear cloud being referred to by, because it's like water or electricity.

      Cloud is like these utility services where you don't have to worry about all the infrastructure behind the scenes.

      You just worry about the thing that you want access to.

      And really importantly, you only have to pay for what you use.

      You turn it on if you need it, you turn it off if you don't, you create things when you need them, delete them when you don't, and you only pay for those services when you have them, even though they are constantly available at your fingertips.

      Now, compare this to the scenario we walked through earlier.

      Traditionally, we would have to buy all of the infrastructure, have it sitting there idly, even if we weren't using it, we would still have had to pay for it, set it up, power it and keep it all running.

      So this is a high level of what we are talking about with cloud.

      Easy access to servers when you need them, turn them off when you don't, don't worry about all that infrastructure behind the scenes.

      But that's a high level definition.

      So let's now walk through what the NIST use as the key properties to define cloud.

      One of the first properties you can use to understand whether something is or is not cloud is understanding whether or not it provides you on demand self service access, where you can easily go ahead and get that technology without even having to talk to humans.

      So what do I really mean by that?

      Well, let's say you're a cloud administrator, you want to go ahead and access some resources in the cloud.

      Now, if you do want access to some services, some data, some storage, an application, whatever it might be, well, you're probably going to have some sort of admin interface that you can use, whether that's a command line tool or some sort of graphical user interface, and you can easily use that to turn on any of the services that you need: web applications, data, storage, compute and much, much more.

      And you don't have to go ahead, talk to another human, procure all of the infrastructure that runs behind the scenes.

      You use your tool, it is self service, it is on demand, create it when you want it, delete it when you don't.

      So that's on demand self service access and one of the key properties of the cloud.
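
      To make this concrete, here is a minimal sketch of what on demand self service looks like from a script. It uses a hypothetical CloudClient class rather than any real provider's SDK; the class, method names, sizes and region are purely illustrative.

```python
# A minimal sketch of on-demand self-service with a HYPOTHETICAL SDK.
# CloudClient, create_server and delete_server are illustrative names only,
# not a real provider's API.

class CloudClient:
    """Stand-in for a cloud provider's self-service interface."""

    def __init__(self):
        self.servers = {}

    def create_server(self, name, size="small", region="australiaeast"):
        # On a real platform this call provisions a machine in minutes,
        # with no human on the provider side involved in the request.
        self.servers[name] = {"size": size, "region": region, "state": "running"}
        return self.servers[name]

    def delete_server(self, name):
        # Deleting the resource stops the meter: you pay only while it exists.
        self.servers.pop(name, None)


cloud = CloudClient()
cloud.create_server("reports-vm", size="large")  # turn it on when you need it
cloud.delete_server("reports-vm")                # turn it off when you don't
```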

      Next, what I want to talk to you about is broad network access.

      Now, this is where we're just saying, if something is cloud, it should be easy for you to access through standard capabilities.

      So for example, if we are the cloud administrator, it's pretty common when you're working with technology to expect that you would have command line tools, web based tools and so on.

      But even when we're not talking about cloud administrators and we're actually talking about the end users, maybe for example, accessing storage, it should be easy for them to do so through standard tools as well, such as a desktop application, a web browser or something similar.

      Or maybe you've gone ahead and deployed a reporting solution in the cloud, like we spoke of in the previous lesson.

      Well, you would commonly expect for that sort of solution that maybe there's also a mobile application to go and access all of that reporting data.

      The key point here is that if you are using cloud, it is expected that all of the common standard sorts of accessibility options are available to you, public access, private access, desktop applications, mobile applications and so on.

      So if that's what cloud is and how we access it, where actually is it?

      That's a really important part of the definition of cloud.

      And that's where we're referring to resource pooling, this idea that you don't really know exactly where the cloud is that you are going to access.

      So let's say for example, you've got your Aussie Mart company.

      If they want to deploy their solution to be available across the globe, well, it should be pretty easy for them to actually go ahead and do that.

      Now, we don't know necessarily where that is.

      We can get access to it.

      We might say, I want my solution available in Australia East for example, or Europe or India or maybe central US for example.

      All of these refer to general locations where we want to deploy our services.

      When you use cloud, you are not going to go ahead and say, I want one server and I want it deployed to the data center at 123 data center street.

      Okay, you don't know the physical address exactly or at least you shouldn't really have to.

      All you need to know about is generally where you are going to go and deploy that.

      Now, you will also see that for most cloud providers, you've got that global access in terms of all the different locations you can deploy to.

      And really importantly, in terms of all of these pooled resources, understand that it's not just for you to use.

      There will be other customers all across the globe who are using that as well.

      So when you're using cloud, there are lots of resources.

      They might be in lots of different physical locations and lots of different physical infrastructure and in use by lots of different customers.

      And you don't really need to worry about that or know too much about it.

      Another really important property of the cloud is something referred to as rapid elasticity.

      Now elasticity is the idea that you can easily get access to more or less resources.

      And when you work with cloud, you're actually going to commonly hear this being referred to as scaling out and in rather than just scaling up and down.

      So what do I mean by that?

      Well, let's say we've got our users that need to access our Aussie Mart store.

      We might decide to use cloud to host our Aussie Mart web application.

      And perhaps that's hosted on a server and a database.

      Now, when that application gets really busy, for example, if we have lots of different users going to access it at the same time, we might want to scale out to meet demand.

      That is to say, rather than having one server that hosts our web application, we might actually have three.

      And if that demand for our application decreases, we might actually go ahead and decrease the underlying resources that power it as well.

      What we are talking about here is scaling in and out by adding or decreasing the number of resources that host our application.

      This is different from the traditional approach to scalability, where what we would normally do is just add CPU or add memory, for example.

      We would increase the size of one individual resource that was hosting our solution.

      So that's just elasticity at a high level and it's a really key property of cloud.

      Now, we'll just say here that if you are worried about how that actually works behind the scenes in terms of how you host that application across duplicate resources, how you provide connectivity to that, that's all outside the scope of this beginners course, but it's definitely covered in other content as well.

      So when you're using cloud, you get easy access to scale in and out and you should never feel like there are not enough resources to meet your demand.

      To you, it should just feel like if you want a hundred servers, for example, then you can easily get a hundred servers.
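
      As a rough illustration of scaling out and in (changing the number of identical servers) versus scaling up and down (resizing a single server), here is a small Python sketch; the request rates and per-instance capacity are invented numbers, not figures from any provider.

```python
import math

# Sketch of scaling OUT/IN (changing how many identical servers run) rather
# than scaling UP/DOWN (resizing a single server). All numbers are invented.

def desired_instance_count(requests_per_second, capacity_per_instance=100,
                           minimum=1, maximum=100):
    """Return how many identical web servers we would like running."""
    needed = math.ceil(requests_per_second / capacity_per_instance)
    return max(minimum, min(needed, maximum))


print(desired_instance_count(40))   # quiet period -> 1 server is enough
print(desired_instance_count(250))  # end-of-month rush -> scale out to 3
```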

      All right, now the last property of cloud that I want to talk to you about is that of measuring service.

      When we're talking about measuring service, what we're talking about is the idea that if you are using cloud to host your solutions, it should be really easy for you to go and say, I know what this is costing, I know where my resources are, how they are performing and whether there are any issues and I can control the types of resources and the configuration that I use that I'm going to deploy.

      So for example, it should be easy for you to say, how much is it going to cost me for five gigabytes of storage?

      What does my bill look like currently and what am I forecasted to be using over the remainder of the month?

      Or maybe you want to say that certain services should not be allowed to be deployed across all regions.

      Yes, cloud can be accessed across the globe, but maybe your organization only works in one part of a specific country and that's the only location you should be able to use.

      These are the standard notions of measuring and controlling service and it's really common to all of the cloud providers.
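
      To illustrate the measured-service idea, a couple of toy Python helpers are sketched below; the per-gigabyte price and the spend figures are made up for the example, and real providers expose this information through their billing and cost-management tools.

```python
# Toy helpers for the "measured service" idea: usage is metered, so questions
# like "what will 5 GB of storage cost?" become simple arithmetic.
# The price and spend figures below are made up for illustration only.

PRICE_PER_GB_MONTH = 0.02  # hypothetical storage price, dollars per GB per month

def storage_cost(gigabytes, months=1):
    """Cost of keeping a given amount of storage for a number of months."""
    return gigabytes * PRICE_PER_GB_MONTH * months

def forecast_month_end(spend_so_far, days_elapsed, days_in_month=30):
    """Naive linear forecast of the bill for the rest of the month."""
    daily_rate = spend_so_far / days_elapsed
    return daily_rate * days_in_month


print(storage_cost(5))                # 5 GB for one month
print(forecast_month_end(120.0, 10))  # spent $120 in 10 days -> forecast $360
```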

      All right, everybody.

      So now you've got an understanding of what cloud is and how you can define it.

      If you'd like to see more about this definition from the NIST, then be sure to check out the link that I've included for this lesson.

      So thanks for joining me, folks.

      I'll see you in the next lesson.

    1. Hey there everybody, thanks for joining.

      It's great to have you with me in this lesson where we're going to talk about why cloud matters.

      Now to help answer that question, what I want to do firstly is talk to you about the traditional IT infrastructure.

      How did we used to do things?

      What sort of challenges and issues did we face?

      And therefore we'll get a better understanding of what cloud is actually doing to help.

      We can look at how things used to be and how things are now.

      So what we're going to do throughout this lesson is walk through a little bit of a scenario with a fictitious company called Ozzymart.

      So let's go ahead now, jump in and have a chat about the issues that they're currently facing.

      Ozzymart is a fictitious company that works across the globe selling a range of different Australia related paraphernalia.

      Maybe stuffed toys for kangaroos, koalas and that sort of thing.

      Now they've currently got several different applications that they use that they provide access to for their users.

      And currently the Ozzymart team do not use the cloud.

      So when we have a look at the infrastructure hosting these applications, we'll learn that Ozzymart have a couple of servers, one server for each of the applications that they've got configured.

      Now the Ozzymart IT team have had to go and set up these servers with Windows, the applications and all the different data that they need for these applications to work.

      And what it's also important to understand about the Ozzymart infrastructure is all of this is currently hosted on their on-premises customer managed infrastructure.

      So yes, the Ozzymart team could have gone out and maybe used a data center provider.

      But the key point here is that the Ozzymart IT team have had to set up servers, operating systems, applications and a range of other infrastructure to support all of this storage, networking, power, cooling.

      Okay, these are the sorts of things that we have to manage traditionally before we were able to use cloud.

      Now to help understand what sort of challenges that might introduce, let's walk through a scenario.

      We're going to say that the Ozzymart CEO has gone and identified the need for reporting to be performed across these two applications.

      And the CEO wants those reports to be up and ready by the end of this month.

      Let's say that's only a week away.

      So the CEO has instructed the finance manager and the finance manager has said, "Hey, awesome.

      You know what?

      I've found this great app out there on the internet called Reports For You.

      We can buy it, download it and install it.

      I'm going to go tell the IT team to get this up and running straight away."

      So this might sound a little bit familiar to some of you who have worked in traditional IT where sometimes demands can come from the top of the organization and they filter down with really tight timelines.

      So let's say for example, the finance manager is going to go along, talk to the IT team and say, "We need this Reports For You application set up by the end of month."

      Now the IT team might be a little bit scared because, hey, when we look at the infrastructure we've got, it's supporting those two servers and applications okay, but maybe we don't have much more space.

      Maybe we don't have enough storage.

      Maybe we are using something like virtualization.

      So we might not need to buy a brand new physical server and we can run up a virtual Windows server for the Reports For You application.

      But there might just not be enough resources in general.

      CPU, memory, storage, whatever it might be to be able to meet the demands of this Reports For You application.

      But you've got a timeline.

      So you go ahead, you get that server up and running.

      You install the applications, the operating system, the data, all of it as quickly as you can to meet these timelines that you've been given by the finance manager.

      Now maybe it's not the best server that you've ever built.

      It might be a little bit rushed and a little bit squished, but you've managed to get that server up and running with the Reports For You application and you've been able to meet those timelines and provide access to your users.

      Now let's say that you've given access to your users for this Reports For You application.

      Now let's say when they start that monthly reporting job, the Reports For You application needs to talk to the data across your other two applications, the Aussie Mart Store and the Aussie Mart Comply application.

      And it's going to use that data to perform the reporting that the CEO has requested.

      So you kick off this report job on a Friday.

      You hope that it's going to be complete on a Saturday, but maybe it's not.

      You check again on a Sunday and things are starting to get a little bit scary.

      And uh-oh, Monday rolls around, the Reports For You report is still running.

      It has not yet completed.

      And that might not be so great because you don't have a lot of resources on-premises.

      And now all of your applications are starting to perform really poorly.

      So that Reports For You application is still running.

      It's still trying to read data from those other two applications.

      And maybe they're getting really, really slow and, let's hope not, but maybe the applications even go offline entirely.

      Now those users are going to become pretty angry.

      You're going to get a lot of calls to the help desk saying that things are offline.

      And you're probably going to have the finance manager and every other manager reaching out to you saying, this needs to be fixed now.

      So let's say you managed to push through, perhaps through the rest of Monday, and that report finally finishes.

      You clearly need more resources to be able to run this report much more quickly at the end of each month so that you don't have angry users.

      So what are you going to do to fix this for the next month when you need to run the report again?

      Well, you might have a think about ordering some new software and hardware because you clearly don't have enough hardware on-premises right now.

      You're going to have to wait some time for all of that to be delivered.

      And then you're going to have to physically receive and store it, set it up, get it running, and make sure that you've got everything you need for Reports For You to be running with more CPU and resources next time.

      There's a lot of different work that you need to do.

      This is one of the traditional IT challenges that we might face when the business has demands and expectations for things to happen quickly.

      And it's not really necessarily the CEO or the finance manager's fault.

      They are focused on what the business needs.

      And when you work in the technology teams, you need to do what you can to support them so that the business can succeed.

      So how might we do that a little bit differently with cloud?

      Well, with cloud, we could sign up for a cloud provider, we could turn on and off servers as needed, and we could scale up and scale down, scale in and scale out resources, all to meet those demands on a monthly basis.

      So that could be a lot less work to do and it could certainly provide you the ability to respond much more quickly to the demands that come from the business.

      And rather than having to go out and buy all of this new infrastructure that you are only going to use once a month, well, as we're going to learn throughout this course, one of the many benefits of cloud is that you can turn things on and off really quickly and only pay for what you need.

      So what might this look like with cloud?

      Well, with cloud, what we might do is no longer have that on-premises rushed server that we were using for reports for you.

      Instead of that, we can go out to a public cloud provider like AWS, GCP or hopefully Azure, and you can set up those servers once again using a range of different features, products that are all available through the various public cloud providers.

      Now, yes, in this scenario, we are still talking about setting up a server.

      So that is going to take you some time to configure Windows, set up the application, all of the data and configuration that you require, but at least you don't need to worry about the actual physical infrastructure that is supporting that server.

      You don't have to go out, talk to your procurement team, talk to different providers, or wait for physical infrastructure, licensing, software and other assets to be delivered.

      With cloud, as we will learn, you can really quickly get online and up and running.

      And also, if we had that need to ensure that the Reports For You application was running with lots of different resources at the end of the month, it's much easier when we use cloud to just go and turn some servers on and then maybe turn them off at the end of the month when they are no longer required.

      This is the sort of thing that we are talking about with cloud.

      We're only really just scratching the surface of what cloud can do and what cloud actually is.

      But my hope is that through this lesson, you can understand how cloud changes things.

      Cloud allows us to work with technology in a much different way than we traditionally would work with our on-premises infrastructure.

      Another example that shows how cloud is different is that rather than using the Reports For You application, what we might in fact actually choose to do is go to a public cloud provider, to someone that actually has an equivalent Reports For You solution that's entirely built in the cloud, ready to go.

      In this way, not only do we no longer have to manage the underlying physical infrastructure, we don't actually have to manage the application software installation, configuration, and all of that service setup.

      With something like reporting software that's built in the cloud, we would just provide access to our users and only have to pay on a per-user basis.

      So if you've used something like zoom for meetings or Dropbox for data sharing, that's the sort of solution we're talking about.

      So if we consider this scenario for Aussie Mart, we can have a think about the benefits that they might access when they use the cloud.

      Well, we can much more quickly get access to resources to respond to demand.

      If we need to have a lot of different compute capacity working at the end of the month with cloud, like you'll learn, we can easily get access to that.

      If we wanted to add lots of users, we could do that much more simply as well.

      And something that the finance manager might really be happy about in this scenario is that we aren't going to go back and suggest to them that we need to buy a whole heap of new physical infrastructure right now.

      When we think about traditionally how Aussie Mart would have worked with this scenario, they would have to go and buy some new physical servers, resources, storage, networking, whatever that might be, to meet the needs of this reports for you application.

      And really, they're probably going to have to strike a balance between having enough infrastructure to ensure that the reports for you application completes its job quickly and not buying too much infrastructure that's just going to be sitting there unused whilst the reports for you application is not working.

      And really importantly, when we go to cloud, we see this difference of not having to buy lots of physical infrastructure upfront being referred to as capital expenditure versus operational expenditure.

      Really, what we're just saying here is rather than spending a whole big lump sum all at once to get what you need, you can just pay on a monthly basis for what you need when you need it.
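
      A toy comparison of the two spending models might look like the following sketch; every figure here is invented purely to show the shape of the calculation, not real pricing.

```python
# Toy comparison of capital expenditure (buy the hardware up front) versus
# operational expenditure (pay monthly for what you actually use).
# Every figure here is invented purely to show the shape of the calculation.

def capex_total(hardware_cost, monthly_running_cost, months):
    """Own the servers: big up-front spend plus ongoing running costs."""
    return hardware_cost + monthly_running_cost * months

def opex_total(hours_used_per_month, price_per_hour, months):
    """Rent capacity: pay only for the hours the job actually runs."""
    return hours_used_per_month * price_per_hour * months


print(capex_total(20_000, 300, 36))  # owning hardware for 3 years
print(opex_total(72, 1.50, 36))      # renting ~3 days of compute per month
```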

      And finally, one of the other benefits that you'll also see is that we're getting a reduction in the number of different tasks that we have to complete in terms of IT administration, setup of operating systems, management of physical infrastructure, what the procurement team has to manage, and so on.

      Again, right now we're just talking really high level about a fictitious scenario for Aussie Mart to help you to understand the types of things and the types of benefits that we can get access to for cloud.

      So hopefully if you're embarking on a cloud journey, you're gonna have a happy finance manager, CEO, and other team members that you're working with as well.

      Okay, everybody, so that's a wrap to this lesson on why cloud matters.

      As I've said, we're really only just scratching the surface.

      This is just to introduce you to a scenario that can help you to understand the types of benefits we get access to with cloud.

      As we move throughout this course, we'll progressively dive deeper in terms of what cloud is, how you define it, the features you get access to, and other common concepts and terms.

      So thanks for joining me, I'll see you there.

    1. Over the years, forums did not really get smaller, so much as the rest of the internet just got bigger. Reddit, Discord and Facebook groups have filled a lot of that space, but there is just certain information that requires the dedication of adults who have specifically signed up to be in one kind of community. This blog is a salute to those forums that are either worth participating in or at least looking at in bewilderment.

      It's just nice to see people be interested in stuff, and have a group of like-minded people that's also interested in the same stuff! What else is there to it all?

    1. A levelized morality that is rational, global, and actively meliorist fits almost perfectly with this new-age liberalism. This levelized morality can be calculated and outsourced just the same as a manufacturing job. If, for example, it’s more efficient to make an air conditioner in Mexico than in Ohio, then you gut the town in Ohio and ship the parts from Mexico. With EA’s levelized morality, if your money is most effective fighting malaria in Africa, then you stop caring about your neighbors and outsource your moral caring there.

      I see the comparison here, and think that there is merit to it. The "outsourcing" point is particularly strong--i.e. it is tempting to outsource moral actions to simplified optimization functions, which entice us to jump to conclusions about what is good. The question, though, is whether or not Singer encourages this. That is a tougher case. My sense is that Singer would NOT explicitly endorse that kind of behavior, even if he DOES implicitly endorse it, as a way to nudge people to become "more moral than they might otherwise be" according to his tastes.

    2. What I want to convince you of is that the values of neoliberalism don’t just dictate our economic lives but also influence our moral and spiritual lives, and that neoliberalism does so by asserting values that directly conflict with what it is to be human. Fortunately, there is a convenient way to investigate this influence. Gray’s list (“individualist, egalitarian, universalist, and meliorist”), is essentially the philosophy of Peter Singer.

      Calling this out as the thrust of the piece. Is Rudy able to defend this point? (FWIW it seems like it's still an interesting argument, even if the neolib/lib distinction is not made perfectly clear.)

    3. the Singerian story is that you should treat people the same whether they live just down the street from you or on the other side of the world.

      I think at his strongest, this is true. There's so much to be said here though. Related to the last comment, his argument almost forces us to come to some version of this conclusion ourselves, by relying on our intuitions. "Oh, you think it's immoral to let this kid that you don't really know but who is proximate to you drown? Why is that different from a kid on the other side of the world that you don't know?" That seems like a legitimate question. The assertion that distance matters is not much more than an assertion. If this thought experiment doesn't cause you to completely flatten your concern curve, it should cause you to question its derivative...

    4. Civil rights (property and self)Religious rights (worship)Political rights (speech)These rights were foundational to the philosophic framework of America and to our modern, global, liberal system.

      My view is that there is a relatively clear mapping from Locke to Gray. The list that is given here for Locke is clearly lacking details. For example, who are the bearers of these rights? Locke had an answer, but it's not here. Gray gives us an answer: individuals, not collectives; it's not the case that some get more rights than others; all humans, not just some.

    1. One of the early ways of social communication across the internet was with Email, which originated in the 1960s and 1970s. These allowed people to send messages to each other, and look up if any new messages had been sent to them.

      It's also interesting to know that the first version of the internet was just a connection of multiple computers, which then evolved into a network where users could only read, and now we can do so much with it.

    1. Author response:

      The following is the authors’ response to the original reviews.

      Public Reviews: 

      Reviewer #1 (Public Review): 

      Summary: 

      The present study's main aim is to investigate the mechanism of how VirR controls the magnitude of MEV release in Mtb. The authors used various techniques, including genetics, transcriptomics, proteomics, and ultrastructural and biochemical methods. Several observations were made to link VirR-mediated vesiculogenesis with PG metabolism, lipid metabolism, and cell wall permeability. Finally, the authors presented evidence of a direct physical interaction of VirR with the LCP proteins involved in linking PG with AG, providing clues that VirR might act as a scaffold for LCP proteins and remodel the cell wall of Mtb. Since the Mtb cell wall provides a formidable anatomical barrier for the entry of antibiotics, targeting VirR might weaken the permeability of the pathogen along with the stimulation of the immune system due to enhanced vesiculogenesis. Therefore, VirR could be an excellent drug target. Overall, the study is an essential area of TB biology.

      We thank the reviewer for the kind assessment of our paper.  

      Strengths: 

      The authors have done a commendable job of comprehensively examining the phenotypes associated with the VirR mutant using various techniques. Application of Cryo-EM technology confirmed increased thickness and altered arrangement of CM-L1 layer. The authors also confirmed that increased vesicle release in the mutant was not due to cell lysis, which contrasts with studies in other bacterial species. 

      Another strength of the manuscript is that biochemical experiments show altered permeability and PG turnover in the mutant, which fits with later experiments where authors provide evidence of a direct physical interaction of VirR with LCP proteins. 

      Transcriptomics and proteomics data were helpful in making connections with lipid metabolism, which the authors confirmed by analyzing the lipids and metabolites of the mutant. 

      Lastly, using three approaches, the authors confirm that VirR interacts with LCP proteins in Mtb via the LytR_C terminal domain. 

      Altogether, the work is comprehensive, experiments are designed well, and conclusions are made based on the data generated after verification using multiple complementary approaches.

      We are glad that this reviewer finds our study of interest and well designed.   

      Weaknesses: 

      (1) The major weakness is that the mechanism of VirR-mediated EV release remains enigmatic. Most of the findings are observational and only associate enhanced vesiculogenesis observed in the VirR mutant with cell wall permeability and PG metabolism. The authors suggest that EV release occurs during cell division when PG is most fragile. However, this has yet to be tested in the manuscript - the AFM of the VirR mutant, which produces thicker PG with more pore density, displays enhanced vesiculogenesis. No evidence was presented to show that the PG of the mutant is fragile, and there are differences in cell division to explain increased vesiculogenesis. These observations, counterintuitive to the authors' hypothesis, need detailed experimental verification.

      We concur with the reviewer that we do not have direct evidence showing a more fragile PG in the virR mutant and that our statement is supported by a compendium of different results. However, this statement is framed in the discussion section as a possible scenario, acknowledging that more experiments are needed to make such a connection. Nevertheless, we provide additional data on the molecular characterization of virRmut PG using MS to show a significant increase in the abundance of deacetylated muropeptides, a feature that has been linked to altered lysozyme sensitivity in other unrelated Gram-positive bacteria (Fig 8G, H).

      (2.1) Transcriptomic data adds little that is substantial. Transcriptomic data do not correlate with the proteomics data. It remains unclear how VirR deregulates transcription.

      We concur with the reviewer that the information provided by transcriptomics and proteomics is a bit fragmented and, taking into consideration the low correlation between both datasets, it does not help to explain the phenotype observed in the mutant. This issue has also been raised by another reviewer, so we have paid special attention to it.

      To refine the biological interpretation of the transcriptomic data, we have integrated the complemented strain (virRmut-Comp) into our analyses. This led us to narrow down the virR-dependent transcriptomic signature to the sets of genes that appear simultaneously deregulated in virRmut with respect to both the WT and complemented strains in either direction. Furthermore, to identify the transcription factors whose regulatory activity appears disrupted in the mutant strain, we have resorted to an external dataset (Minch et al. 2015) and found a set of 10 transcriptional regulators whose regulons appear significantly impacted in the virRmut strain. While admittedly these improvements do not fully address the question raised by the reviewer, we found that they contribute to a more precise characterization of the VirR-dependent transcriptional signatures, as well as of the regulons in the genome-wide transcriptional regulatory network of the pathogen that appear altered because of virR disruption. We acknowledge that the lack of correlation between whole-cell lysate proteomics and transcriptomic data is intriguing, albeit not uncommon in Mycobacterium tuberculosis. However, differences in the protein cargo of the vesicles from the different strains share key pathways in common with the transcriptomic analyses, such as the enrichments in cell wall biogenesis and peptidoglycan biosynthesis that are observed, in both cases, among genes downregulated in virRmut.

      (2.2) TLCs of lipids are not quantitative. For example, the TLC image of PDIM is poor; quantitative estimation needs metabolic labeling of lipids with radioactive precursors. Further, change in PDIMs is likely to affect other lipids (SL-1, PAT/DAT) that share a common precursor (propionyl- CoA).

      We also agree with the reviewer that TLC, as it is, is not quantitative. However, we do not have access to radioactive procedures. In the new version of the manuscript, we have run TLCs on all the strains tested to resolve SLs and PAT/DATs (Fig S8). Our results show a reduction in the pool of SL and DATs in the mutant, indicating that part of the methylmalonyl pool is diverted to the synthesis of PDIMs.

      (3) The connection of cholesterol with cell wall permeability is tenuous. Cholesterol will serve as a carbon source and contribute to the biosynthesis of methyl-branched lipids such as PDIM, SL-1, and PAD/DAT. Carbon sources also affect other aspects of physiology (redox, respiration, ATP), which can directly affect permeability and import/export of drugs. Authors should investigate whether restoration of the normal level of permeability and EV release is not due to the maintenance of cell wall lipid balance upon cholesterol exposure of the VirR mutant.

      We concur with the reviewer that cholesterol as a sole carbon source is introducing many changes in Mtb cells beside permeability. Consequently, we investigated the virRmut lipid profile upon exposure to either cholesterol or TRZ (Fig S8). Both WT and virRmut-Comp strains were included in the analysis. Polar lipid analysis revealed that either cholesterol or TRZ exposure induced a marked reduction in PIMs and cardiolipin (DPG) levels in virRmut relative to WT or complemented strains (Fig S8A). Analysis of apolar lipids indicated that, relative to glycerol MM, virRmut cultured in the presence of cholesterol or TRZ showed reduced levels of TDM and DATs compared to WT and virRmut-Comp strains (Fig S8B). These results suggest a lack of correlation between modulation of cell permeability by cholesterol and TRZ and lipid levels in the absence of VirR.

      Furthermore, about this section, we would like to mention that we have modified the reference used for the annotation of the DosR regulon: moving from the definition of the regulon used in the previous submission (coming from Rustad et al. PLoS One 3(1), e1502 (2008). The enduring hypoxic response of Mycobacterium tuberculosis) to the more recent characterization of the regulon based on ChIP-seq data, reported in Minch et al. 2015. This was done to ensure coherence with the transcriptomics analyses in the new figure 4.

      (4) Finally, protein interaction data is based on experiments done once without statistical analysis. If the interaction between VirR and LCP protein is expected on the mycobacterial membrane, how the SPLIT_GFP system expressed in the cytoplasm is physiologically relevant. No explanation was provided as to why VirR interacts with the truncated version of LCP proteins and not with the full-length proteins.

      We have repeated the experiments and applied statistics (Figure 9). As stated in the manuscript this assay has successfully been applied to interrogate interactions of domains of proteins embedded in the membrane of mycobacteria. Therefore, we believe that this assay is valid to interrogate interactions between Lcp proteins.

      Reviewer #2 (Public Review): 

      Summary: 

      In this work, Vivian Salgueiro et al. have comprehensively investigated the role of VirR in the vesicle production process in Mtb using state-of-the-art omics, imaging, and several biochemical assays. From the present study, authors have drawn a positive correlation between cell membrane permeability and vesiculogenesis and implicated VirR in affecting membrane permeability, thereby impacting vesiculogenesis. 

      Strengths: 

      The authors have discovered a critical factor (i.e. membrane permeability) that affects vesicle production and release in Mycobacteria, which can broadly be applied to other bacteria and may be of significant interest to other scientists in the field. Through omics and multiple targeted assays such as targeted metabolomics, PG isolation, analysis of Diaminopimelic acid and glycosyl composition of the cell wall, and, importantly, molecular interactions with PG-AG ligating canonical LCP proteins, the authors have established that VirR is a central scaffold at the cell envelope remodelling process which is critical for MEV production. 

      We thank the reviewer for the kind assessment of the paper.

      Weaknesses: 

      Throughout the study, the authors have utilized a CRISPR knockout of VirR. VirR is a non-essential gene for the growth of Mtb; a null mutant of VirR would have been a better choice for the study. 

      According to Tn mutant databases and CRISPR databases, virR is a non-essential gene. However, we have tried to interrupt this gene using the allelic exchange substitution approach via phages many times with no success. So far there is no precedent of a clean KO mutant in this gene. White et al. generated a virR mutant consisting of a deletion of a large fragment of the C-terminal part of the protein, pretty much replicating the effect of the Tn insertion site in the virR Tn mutant. These precedents made us switch to CRISPR technology.

      Recommendations for the authors:

      Reviewer #1 (Recommendations For The Authors): 

      (1) The authors monitored cell lysis by measuring the release of a cytoplasmic iron-responsive protein (IdeR). Since EV release is regulated by iron starvation, which is directly sensed by IdeR, another control (unrelated to iron) is needed. A much better approach would be to use hydrophobic/hydrophilic probes to measure changes in the cell wall envelope.

      Does the VirR complemented strain have a faint IdeR band in the supernatant? The authors need to clarify. Also, it's unclear whether the complementation restored normal VirR levels or not. 

      We thank the reviewer for this recommendation. Consequently, we have complemented these studies with an alternative approach based on serially diluted cultures spotted on solid medium. These results align very well with those of the western blot using IdeR levels in the supernatant as a surrogate of cell lysis.

      We also noticed the presence of a faint IdeR band in the supernatant of the complemented strain, suggestive of possible cell lysis. However, as shown in another section, this did not translate into increased levels of vesiculation. As shown in a previous paper describing VirR as a genetic determinant of vesiculogenesis, VirR levels in the complemented strain are not just restored but increased considerably. This overexpression could explain the potential artifact of a leaky phenotype in the complemented strain. In addition to that previous study, the proteomic data included in this paper clearly show a restoration of VirR levels relative to the WT strain.

      (2) Figure 2C: The data are weak; I don't see any difference in incorporating FDAAs in MM media. Even in the 7H9 medium, differences appear only at the last time point (20 h). What happens at the time point after 20 h (e.g., 48 h)? How do we differentiate between defective permeability or anabolism leading to altered PG? No statistical analysis was performed.

      We apologize for the incomplete assessment of the results in this figure. First, this figure just shows differential incorporation of FDAAs in the different strains in different media. As per previous studies (Kuru et al. (2017) Nat. Protocols), these probes can freely enter cells and may be incorporated into PG by at least three different mechanisms, depending on the species: through the cytoplasmic steps of PG biosynthesis and via two distinct transpeptidation reactions taking place in the periplasm. Consequently, the differential labeling observed in virRmut relative to the WT strain may be a consequence of the enlarged PG observed in the mutant. We have repeated the experiment and generated new data. First, we cultured strains with a blue FDAA (HADA) for 48 h to ensure full labeling. Then, we washed the cells and cultured them in the presence of a second FDAA, this time green (FDL), for 5 h. The differential incorporation of FDL relative to HADA was then measured under the fluorescence microscope. This experiment showed that virRmut incorporates more FDL than the other strains, suggesting altered PG remodeling. We have also modified the figure to make the early and late time points of the time-course clearer and applied statistics.

      (3) Many genes (~ 1700) were deregulated in the mutant. Since these transcriptional changes do not correlate at the protein level in WCL, it's important to determine VirR-specificity. RNA-Seq of VirR complemented strain is important.

      We think this was an extremely important point, and we thank the reviewer for pointing this out. Following their suggestion, we have analyzed and integrated data from the complemented strain, which we have added to the GEO submission, to conclude that, in fact, differences in expression between the complemented strain and either the WT or virRmut are also common and highly significant. Albeit this is not completely unexpected, given the nature of our mutants and the fact that the complemented strain shows significantly higher levels of expression of VirR, both at the RNA and protein levels, than the WT, it motivated us to narrow down our definition of VirR-dependent genes to adopt a combined criterion that integrated the complemented strain. Following this approach, we considered the set of genes upregulated (downregulated) in virRmut as those whose expression in that strain is, at the same time, significantly higher (lower) than in the WT as well as in virRmut-Comp. Working with this integrated definition, the genes considered, 399 upregulated and 502 downregulated genes, are those whose observed expression changes are more likely to be genuinely VirR-dependent rather than any non-specific consequence of the mutagenesis protocols. Despite the lower number of genes in these sets, the repetition of all our functional enrichment analyses based on this combined criterion leads us to conclusions that are largely compatible with those presented in the first version of the paper.

      (4) Transcriptome data provide no clues about how VirR could mediate expression deregulation. Is there an overlap with the regulations/regulons of any Mtb transcription factors? One clue is DosR; however, DosR only regulates 50-60 genes in Mtb. 

      Again, we would like to thank the reviewer for this recommendation, which we have followed accordingly to generate a new section in the results named “VirR-dependent genes intersect the regulons of key transcriptional regulators of the responses to stress, dormancy, and cell wall remodeling”. As we explain in this new section, we resorted to the regulon annotations reported in (Minch et al. 2015), where ChIP-seq data are collected on binding events between a panel of 143 transcription factors (TFs) and DNA genome-wide. The dataset includes 7248 binding events between regulators and DNA motifs in the vicinity of targets’ promoters. After completing enrichment analyses with the resulting regulons, we identified 10 transcription factors whose intersections with the sets of up- and downregulated genes in virRmut were larger than expected by chance (one-tailed Fisher exact test, OR>2, FDR<0.1). Those regulators, which, as guessed by the referee, included DevR, control key pathways related to cell wall remodeling, stress responses, and transition to dormancy.

      (5) How many proteins that are enriched or depleted in the EVs of the VirR mutant also affected transcriptionally in the mutant? How does VirR regulate the abundance and transport of protein in EVs? 

      While the intersection between genes and proteins that appear upregulated in the virRmut strain both at the transcriptional and vesicular protein levels (N=21) was found to be larger than expected by chance (OR=2.0, p=7.0E-3), downregulated genes and proteins in virRmut (N=14) were not enriched in each other. These results indicated, at most, a scarce correlation between RNA and protein levels (a phenomenon nonetheless previously observed in Mycobacterium tuberculosis, among other organisms; see Cortés et al. 2013). Admittedly, the compilation of these omics data is insufficient, by itself, to pinpoint the specific regulatory mechanisms through which the absence of VirR impacts protein abundance in EVs. For the sake of transparency, this has been acknowledged in the discussion section of the resubmitted version of the manuscript.

      (6) The assumption that a depleted pool of methylmalonyl CoA is due to increased utilization for PDIM biosynthesis is problematic. Without flux-based measurement, we don't know if MMCoA is consumed more or produced less, more so because Acc is repressed in the VirR mutant EVs. Further, MMCoA feeds into the TCA cycle and other methyl-branched lipids. Without data on other lipids and metabolism, the depletion of MMCoA is difficult to explain.

      The differential expression statistics compiled suggest that both effects may be at play, since we observed, at the same time, a downregulation of enzymes controlling methylmalonyl synthesis from propionyl-CoA (i.e. Acc, at the protein level), as well as an upregulation of enzymes related to its incorporation into DIM/PDIMs (i.e. pps genes). Both effects, combined, would favor an increased rate of methylmalonyl production, and a slower depletion rate, thus contributing to the higher levels observed. We however concur with the reviewer that fluxomics analyses would contribute to shedding light on this question in a more decisive manner, and we have acknowledged this in the discussion section too.

      (7) Figure 5: Deregulation of rubredoxins and copper indicates impaired redox balance and respiration in the mutant. The data is complex to connect with permeability as TRZ is mycobactericidal and also known to affect the respiratory chain. The authors need to investigate if, in addition to permeability, the presence of VirR is essential for maintaining bioenergetics.

      The data related to rubredoxins and copper have been modified after reanalyzing the transcriptomic data including the complemented strain. Nevertheless, we found that some features of the response to stresses may be impaired in the mutant, including the response to oxidative stress. In this regard, we found enhanced sensitivity of the mutant to H2O2 relative to the WT and complemented strains. This piece of data is now included as Fig S3 in the new version of the manuscript.

      (8) Differential regulation of DoS regulon and cholesterol growth could also be linked to differences in metabolism, redox, and respiration. What is the phenotype of VirR mutants in terms of growth and respiration in the presence of cholesterol/TRZ? 

      We thank the reviewer for this suggestion. Consequently, we have added a new section to the Results suggesting that other aspects of mycobacterial physiology may be affected in the virR mutant when cultured in the presence of cholesterol or TRZ:

      “Modulation of EV levels and permeability in virRmut by cholesterol and TRZ. We next wondered what effect culturing virRmut with either cholesterol or TRZ could have on cell growth, permeability and EV production. In the case of cholesterol, it has also been shown to affect other aspects of physiology (redox, respiration, ATP), which can directly affect permeability (Lu et al., 2017). We monitored virRmut growth in MM supplemented with either glycerol or cholesterol as a sole carbon source, or with TRZ at 3 ug ml-1, for 20 days. While cholesterol significantly enhanced the growth of virRmut after 5 days relative to glycerol medium, supplementation of glycerol medium with TRZ restricted growth during the whole time-course (Fig S5A). The study of cell permeability in the same conditions indicated that the enhanced cell permeability observed in glycerol MM was reduced when virRmut was cultured with cholesterol as the sole carbon source. Conversely, the presence of TRZ increased cell permeability relative to the medium containing solely glycerol (Fig S5C). As we previously observed for the WT strain, either condition (Chol or TRZ) also modified vesiculation levels in the mutant accordingly (Fig S5B). These results strongly indicate that other aspects of mycobacterial physiology besides permeability are also affected in the virR mutant and may contribute to the observed enhanced vesiculation.”

      (9) PDIM TLC is not evident; both DimA and DImB should be clearly shown. It will also be necessary to show other methyl-branched lipids, such as SL-1 and PAT/DAT, because the increase in PDIM can take away methyl malonyl CoA from the biosynthesis of SL-1 and PAT/DAT. Studies have shown that SLI-, PAT/DAT, and PDIM are tightly regulated, where an increase in one lipid pool can affect the abundance of other lipids. Quantitative assays using 14C acetate/propionate are most appropriate for these experiments. 

      We apologize for the fact that TLC analysis is not performed in a radioactive fashion. However, we do not have access to this approach. To answer reviewer question about the fact that other methyl-branched lipids may explain the altered flux of methyl malonyl CoA, we have run TLCs on all the strains tested to resolve SLs and PAT/DATs (Fig S8). Notably, we observed a reduction in the level of these lipids (SL1 or PAT/DAT) in virRmut cultured in glycerol relative to WT and complemented strains, suggesting that the excess of PDIM synthesis can take away methyl malonyl CoA from the biosynthesis of SL-1 and PAT/DAT in the absence of VirR (Fig S8B).

      (10) Figure 8: Interaction between VirR and Lcp proteins. Since these interactions are happening in the membrane, using a split GFP system where proteins are expressed in the cytoplasm is unlikely to be relevant.

      Also, the experiments in Figure 8C were performed once and the representation needs to be clarified; the split GFP needs a positive control, and the negative control (CtpC) is not indicated in the figure.

      We have repeated the experiments and applied statistics (Figure 9). As stated in the manuscript this assay has successfully been applied to interrogate interactions of domains of proteins embedded in the membrane of mycobacteria. Therefore, we believe that this assay is valid to interrogate interactions between Lcp proteins.

      Reviewer #2 (Recommendations For The Authors):  

      (1) Authors should consider making more effort to mine the omics data and integrate them. Given the amount of data that is generated with the omics, they need to be looked at together to find out threads that connect all of them. 

      In the resubmitted version of the paper, we have followed the reviewer's recommendation by incorporating new analyses that integrate the virRmut-C strain, and have tried to place the differences found in the context of broader transcriptional regulatory networks (new Figure 4), as well as of metabolic pathways related to PDIM biosynthesis from methylmalonyl (Figure 6I, already present in the first submission). We consider that these additions contribute to a deeper interpretation of the omics data along the lines of what was suggested by the reviewer.

      (2) The interpretation given by authors in lines 387-390 is an interpretation that does not have sufficient support and, hence should be moved into discussion. 

      We thank the reviewer for this recommendation. We believe that these new analyses and integration studies now support the above statement.

    1. cultural significance” in La Jolla history

      It's really ironic that they were so concerned about protecting something that they see as having cultural significance when they couldn't care less about the importance the land has to the Kumeyaay. It just goes to show that it doesn't matter how many laws are passed to protect Native culture, history, and peoples. Native Americans will always be treated as an afterthought.

    1. Before this centralization of media in the 1900s, newspapers and pamphlets were full of rumors and conspiracy theories. And now as the internet and social media have taken off in the early 2000s, we are again in a world full of rumors and conspiracy theories.

      While it's reasonable to claim that a decentralized media system results in the proliferation of conspiracy theories, I think it's worth noting that conspiratorial press is also very common within centralized media systems. News networks and daytime TV alike have a tendency to report shocking and dubiously truthful (if not outright false and dangerous) news, often manufacturing outrage just the same.

    1. He don't want to die. He wants to live.

      I think with addiction, people automatically assume the person wants to die, but deep down they want to change their ways and live that healthy life. It's just hard for them to break the habit.

    2. I had never before thought of how awful the relationship must be between the musician and his instrument. He has to fill it, this instrument, with the breath of life, his own. He has to make it do what he wants it to do. And a piano is just a piano. It's made out of so much wood and wires and little hammers and big ones, and ivory. While there's only so much you can do with it, the only way to find this out is to try; to try and make it do everything.

      This passage was by far my favorite. I love how Baldwin is saying that even though he may be feeling sad or have faced many things in life, he can still play beautiful music. His life and his breathing are so important as a musician because he can use those feelings and emotions to his advantage to spread the message to anyone who hears it, and as a musician, he has the control to make the instrument do what he wants it to.

    3. "Do you have a better idea?" He just walked up and down the kitchen for a minute. He was as tall as I was. He had started to shave. I suddenly had the feeling that I didn't know him at all

      "I suddenly had the feeling that I didn't know him at all." I like this part a lot. It's simple, but I feel as though it holds a lot of weight. People we once knew can feel like strangers in just a blink of an eye.

    4. And when light fills the room, the child is filled with darkness. He knows that every time this happens he's moved just a little closer to that darkness outside. The darkness outside is what the old folks have been talking about. It's what they've come from. It's what they endure. The child knows that they won't talk any more because if he knows too much about what's happened to them, he'll know too much too soon, about what's going to happen to him

      This particular part is very descriptive; the use of light and dark adds extra feeling to it. "The darkness outside is what the old folks have been talking about" comes right after the description of how, when the light turns on, the child is filled with darkness. The child will see the reality of the world around them, and that will fill them with the darkness the author speaks of. It leaves you with a gloomy, unsettled feeling.

    1. Loop through the list of submissions. The variable submissions_list now has a list of Reddit submissions, so we can use a for loop to go through each submission and then use . to access info from each submission (other pieces of information would need [" "] to access). For each of the submissions, we will use print to display information about the submission.

      This reminds me of Lab 1, where I was so excited after successfully using code to post an article on Reddit. However, it also left me feeling a bit anxious when I considered the broader implications. It made me realize that so much content on the internet can be generated through code, and a single individual has the power to shape public opinion or even spark controversies with just a few lines of code. It's both empowering and a little daunting to think about how easily information can spread and influence people.
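      For readers curious what that lab step looks like in practice, here is a minimal sketch using the praw library; the credentials, subreddit name, and printed attributes are placeholders and assumptions, not the actual lab code.

      ```python
      import praw

      # Credentials are placeholders; a real script needs keys from Reddit's app settings.
      reddit = praw.Reddit(
          client_id="YOUR_CLIENT_ID",
          client_secret="YOUR_CLIENT_SECRET",
          user_agent="annotation-demo script",
      )

      # Grab a handful of submissions (the subreddit name is just an example).
      submissions_list = list(reddit.subreddit("news").hot(limit=10))

      # Loop through the list; dot notation reads attributes of each submission object.
      for submission in submissions_list:
          print(submission.title)   # headline of the post
          print(submission.score)   # net upvotes
          print(submission.url)     # link the post points to
          print("---")
      ```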

    1. Reviewer #2 (Public review):

      Summary:

      Here the authors describe a model for tracking time-varying coupling between neurons from multi-electrode spike recordings. Their approach extends a GLM with static coupling between neurons to include dynamic weights, learned by a long short-term memory (LSTM) model. Each connection has a corresponding LSTM embedding and is read out by a multi-layer perceptron to predict the time-varying weight.
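      A minimal PyTorch-style sketch of the architecture as described in this summary (one LSTM embedding per directed connection, read out by an MLP into a time-varying coupling weight); the module names, layer sizes, and the choice of binned spike counts as input are illustrative assumptions, not the authors' published code.

      ```python
      import torch
      import torch.nn as nn

      class DynamicCoupling(nn.Module):
          """One directed neuron pair: LSTM over binned spike history, MLP readout.

          Sizes and the spike-count input are illustrative assumptions only.
          """

          def __init__(self, hidden_size: int = 32):
              super().__init__()
              # Per time bin the input is the binned spike count of the source and target unit.
              self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
              self.readout = nn.Sequential(
                  nn.Linear(hidden_size, hidden_size),
                  nn.ReLU(),
                  nn.Linear(hidden_size, 1),
              )

          def forward(self, pair_spikes: torch.Tensor) -> torch.Tensor:
              # pair_spikes: (n_trials, n_bins, 2) spike counts for the two units.
              embedding, _ = self.lstm(pair_spikes)       # (n_trials, n_bins, hidden)
              return self.readout(embedding).squeeze(-1)  # time-varying coupling weight

      # Toy usage: dynamic weights for 10 trials of 200 one-ms bins.
      model = DynamicCoupling()
      weights = model(torch.randint(0, 2, (10, 200, 2)).float())
      print(weights.shape)  # torch.Size([10, 200])
      ```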

      Strengths:

      This is an interesting approach to an open problem in neural data analysis. I think, in general, the method would be interesting to computational neuroscientists.

      Weaknesses:

      It is somewhat difficult to interpret what the model is doing. I think it would be worthwhile to add some additional results that make it more clear what types of patterns are being described and how.

      Major Issues:

      Simulation for dynamic connectivity. It certainly seems doable to simulate a recurrent spiking network whose weights change over time, and I think this would be a worthwhile validation for this DyNetCP model. In particular, I think it would be valuable to understand how much the model overfits, and how accurately it can track known changes in coupling strength. If the only goal is "smoothing" time-varying CCGs, there are much easier statistical methods to do this (c.f. McKenzie et al. Neuron, 2021. Ren, Wei, Ghanbari, Stevenson. J Neurosci, 2022), and simulations could be useful to illustrate what the model adds beyond smoothing.

      Stimulus vs noise correlations. For studying correlations between neurons in sensory systems that are strongly driven by stimuli, it's common to use shuffling over trials to distinguish between stimulus correlations and "noise" correlations or putative synaptic connections. This would be a valuable comparison for Fig 5 to show if these are dynamic stimulus correlations or noise correlations. I would also suggest just plotting the CCGs calculated with a moving window to better illustrate how (and if) the dynamic weights differ from the data.
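      A minimal numpy sketch of the moving-window CCG mentioned above, assuming binned spike trains; the window length, step, lag range, and the absence of normalization or jitter correction are simplifying assumptions.

      ```python
      import numpy as np

      def moving_window_ccg(spikes_i, spikes_j, window=50, step=10, max_lag=10):
          """Trial-averaged cross-correlogram computed in sliding windows of trial time.

          spikes_i, spikes_j: (n_trials, n_bins) arrays of binned spikes for two units.
          Returns an array of shape (n_windows, 2*max_lag + 1); normalization and
          jitter correction are omitted, and np.roll's wrap-around at the window
          edges is ignored for brevity.
          """
          n_bins = spikes_i.shape[1]
          ccgs = []
          for start in range(0, n_bins - window + 1, step):
              a = spikes_i[:, start:start + window]
              b = spikes_j[:, start:start + window]
              row = []
              for lag in range(-max_lag, max_lag + 1):
                  shifted = np.roll(b, lag, axis=1)
                  row.append((a * shifted).sum(axis=1).mean())  # mean coincidences per trial
              ccgs.append(row)
          return np.asarray(ccgs)

      # Toy usage: 100 trials, 500 one-ms bins of sparse random spiking.
      rng = np.random.default_rng(0)
      spk_i = (rng.random((100, 500)) < 0.02).astype(float)
      spk_j = (rng.random((100, 500)) < 0.02).astype(float)
      ccg_over_time = moving_window_ccg(spk_i, spk_j)
      print(ccg_over_time.shape)  # (46, 21)
      ```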

      Minor Issues:

      Introduction - it may be useful to mention that there have been some previous attempts to describe time-varying connectivity from spikes both with probabilistic models: Stevenson and Kording, Neurips (2011), Linderman, Stock, and Adams, Neurips (2014), Robinson, Berger, and Song, Neural Computation (2016), Wei and Stevenson, Neural Comp (2021) ... and with descriptive statistics: Fujisawa et al. Nat Neuroscience (2008), English et al. Neuron (2017), McKenzie et al. Neuron (2021).

      In the sections "Static DyNetCP ...reproduce". It may be useful to have some additional context to interpret the CCG-DyNetCP correlations and CCG-GLMCC correlations (for simulation). If I understand right, these are on training data (not cross-validated) and the DyNetCP model is using NM+1 parameters to predict ~100 data points (It would also be good to say what N and M are for the results here). The GLMCC model has 2 or 3 parameters (if I remember right?).

      In the section "Static connectivity inferred by the DyNetCP from in-vivo recordings is biologically interpretable"... I may have missed it, but how is the "functional delay" calculated? And am I understanding right that for the DyNetCP you are just using [w_i\toj, w_j\toi] in place of the CCG?

    2. Author response:

      The following is the authors’ response to the original reviews.

      We thank the reviewers for the constructive criticism and detailed assessment of our work, which helped us to significantly improve our manuscript. We made significant changes to the text to better clarify our goals and approaches. To make our main goal of extracting network dynamics clearer, and to highlight the main advantage of our method in comparison with prior work, we incorporated Videos 1-4 into the main text. We hope that these changes, together with the rest of our responses, convincingly demonstrate the utility of our method in producing results that are typically omitted from analysis by other methods and can provide important novel insights into the dynamics of brain circuits.

      Reviewer #1 (Public Review):

      (1) “First, this paper attempts to show the superiority of DyNetCP by comparing the performance of synaptic connectivity inference with GLMCC (Figure 2).”

      We believe that the goals of our work were not adequately formulated in the original manuscript, which generated this apparent misunderstanding. As opposed to most of the prior work focused on reconstruction of static connectivity from spiking data (including GLMCC), our ultimate goal is to learn the dynamic connectivity structure, i.e. to extract the time-dependent strength of directed connectivity in the network. Since this formulation is fundamentally different from most of the prior work, the goal here is not to show "improvement" or "superiority" over prior methods, which mostly focused on inference of static connectivity, but rather to thoroughly validate our approach and to show its usefulness for the dynamic analysis of experimental data.

      (2) “This paper also compares the proposed method with standard statistical methods, such as jitter-corrected CCG (Figure 3) and JPSTH (Figure 4). It only shows that the results obtained by the proposed method are consistent with those obtained by the existing methods (CCG or JPSTH), which does not show the superiority of the proposed method.”

      The major problem in designing such a dynamic model is the virtual absence of ground-truth data, either as verified experimental datasets or as synthetic data with known time-varying connectivity. In this situation, optimization of the model hyper-parameters and model verification largely becomes a "shot in the dark". Therefore, to resolve this problem and make the model generalizable, we adopted a two-stage approach in which we first learn static connections and then, in a second stage, infer temporally varying dynamic connectivity. Dividing the problem into two stages enables us to compare the results of both stages separately to traditional descriptive statistical approaches. Static connectivity results of the model obtained in stage 1 are compared to the classical pairwise CCG (Fig.2A,B) and GLMCC (Fig.2 C,D,E), while dynamic connectivity obtained in stage 2 is compared to the pairwise JPSTH (Fig.4D,E).

      Importantly, the goal here is therefore not to "outperform" classical descriptive statistical or other approaches, but rather to have solid guidance for designing the model architecture and optimizing hyper-parameters. For example, to produce the static weight results in Fig.2A,B that are statistically indistinguishable from the results of the classical CCG, the procedure for selecting the weights that contribute to averaging is designed as shown in Fig.9 and discussed in detail in the Methods. Optimization of the L2 regularization parameter is illustrated in Fig.4 – figure supplement 1, which enables us to produce dynamic weights very close to the cJPSTH, as evidenced by the Pearson coefficient and TOST statistical tests. These comparisons demonstrate that the results of CCG and JPSTH are indeed faithfully reproduced by our model, which, we conclude, is sufficient justification to apply the model to analyze experimental results.

      (3) “However, the improvement in the synaptic connectivity inference does not seem to be convincing.”

      We are grateful to the reviewer for pointing out this issue, which, as mentioned above, we believe results from the failure of the original manuscript to clarify the major motivation for this comparison. The comparison of static connectivity inferred by stage 1 of our model with the results of GLMCC in Fig.2C,D,E is aimed at optimizing two further important parameters - the pair spike threshold and the peak height threshold. In Fig. 2D we show that when the peak height threshold is reduced from a rigorous 7 standard deviations (SD) to just 5 SD, our model recovers 74% of the ground-truth connections, which is in fact better than the 69% produced by GLMCC for a comparable pair spike threshold of 80. As explained above, we do not intend to emphasize here that our model is "superior", since that was not our goal, but rather use this comparison to illustrate the approach for optimizing thresholds for unit and pair filtering, as described in detail in Fig. 11 and the corresponding section in Methods.

      To address these misunderstandings and better clarify the goal of our work, we changed the text in the Introduction accordingly. We also moved Videos 1-4 from the Supplementary Materials into the main text as Video 1, Video 2, Video 3, and Video 4. In fact, these videos represent the main advantage (or "superiority") of our model with respect to prior art, in that it enables inference of the time-dependent dynamics of network connectivity as opposed to static connections.

      (4) “While this paper compares the performance of DyNetCP with a state-of-the-art method (GLMCC), there are several problems with the comparison. For example: 

      (a) This paper focused only on excitatory connections (i.e., ignoring inhibitory neurons). 

      (b) This paper does not compare with existing neural network-based methods (e.g., CoNNECT: Endo et al. Sci. Rep. 2021; Deep learning: Donner et al. bioRxiv, 2024).

      (c) Only a population of neurons generated from the Hodgkin-Huxley model was evaluated.”

      (a) In general, the model of Eq.1 is agnostic to whether the connections it recovers are excitatory or inhibitory. In fact, Fig. 5 and Fig.6 illustrate inferred dynamic weights for both excitatory (red arrows) and inhibitory (blue arrows) connections between excitatory (red triangles) and inhibitory (blue circles) neurons. Similarly, inhibitory and excitatory dynamic interactions between connections are shown in Fig. 7 for the larger network spanning all visual cortices.

      (b) As stated above, the goal of comparing the static connectivity results of stage 1 of our model to other approaches is to guide the choice of thresholds and the optimization of hyperparameters, rather than to claim "superiority" of our model. Therefore, comparison with the "static" CNN-based model of Endo et al. or the ANN-based static model of Donner et al. (submitted to bioRxiv several months after our submission to eLife) is beyond the scope of this work.

      (c) We have chosen exactly the same sub-population of neurons from the synthetic HH dataset of Ref. 26 as is used in Fig.6 of Ref. 26, which provides a direct comparison between the connections reconstructed by GLMCC in the original Ref. 26 and the results of our model.

      (5) “In summary, although DyNetCP has the potential to infer synaptic connections more accurately than existing methods, the paper does not provide sufficient analysis to make this claim. It is also unclear whether the proposed method is superior to the existing methods for estimating functional connectivity, such as jitter-corrected CCG and JPSTH. Thus, the strength of DyNetCP is unclear.”

      As we explained above, we have no intention of claiming that our model is more accurate than existing static approaches. In fact, it is not feasible to obtain a better estimate of connectivity than direct descriptive statistical methods such as CCG or JPSTH. Instead, comparisons with static (CCG and GLMCC) and temporal (JPSTH) approaches are used here to guide the choice of the model thresholds and to inform the optimization of hyper-parameters, to make the prediction of dynamic network connectivity reliable. The main strength of DyNetCP is inference of dynamic connectivity, as illustrated in Videos 1-4. We demonstrated the utility of the method on the largest in-vivo experimental dataset available today and extracted the dynamics of cortical connectivity in local and global visual networks. This information is unattainable with any other contemporary method we are aware of.

      Reviewer #1 (Recommendations for the Authors):

      (6) “First, the authors should clarify the goal of the analysis, i.e., to extract either the functional connectivity or the synaptic connectivity. While this paper assumes that they are the same, it should be noted that functional connectivity can be different from synaptic connectivity (see Stevenson IH, Neurons Behav. Data Anal. Theory 2023).”

      The goal of our analysis is to extract the dynamics of spiking correlations. In this paper we intentionally avoided assigning a biological interpretation to the inferred dynamic weights. Our goal was to demonstrate that a trove of additional information on neural coding is hidden in the dynamics of neural correlations, information that is typically omitted from the analysis of neuroscience data.

      A biological interpretation of the extracted dynamic weights can follow the terminology of short-term plasticity between synaptically connected neurons (Refs 25, 33-37) or of spike transmission strength (Refs 30-32, 46). Alternatively, temporal changes in connection weights can be interpreted in terms of dynamically reconfigurable functional interactions of cortical networks (Refs 8-11, 13, 47) through which information flows. We cannot exclude an interpretation that combines both ideas. In any event, our goal here is to extract these signals for a pair (Video 1, Fig.4), a cortical local circuit (Video 2, Fig.5), and the whole visual cortical network (Videos 3, 4 and Fig.7).

      To clarify this statement, we included a paragraph in the discussion section of the revised paper. 

      (7) “Finally, it would be valuable if the authors could also demonstrate the superiority of DyNetCP qualitatively. Can DyNetCP discover something interesting for neuroscientists from the large-scale in vivo dataset that the existing method cannot?”

      The model discovers dynamic, time-varying changes in synchronous neuronal spiking (Videos 1-4) that more traditional methods like CCG or GLMCC are not able to detect. The revealed dynamics occurs on very short time scales, of the order of just a few ms, during stimulus presentation. Calculations of the intrinsic dimensionality of the spiking manifold (Fig. 8) reveal that up to 25 additional dimensions of the neural code can be recovered using our approach. These dimensions are typically omitted from the analysis of neural circuits using traditional methods.

      Reviewer #2 (Public Review):

      (1) “Simulation for dynamic connectivity. It certainly seems doable to simulate a recurrent spiking network whose weights change over time, and I think this would be a worthwhile validation for this DyNetCP model. In particular, I think it would be valuable to understand how much the model overfits, and how accurately it can track known changes in coupling strength.”

      We are very grateful to the reviewer for this insight. Verification of the model on synthetic data with known time-varying connectivity would indeed be very useful. We did generate a synthetic dataset to test some of the model performance metrics, i.e. its ability to distinguish True Positive (TP) from False Positive (FP) "serial" or "common input" connections (Fig.10A,B). Comparison of dynamic and static weights might indeed help to distinguish TP connections from artifactual FP connections.

      Generating a large synthetic dataset with known dynamic connections that mimics interactions in cortical networks is, however, a separate and nontrivial task that is beyond the scope of this work. Instead, we designed a model architecture in which overfitting can be tested in two consecutive stages by comparison with descriptive statistical approaches, CCG and JPSTH. The static stage 1 of the model predicts correlations that are statistically indistinguishable from the CCG results (Fig.2A,B). The dynamic stage 2 of the model produces dynamic weight matrices that faithfully reproduce the cJPSTH (Fig.4D,E). The calculated Pearson correlation coefficients and TOST testing enable optimization of the L2 regularization parameter, as shown in Fig.4 – supplement 1 and described in detail in the Methods section. The ability to compare the results of both stages separately to descriptive statistical results is the main advantage of the chosen model architecture, which allows us to verify that the model does not overfit and can predict changes in coupling strength at least as well as descriptive statistical approaches (see also our answer to Reviewer #1's questions above).

      (2) “If the only goal is "smoothing" time-varying CCGs, there are much easier statistical methods to do this (c.f. McKenzie et al. Neuron, 2021. Ren, Wei, Ghanbari, Stevenson. J Neurosci, 2022), and simulations could be useful to illustrate what the model adds beyond smoothing.”

      We are grateful to the reviewer for bringing up these very interesting and relevant references, which we have added to the Discussion section of the paper. Of particular interest is the second one, which calculates a time-varying CCG weight ("efficacy" in that paper's terms) on the same Allen Institute Visual dataset our work uses. It is indeed an elegant way to extract time-variable coupling strength, similar to what our model generates. The major difference of our model from that of Ren et al., as well as from GLMCC and other statistical approaches, is that DyNetCP learns the connections of an entire network jointly in one pass, rather than calculating coupling separately for each pair in the dataset without considering the relative influence of other pairs in the network. Hence, our model can infer connections beyond pairwise (see Fig. 11 and the corresponding discussion in Methods) while remaining computationally efficient.

      (3) “Stimulus vs noise correlations. For studying correlations between neurons in sensory systems that are strongly driven by stimuli, it's common to use shuffling over trials to distinguish between stimulus correlations and "noise" correlations or putative synaptic connections. This would be a valuable comparison for Figure 5 to show if these are dynamic stimulus correlations or noise correlations. I would also suggest just plotting the CCGs calculated with a moving window to better illustrate how (and if) the dynamic weights differ from the data.”

      Thank you for this suggestion. Note that for all weight calculations in our model, the standard jitter correction procedure of Ref. 33 (Harrison et al., Neural Comput 2009) is first applied to mitigate the influence of correlated slow fluctuations (slow "noise"). Please also note that to obtain the results in Fig. 5 we split the 440 total experimental trials for this session (when the animal is running, see Table 1) randomly into 352 training and 88 validation trials, by selecting 44 training trials and 11 validation trials from each configuration of contrast or grating angle. We checked that changing this random selection produced the very same results as shown in Fig.5.
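      For illustration, a small numpy sketch of the stratified split described above, assuming eight stimulus configurations of 55 trials each (inferred from 440 = 8 × 55 and 44 + 11 = 55); the configuration count and trial ordering are assumptions.

      ```python
      import numpy as np

      # 8 configurations x 55 trials = 440; 44 train + 11 validation per configuration.
      rng = np.random.default_rng(0)
      train_idx, val_idx = [], []
      for config in range(8):                       # configuration count inferred, not stated
          trials = np.arange(config * 55, (config + 1) * 55)
          rng.shuffle(trials)
          train_idx.extend(trials[:44])
          val_idx.extend(trials[44:])

      assert len(train_idx) == 352 and len(val_idx) == 88
      ```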

      A comparison of the descriptive statistical results of the pairwise cJPSTH and the model is shown in Fig. 4D,E. The difference between the two is characterized in detail in Fig.4 – supplement 1, as evidenced by the Pearson coefficient and TOST statistical tests.
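      As a rough illustration of this kind of comparison, here is a small scipy sketch computing a Pearson correlation and a paired TOST equivalence test between matched model weights and cJPSTH values; the equivalence margin and the use of a one-sample t-test on the differences are assumptions, not the procedure used in the paper.

      ```python
      import numpy as np
      from scipy import stats

      def compare_weights_to_cjpsth(model_w, cjpsth, margin=0.1):
          """Pearson correlation plus a paired TOST equivalence test.

          model_w, cjpsth: 1-D arrays of matched dynamic-weight and cJPSTH values.
          margin: equivalence bounds on the mean difference (illustrative choice).
          """
          r, _ = stats.pearsonr(model_w, cjpsth)
          diff = np.asarray(model_w) - np.asarray(cjpsth)
          # Two one-sided tests: mean(diff) > -margin and mean(diff) < +margin.
          p_lower = stats.ttest_1samp(diff, -margin, alternative="greater").pvalue
          p_upper = stats.ttest_1samp(diff, +margin, alternative="less").pvalue
          return r, max(p_lower, p_upper)  # small TOST p-value -> equivalence

      # Toy usage with synthetic, nearly identical traces.
      rng = np.random.default_rng(1)
      cjpsth = rng.normal(size=200)
      model_w = cjpsth + rng.normal(scale=0.05, size=200)
      r, p_tost = compare_weights_to_cjpsth(model_w, cjpsth)
      print(round(r, 3), p_tost < 0.05)
      ```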

      Reviewer #2 (Recommendations for the Authors):

      (4) “The method is described as "unsupervised" in the abstract, but most researchers would probably call this "supervised" (the static model, for instance, is logistic regression).”

      The model architecture is composed of two stages to make parameter optimization well grounded. While the first stage is a regression, the second and most important stage is not. Therefore, we believe the term "unsupervised" is justified.

      (5) “Introduction - it may be useful to mention that there have been some previous attempts to describe time-varying connectivity from spikes both with probabilistic models: Stevenson and Kording, Neurips (2011), Linderman, Stock, and Adams, Neurips (2014), Robinson, Berger, and Song, Neural Computation (2016), Wei and Stevenson, Neural Comp (2021) ... and with descriptive statistics: Fujisawa et al. Nat Neuroscience (2008), English et al. Neuron (2017), McKenzie et al. Neuron (2021).”

      We are very grateful to both reviewers for bringing up these very interesting and relevant references, which we have gladly included in the Introduction and Discussion sections.

      (6) “In the section "Static connectivity inferred by the DyNetCP from in-vivo recordings is biologically interpretable"... I may have missed it, but how is the "functional delay" calculated? And am I understanding right that for the DyNetCP you are just using [w_i\toj, w_j\toi] in place of the CCG?”

      The functional delay is calculated as the time lag of the maximum (or minimum) in the CCG (or in the static weight matrix). The static weight that the model extracts is indeed the w_i w_j product. We changed the text in this section to better clarify these definitions.
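      A tiny numpy sketch of the functional-delay definition given here (the lag of the largest CCG deviation); the function name and the mean-subtraction step are illustrative choices, not the exact computation used in the paper.

      ```python
      import numpy as np

      def functional_delay(ccg, lags_ms):
          """Lag (in ms) of the largest-magnitude deviation of the CCG from its mean."""
          extremum = np.argmax(np.abs(ccg - ccg.mean()))   # picks a peak or a trough
          return lags_ms[extremum]

      lags_ms = np.arange(-10, 11)                          # +/-10 ms, 1 ms bins
      toy_ccg = np.exp(-0.5 * ((lags_ms - 2) / 1.5) ** 2)   # toy CCG peaked at +2 ms
      print(functional_delay(toy_ccg, lags_ms))             # -> 2
      ```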

      (7) “P14 typo "sparce spiking" sparse”

      Fixed. Thank you. 

      (8) “Suggest rewording "Extra-laminar interactions reveal formation of neuronal ensembles with both feedforward (e.g., layer 4 to layer 5), and feedback (e.g., layer 5 to layer 4) drives." I'm not sure this method can truly distinguish common input from directed, recurrent cortical effects. Just as an example in Figure 5, it looks like 2->4, 0->4, and 3->2 are 0 lag effects. If you wanted to add the "functional delay" analysis to this laminar result, that could support some stronger claims about directionality, though.”

      The time lags for the results of Fig. 5 are indeed small but quantifiable. The left panel of Fig. 5A shows static results with the correlation peaks shifted by 1 ms from zero lag.

      (9) “Methods - I think it would be useful to mention how many parameters the full DyNetCP model has.”

      Overall, once the architecture of Fig.1C is established, the dynamic weight averaging procedure is selected (Fig.9), and the Fourier features are introduced (Fig.10), there are just a few parameters to optimize, including the L2 regularization (Fig.4 – supplement 1) and the loss coefficient (Fig.1 – figure supplement 1A). Other variables, common to all statistical approaches, include the bin sizes in lag time and in trial time. Decreasing the bin size will improve time resolution while decreasing the number of spikes in each bin available for reliable inference. Therefore, the spike-number threshold and the other related thresholds α_s, α_w, α_p, as well as λ_i and λ_j, need to be adjusted accordingly (Fig.11), as discussed in detail in the Methods, Section 4. We included this sentence in the text.

      (10) “It may be useful to also mention recent results in mice (Senzai et al. Neuron, 2019) and monkeys (Trepka...Moore. eLife, 2022) that are assessing similar laminar structures with CCGs.”

      Thank you for pointing out these very interesting references. We added a paragraph to the "Dynamic connectivity in VISp primary visual area" section comparing our results with these findings. In short, we observed that connections are distributed across the cortical depth with nearly the same maximum weights (Fig.7A), which is inconsistent with the greatly diminished static connection efficacy within <200 µm of the source observed by Trepka et al., 2022. It is consistent, however, with the work of Senzai et al., 2019, which reveals much stronger long-distance correlations between layer 2/3 and layer 5 during waking than during sleep states. In both cases these observations represent static connections averaged over trial time, while the results presented in Video 3 and Fig.7A show strong temporal modulation of the connection strength between all the layers during stimulus presentation. Therefore, our results demonstrate that tracking dynamic connectivity patterns in local cortical networks can be invaluable in assessing circuit-level dynamic network organization.

    1. They remind us just how long it’s been clear there’s something wrong with what we’re doing as well as just how little progress we’ve made in acting on that realization.

      The fact that this was written in 2011 and people still feel like this in 2024 shows how little we have improved.

    1. The easiest way to describe the programming method used in most projects today was given to me by a teacher who was explaining how he teaches programming. “Think like a computer,”

      Due to the apparent issues with this type of thinking, common pedagogy has changed somewhat since then. As someone with no prior computer-science knowledge, I have seen that this course is far more focused on the art of problem solving than on the logic of machines. We are able to atomically break down the way our programs work for the computer with the stepper, but just as often it's useful instead to undergo a design process. We think about what we want to achieve and break it down into goals a human finds intuitive, rather than focusing on the thought process of the computer.