    1. Windows systems provide the Windows Task Manager, a tool that includes information for current applications as well as processes, CPU and memory usage, and networking statistics. A screen shot of the task manager in Windows 10 appears in Figure 2.19.

      I use the Task Manager all the time to check which programs are using the most resources. It’s helpful that Windows provides both process-level and overall system statistics in one place. I’m curious how the Task Manager gathers this data behind the scenes. Does it use counters like the Linux tools do?

    2. Virtualization

      Virtualization is a technology that allows a single physical computer to run multiple virtual machines, where each virtual machine behaves like an independent computer with its own operating system and applications. This is made possible through a software layer called a hypervisor, which manages and shares the underlying hardware resources such as CPU, memory, and storage.

    3. Operating systems keep track of system activity through a series of counters, such as the number of system calls made or the number of operations performed to a network device or disk. The following are examples of Linux tools that use counters:

      It’s interesting to see how operating systems use simple counters to monitor activity. I didn’t realize that just counting things like system calls or disk operations could provide such valuable insight into system performance. I wonder how accurate this method is compared to other monitoring techniques.
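
      To make this concrete for myself, here is a small sketch (assuming a Linux system) that reads a few of these counters straight from /proc/stat, the same text interface that tools such as vmstat draw on:

      ```c
      /* Minimal sketch: print a few of the kernel's system-wide counters.
       * Linux exposes them as text in /proc/stat; tools such as vmstat
       * build their reports from files like this one. */
      #include <stdio.h>
      #include <string.h>

      int main(void) {
          FILE *fp = fopen("/proc/stat", "r");
          if (fp == NULL) {
              perror("/proc/stat");
              return 1;
          }
          char line[4096];
          while (fgets(line, sizeof(line), fp)) {
              /* "cpu"  : CPU time counters (user, system, idle, ...)
               * "ctxt" : number of context switches since boot
               * "intr" : number of interrupts serviced since boot */
              if (strncmp(line, "cpu", 3) == 0 ||
                  strncmp(line, "ctxt", 4) == 0 ||
                  strncmp(line, "intr", 4) == 0)
                  fputs(line, stdout);
          }
          fclose(fp);
          return 0;
      }
      ```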

    4. We mentioned earlier that performance tuning seeks to improve performance by removing processing bottlenecks. To identify bottlenecks, we must be able to monitor system performance. Thus, the operating system must have some means of computing and displaying measures of system behavior. Tools may be characterized as providing either per-process or system-wide observations. To make these observations, tools may use one of two approaches—counters or tracing. We explore each of these in the following sections.

      It makes sense that before you can fix performance issues, you need to measure them. I like that the text distinguishes between monitoring individual processes and monitoring the whole system; that distinction seems key to spotting exactly where a bottleneck is.

    5. Operating-system debugging and process debugging frequently use different tools and techniques due to the very different nature of these two tasks. Consider that a kernel failure in the file-system code would make it risky for the kernel to try to save its state to a file on the file system before rebooting. A common technique is to save the kernel's memory state to a section of disk set aside for this purpose that contains no file system. If the kernel detects an unrecoverable error, it writes the entire contents of memory, or at least the kernel-owned parts of the system memory, to the disk area. When the system reboots, a process runs to gather the data from that area and write it to a crash dump file within a file system for analysis. Obviously, such strategies would be unnecessary for debugging ordinary user-level processes.

      It’s interesting how kernel debugging has to plan for the fact that the file system itself might be broken. Writing the memory state to a dedicated disk area outside the file system is clever; it ensures that crash data isn’t lost even if the system fails completely.

    6. Debugging user-level process code is a challenge. Operating-system kernel debugging is even more complex because of the size and complexity of the kernel, its control of the hardware, and the lack of user-level debugging tools. A failure in the kernel is called a crash. When a crash occurs, error information is saved to a log file, and the memory state is saved to a crash dump.

      It’s striking how much harder it is to debug the kernel than regular programs. The idea of a crash dump makes sense: capturing the kernel’s memory state seems essential, since normal user-level tools can’t reach it.

    7. If a process fails, most operating systems write the error information to a log file to alert system administrators or users that the problem occurred. The operating system can also take a core dump—a capture of the memory of the process—and store it in a file for later analysis. (Memory was referred to as the “core” in the early days of computing.) Running programs and core dumps can be probed by a debugger, which allows a programmer to explore the code and memory of a process at the time of failure.

      I like how the text explains core dumps; it’s interesting that “core” comes from the older term for memory. It’s also useful that a modern OS can capture a process’s memory when it fails, so developers can inspect exactly what went wrong.
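
      To try this idea out (assuming a Linux machine with gcc and gdb; the file name crashme.c is my own), this little program raises its core-file size limit and then crashes on purpose, so the resulting core dump can be probed with a debugger:

      ```c
      /* Sketch: force a crash so the OS produces a core dump.
       * Build:  gcc -g crashme.c -o crashme
       * Run:    ./crashme          (core file location varies by system)
       * Probe:  gdb ./crashme core (explore code and memory at the failure) */
      #include <sys/resource.h>
      #include <stdio.h>

      int main(void) {
          /* Allow core dumps of unlimited size for this process. */
          struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
          setrlimit(RLIMIT_CORE, &rl);

          int *p = NULL;
          printf("about to fail...\n");
          return *p;   /* null dereference: SIGSEGV, kernel writes a core dump */
      }
      ```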

    8. We have mentioned debugging from time to time in this chapter. Here, we take a closer look. Broadly, debugging is the activity of finding and fixing errors in a system, both in hardware and in software. Performance problems are considered bugs, so debugging can also include performance tuning, which seeks to improve performance by removing processing bottlenecks. In this section, we explore debugging process and kernel errors and performance problems. Hardware debugging is outside the scope of this text.

      I find it interesting that debugging isn’t just about fixing crashes or wrong outputs; it also includes improving performance. I didn’t realize that performance issues are considered bugs that need debugging at the system or kernel level.

    9. Finally, boot loaders for most operating systems—including Windows, Linux, and macOS, as well as both iOS and Android—provide booting into recovery mode or single-user mode for diagnosing hardware issues, fixing corrupt file systems, and even reinstalling the operating system. In addition to hardware failures, computer systems can suffer from software errors and poor operating-system performance, which we consider in the following section.

      It’s useful that modern boot loaders offer recovery or single-user modes. This makes troubleshooting hardware or software problems much easier, and it’s interesting that this feature is standard across Windows, Linux, macOS, iOS, and Android.

    10. To save space as well as decrease boot time, the Linux kernel image is a compressed file that is extracted after it is loaded into memory. During the boot process, the boot loader typically creates a temporary RAM file system, known as initramfs. This file system contains necessary drivers and kernel modules that must be installed to support the real root file system (which is not in main memory). Once the kernel has started and the necessary drivers are installed, the kernel switches the root file system from the temporary RAM location to the appropriate root file system location. Finally, Linux creates the systemd process, the initial process in the system, and then starts other services (for example, a web server and/or database). Ultimately, the system will present the user with a login prompt. In Section 11.5.2, we describe the boot process for Windows.

      It’s interesting how Linux uses a temporary RAM file system (initramfs) to get everything started before switching to the real root file system. I wonder how this compares in speed and reliability to the Windows boot process mentioned later.

    11. Many recent computer systems have replaced the BIOS-based boot process with UEFI (Unified Extensible Firmware Interface). UEFI has several advantages over BIOS, including better support for 64-bit systems and larger disks. Perhaps the greatest advantage is that UEFI is a single, complete boot manager and therefore is faster than the multistage BIOS boot process.

      It’s surprising how UEFI has replaced the old BIOS process. I’m curious how much faster UEFI is in practice, and whether it makes a noticeable difference when starting up modern computers.

    12. Some computer systems use a multistage boot process: When the computer is first powered on, a small boot loader located in nonvolatile firmware known as BIOS is run. This initial boot loader usually does nothing more than load a second boot loader, which is located at a fixed disk location called the boot block. The program stored in the boot block may be sophisticated enough to load the entire operating system into memory and begin its execution. More typically, it is simple code (as it must fit in a single disk block) and knows only the address on disk and the length of the remainder of the bootstrap program.

      It’s interesting how the boot process is broken into stages. I wonder why the first boot loader has to be so tiny—just enough to load the next stage. Is it mainly because of space constraints in the BIOS firmware?
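
      To see what that tiny first stage looks for, here is a sketch based on the classic BIOS/MBR layout (an assumption on my part, since the passage doesn’t name a format): the boot block is the first 512-byte sector, and firmware checks for the 0x55AA signature in its last two bytes.

      ```c
      /* Sketch: read the 512-byte boot block from a disk image and check
       * the 0x55AA boot signature that BIOS-style firmware looks for.
       * Usage: ./bootcheck disk.img   (disk.img is a hypothetical image file) */
      #include <stdio.h>

      int main(int argc, char *argv[]) {
          if (argc != 2) {
              fprintf(stderr, "usage: %s <disk-image>\n", argv[0]);
              return 1;
          }
          unsigned char block[512];
          FILE *fp = fopen(argv[1], "rb");
          if (fp == NULL || fread(block, 1, sizeof(block), fp) != sizeof(block)) {
              perror(argv[1]);
              return 1;
          }
          fclose(fp);
          /* The signature occupies the last two bytes of the boot block. */
          if (block[510] == 0x55 && block[511] == 0xAA)
              printf("boot signature present: firmware would run this block\n");
          else
              printf("no boot signature: firmware would not boot from this block\n");
          return 0;
      }
      ```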

    13. At a slightly less tailored level, the system description can lead to the selection of precompiled object modules from an existing library. These modules are linked together to form the generated operating system. This process allows the library to contain the device drivers for all supported I/O devices, but only those needed are selected and linked into the operating system. Because the system is not recompiled, system generation is faster, but the resulting system may be overly general and may not support different hardware configurations.

      This explains how precompiled modules help build an OS quickly without full recompilation. It makes sense for speed, but I’m curious—how often does this ‘overly general’ approach cause compatibility problems with less common hardware?

    14. Configuring the system involves specifying which features will be included, and this varies by operating system. Typically, parameters describing how the system is configured are stored in a configuration file of some type, and once this file is created, it can be used in several ways.

      So the OS can be tailored by adjusting the configuration according to these files; this is like setting up preferences before using a program. I wonder how much flexibility different operating systems really give in terms of which features can be included or excluded.

    15. Most commonly, a computer system, when purchased, has an operating system already installed. For example, you may purchase a new laptop with Windows or macOS preinstalled. But suppose you wish to replace the preinstalled operating system or add additional operating systems. Or suppose you purchase a computer without an operating system. In these latter situations, you have a few options for placing the appropriate operating system on the computer and configuring it for use.

      This makes me think about how operating systems are tied to hardware out of the box. It’s interesting that we have flexibility to replace or add OSes, but it also raises questions about compatibility and setup—like dual-booting or clean installations.

    16. It is possible to design, code, and implement an operating system specifically for one specific machine configuration. More commonly, however, operating systems are designed to run on any of a class of machines with a variety of peripheral configurations.

      This highlights a trade-off in OS design: you can make a system that’s perfectly optimized for one specific machine, but most OS developers aim for flexibility so that the system works across many devices. I find it interesting to consider how this affects performance versus compatibility.

    17. Because Android can run on an almost unlimited number of hardware devices, Google has chosen to abstract the physical hardware through the hardware abstraction layer, or HAL. By abstracting all hardware, such as the camera, GPS chip, and other sensors, the HAL provides applications with a consistent view independent of specific hardware. This feature, of course, allows developers to write programs that are portable across different hardware platforms.

      The hardware abstraction layer (HAL) is a smart design: it hides the differences between devices so developers don’t have to worry about the specifics of every phone or tablet. This explains why the same Android app can run on so many different devices without modification.
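
      To picture how a HAL works, here is a purely hypothetical C sketch (not Android’s real HAL headers): applications call through a fixed table of operations, and each vendor plugs in its own implementation behind it.

      ```c
      /* Hypothetical HAL-style interface (illustrative only, not Android's API).
       * Higher layers call through this table of function pointers; each vendor
       * provides a concrete implementation for its own camera hardware. */
      #include <stdio.h>

      struct camera_hal {
          int  (*open_device)(void);
          int  (*capture_frame)(unsigned char *buf, int len);
          void (*close_device)(void);
      };

      /* One vendor's implementation (stubs for illustration). */
      static int  acme_open(void)                           { puts("acme: open");  return 0; }
      static int  acme_capture(unsigned char *buf, int len) { (void)buf; (void)len; return 0; }
      static void acme_close(void)                          { puts("acme: close"); }

      static const struct camera_hal acme_camera = { acme_open, acme_capture, acme_close };

      int main(void) {
          /* Application code sees only the abstract interface. */
          const struct camera_hal *cam = &acme_camera;
          unsigned char frame[64];
          cam->open_device();
          cam->capture_frame(frame, sizeof(frame));
          cam->close_device();
          return 0;
      }
      ```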

    18. Software designers for Android devices develop applications in the Java language, but they do not generally use the standard Java API. Google has designed a separate Android API for Java development. Java applications are compiled into a form that can execute on the Android RunTime ART, a virtual machine designed for Android and optimized for mobile devices with limited memory and CPU processing capabilities. Java programs are first compiled to a Java bytecode .class file and then translated into an executable .dex file. Whereas many Java virtual machines perform just-in-time (JIT) compilation to improve application efficiency, ART performs ahead-of-time (AOT) compilation. Here, .dex files are compiled into native machine code when they are installed on a device, from which they can execute on the ART. AOT compilation allows more efficient application execution as well as reduced power consumption, features that are crucial for mobile systems.

      It’s interesting that Android doesn’t use the standard Java API but instead has its own. The ahead-of-time (AOT) compilation in ART seems really clever: it makes apps run faster and saves battery life, which is essential for mobile devices with limited resources.

    19. The Android operating system was designed by the Open Handset Alliance (led primarily by Google) and was developed for Android smartphones and tablet computers. Whereas iOS is designed to run on Apple mobile devices and is close-sourced, Android runs on a variety of mobile platforms and is open-sourced, partly explaining its rapid rise in popularity. The structure of Android appears in Figure 2.18.

      Android’s open-source nature and its support for multiple mobile platforms help explain why it became popular so quickly, especially compared with the closed iOS ecosystem, which runs only on Apple devices.

    20. Apple has released the Darwin operating system as open source. As a result, various projects have added extra functionality to Darwin, such as the X-11 windowing system and support for additional file systems. Unlike Darwin, however, the Cocoa interface, as well as other proprietary Apple frameworks available for developing macOS applications, are closed.

      It’s interesting that Apple released Darwin as open source while keeping Cocoa and its other application frameworks closed. Opening the kernel layer let outside projects add functionality such as the X-11 windowing system and support for additional file systems, yet Apple still controls the proprietary pieces developers use to build macOS applications.

    21. In Section 2.8.3, we described how the overhead of message passing between different services running in user space compromises the performance of microkernels. To address such performance problems, Darwin combines Mach, BSD, the I/O kit, and any kernel extensions into a single address space. Thus, Mach is not a pure microkernel in the sense that various subsystems run in user space. Message passing within Mach still does occur, but no copying is necessary, as the services have access to the same address space.

      Darwin improves on microkernel performance by combining Mach, BSD, the I/O kit, and kernel extensions into a single address space. This design reduces the overhead of message passing between services because they share memory, so no copying is needed, which is a practical way to balance microkernel modularity with efficiency.

    22. Beneath the system-call interface, Mach provides fundamental operating-system services, including memory management, CPU scheduling, and interprocess communication (IPC) facilities such as message passing and remote procedure calls (RPCs). Much of the functionality provided by Mach is available through kernel abstractions, which include tasks (a Mach process), threads, memory objects, and ports (used for IPC). As an example, an application may create a new process using the BSD POSIX fork() system call. Mach will, in turn, use a task kernel abstraction to represent the process in the kernel.

      Mach handles core OS services such as memory management, CPU scheduling, and interprocess communication, using abstractions like tasks, threads, memory objects, and ports. For instance, when a program calls the POSIX fork() to create a new process, Mach represents that process internally as a task, showing how the kernel translates high-level calls into its own mechanisms.
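
      Here is the user-level half of that example as a small sketch: a plain POSIX fork() call. On Darwin the kernel would represent each resulting process with a Mach task, though that bookkeeping isn’t visible from this code.

      ```c
      /* Sketch: create a new process with the BSD/POSIX fork() call.
       * On Darwin, the kernel represents each resulting process with a Mach task. */
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/wait.h>

      int main(void) {
          pid_t pid = fork();
          if (pid < 0) {
              perror("fork");
              return 1;
          }
          if (pid == 0) {
              printf("child:  pid %d\n", (int)getpid());
          } else {
              printf("parent: pid %d created child %d\n", (int)getpid(), (int)pid);
              wait(NULL);   /* wait for the child to finish */
          }
          return 0;
      }
      ```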

    23. Whereas most operating systems provide a single system-call interface to the kernel—such as through the standard C library on UNIX and Linux systems—Darwin provides two system-call interfaces: Mach system calls (known as traps) and BSD system calls (which provide POSIX functionality). The interface to these system calls is a rich set of libraries that includes not only the standard C library but also libraries that provide networking, security, and programming language support (to name just a few).

      Darwin is unusual in offering two separate system-call interfaces: Mach traps for low-level kernel interactions and BSD calls for POSIX functionality. This dual interface, combined with extensive libraries for networking, security, and programming-language support, gives developers a lot of flexibility compared with typical single-interface systems.

    24. The iOS operating system is generally much more restricted to developers than macOS and may even be closed to developers. For example, iOS restricts access to POSIX and BSD APIs on iOS, whereas they are openly available to developers on macOS.

      iOS is much more locked down than macOS. Developers have very limited access to low-level APIs like POSIX and BSD, which are freely available on macOS. This highlights how Apple prioritizes security and control on its mobile devices.

    25. Because macOS is intended for desktop and laptop computer systems, it is compiled to run on Intel architectures. iOS is designed for mobile devices and thus is compiled for ARM-based architectures. Similarly, the iOS kernel has been modified somewhat to address specific features and needs of mobile systems, such as power management and aggressive memory management. Additionally, iOS has more stringent security settings than macOS.

      This section shows how the same underlying OS (Darwin) is adapted for different hardware. macOS targets Intel desktops and laptops, while iOS is optimized for ARM-based mobile devices, with extra features like aggressive power and memory management and stricter security settings.

    26. Kernel environment. This environment, also known as Darwin, includes the Mach microkernel and the BSD UNIX kernel. We will elaborate on Darwin shortly.

      This highlights the foundation of macOS and iOS. Darwin, with its Mach microkernel and BSD UNIX components, forms the core of the operating system, handling essential functions like process management, memory, and system calls.

    27. Core frameworks. This layer defines frameworks that support graphics and media, including QuickTime and OpenGL.

      This layer shows how the operating system provides built-in support for multimedia and graphics. QuickTime and OpenGL make it easier for developers to create media-rich applications without having to handle low-level graphics or video processing themselves.

    28. Application frameworks layer. This layer includes the Cocoa and Cocoa Touch frameworks, which provide an API for the Objective-C and Swift programming languages. The primary difference between Cocoa and Cocoa Touch is that the former is used for developing macOS applications, and the latter by iOS to provide support for hardware features unique to mobile devices, such as touch screens.

      It’s interesting how Cocoa and Cocoa Touch serve similar purposes but are tailored to different devices. Cocoa Touch supporting touch-specific hardware really highlights how APIs need to adapt to the features of the device, not just the operating system.

    29. User experience layer. This layer defines the software interface that allows users to interact with the computing devices. macOS uses the Aqua user interface, which is designed for a mouse or trackpad, whereas iOS uses the Springboard user interface, which is designed for touch devices.

      It’s fascinating to see how the user experience layer really suits the interface to the type of device. I hadn’t realized that macOS and iOS use completely different interfaces (Aqua vs. Springboard) even though they share underlying system components. It makes me think about how much design affects usability on different devices.

    30. In practice, very few operating systems adopt a single, strictly defined structure. Instead, they combine different structures, resulting in hybrid systems that address performance, security, and usability issues. For example, Linux is monolithic, because having the operating system in a single address space provides very efficient performance. However, it is also modular, so that new functionality can be dynamically added to the kernel. Windows is largely monolithic as well (again primarily for performance reasons), but it retains some behavior typical of microkernel systems, including providing support for separate subsystems (known as operating-system personalities) that run as user-mode processes. Windows systems also provide support for dynamically loadable kernel modules. We provide case studies of Linux and Windows 10 in Chapter 20 and Chapter 21, respectively. In the remainder of this section, we explore the structure of three hybrid systems: the Apple macOS operating system and the two most prominent mobile operating systems—iOS and Android.

      It’s interesting to see that most modern operating systems are really hybrids rather than strictly monolithic or microkernel designs. I hadn’t realized that Linux combines monolithic efficiency with modular flexibility, or that Windows mixes monolithic performance with some microkernel-like features. I wonder how these hybrid designs affect the speed and stability of the OS in real-world use.

    31. The idea of the design is for the kernel to provide core services, while other services are implemented dynamically, as the kernel is running. Linking services dynamically is preferable to adding new features directly to the kernel, which would require recompiling the kernel every time a change was made. Thus, for example, we might build CPU scheduling and memory management algorithms directly into the kernel and then add support for different file systems by way of loadable modules.

      I like how this paragraph explains the benefit of dynamic linking. It makes sense that the kernel handles core functions like CPU scheduling and memory management, while less essential features, like support for different file systems, can be added on the fly. It makes me curious: how does the OS make sure a newly loaded module doesn’t interfere with the core kernel services?

    32. Perhaps the best current methodology for operating-system design involves using loadable kernel modules (LKMs). Here, the kernel has a set of core components and can link in additional services via modules, either at boot time or during run time. This type of design is common in modern implementations of UNIX, such as Linux, macOS, and Solaris, as well as Windows.

      This part emphasizes how modern operating systems have evolved to be more flexible. I find it interesting that loadable kernel modules let the OS add or remove services without rebuilding the whole kernel. It makes me wonder: how does the system ensure stability when new modules are loaded at run time?
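
      A minimal sketch of what such a module looks like on Linux (assuming the kernel headers are installed; the module name and messages are my own):

      ```c
      /* hello_lkm.c -- minimal loadable kernel module sketch.
       * Build against the installed kernel headers, then:
       *   insmod hello_lkm.ko    # link the module into the running kernel
       *   rmmod  hello_lkm       # remove it again, no reboot or rebuild needed */
      #include <linux/module.h>
      #include <linux/init.h>
      #include <linux/kernel.h>

      static int __init hello_init(void)
      {
          pr_info("hello_lkm: loaded into the running kernel\n");
          return 0;
      }

      static void __exit hello_exit(void)
      {
          pr_info("hello_lkm: removed\n");
      }

      module_init(hello_init);
      module_exit(hello_exit);
      MODULE_LICENSE("GPL");
      MODULE_DESCRIPTION("Illustrative hello-world loadable kernel module");
      ```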

    33. Unfortunately, the performance of microkernels can suffer due to increased system-function overhead. When two user-level services must communicate, messages must be copied between the services, which reside in separate address spaces. In addition, the operating system may have to switch from one process to the next to exchange the messages. The overhead involved in copying messages and switching between processes has been the largest impediment to the growth of microkernel-based operating systems. Consider the history of Windows NT: The first release had a layered microkernel organization. This version's performance was low compared with that of Windows 95. Windows NT 4.0 partially corrected the performance problem by moving layers from user space to kernel space and integrating them more closely. By the time Windows XP was designed, Windows architecture had become more monolithic than microkernel. Section 2.8.5.1 will describe how macOS addresses the performance issues of the Mach microkernel.

      Microkernels often face performance challenges because communication between user-level services requires message copying and process switching. This overhead can slow the system down, as seen in the early versions of Windows NT, which used a layered microkernel architecture. Over time, performance concerns led Windows NT and XP to adopt a more monolithic approach, integrating layers into the kernel. Solutions like those in macOS (discussed later) aim to address these efficiency issues while retaining microkernel benefits.

    34. Another example is QNX, a real-time operating system for embedded systems. The QNX Neutrino microkernel provides services for message passing and process scheduling. It also handles low-level network communication and hardware interrupts. All other services in QNX are provided by standard processes that run outside the kernel in user mode.

      QNX demonstrates the microkernel design in the context of real-time and embedded systems. Its Neutrino microkernel manages essential functions like message passing, process scheduling, low-level network communication, and hardware interrupts, while all other services run as separate user-mode processes. This separation enhances reliability and simplifies system maintenance and updates.

    35. Perhaps the best-known illustration of a microkernel operating system is Darwin, the kernel component of the macOS and iOS operating systems. Darwin, in fact, consists of two kernels, one of which is the Mach microkernel. We will cover the macOS and iOS systems in further detail in Section 2.8.5.1.

      Darwin serves as an important example of a microkernel-based operating system. It forms the kernel foundation for macOS and iOS, incorporating the Mach microkernel as one of its core components. This highlights how modern operating systems can use microkernel principles while supporting complex, feature-rich environments.

    36. One benefit of the microkernel approach is that it makes extending the operating system easier. All new services are added to user space and consequently do not require modification of the kernel. When the kernel does have to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel. The resulting operating system is easier to port from one hardware design to another. The microkernel also provides more security and reliability, since most services are running as user—rather than kernel—processes. If a service fails, the rest of the operating system remains untouched.

      The microkernel design offers several advantages: it simplifies extending the operating system, since new services can be added in user space without changing the kernel. A smaller kernel is also easier to modify and to port across hardware platforms. Additionally, because most services run in user space, the system gains improved security and reliability; if a service crashes, it does not affect the rest of the operating system.

    37. The main function of the microkernel is to provide communication between the client program and the various services that are also running in user space. Communication is provided through message passing, which was described in Section 2.3.3.5. For example, if the client program wishes to access a file, it must interact with the file server. The client program and service never interact directly. Rather, they communicate indirectly by exchanging messages with the microkernel.

      In a microkernel system, the kernel’s primary role is to act as a communication hub between user-space programs and services. Instead of direct interaction, clients and services exchange messages via the microkernel. For instance, a program requesting file access communicates with the file server through the kernel, ensuring controlled, indirect interaction and maintaining the modular structure of the system.
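
      As a rough user-space analogy (using POSIX message queues instead of Mach ports, and made-up queue names), the client below never touches the “file server” directly; both sides only exchange messages through the kernel:

      ```c
      /* Analogy only: a "client" and a "file server" exchanging request/reply
       * messages through the kernel (POSIX message queues stand in for ports).
       * On Linux, link with -lrt if the build requires it. */
      #include <fcntl.h>
      #include <mqueue.h>
      #include <stdio.h>
      #include <sys/wait.h>
      #include <unistd.h>

      #define REQ_Q "/demo_fs_requests"   /* made-up queue names for the example */
      #define REP_Q "/demo_fs_replies"

      int main(void) {
          struct mq_attr attr = { .mq_flags = 0, .mq_maxmsg = 4, .mq_msgsize = 128 };
          mqd_t req = mq_open(REQ_Q, O_CREAT | O_RDWR, 0600, &attr);
          mqd_t rep = mq_open(REP_Q, O_CREAT | O_RDWR, 0600, &attr);
          if (req == (mqd_t)-1 || rep == (mqd_t)-1) { perror("mq_open"); return 1; }

          char buf[128];
          if (fork() == 0) {
              /* "File server": receives a request message and sends a reply. */
              mq_receive(req, buf, sizeof(buf), NULL);
              printf("server: got request \"%s\"\n", buf);
              mq_send(rep, "contents of notes.txt", 22, 0);
              return 0;
          }

          /* "Client": never calls the server directly; it only exchanges messages. */
          mq_send(req, "read notes.txt", 15, 0);
          mq_receive(rep, buf, sizeof(buf), NULL);
          printf("client: reply \"%s\"\n", buf);

          wait(NULL);
          mq_close(req); mq_close(rep);
          mq_unlink(REQ_Q); mq_unlink(REP_Q);
          return 0;
      }
      ```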

    38. We have already seen that the original UNIX system had a monolithic structure. As UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers at Carnegie Mellon University developed an operating system called Mach that modularized the kernel using the microkernel approach. This method structures the operating system by removing all nonessential components from the kernel and implementing them as user-level programs that reside in separate address spaces. The result is a smaller kernel. There is little consensus regarding which services should remain in the kernel and which should be implemented in user space. Typically, however, microkernels provide minimal process and memory management, in addition to a communication facility. Figure 2.15 illustrates the architecture of a typical microkernel.

      The microkernel approach emerged to address the complexity of large monolithic kernels like UNIX. By moving nonessential services out of the kernel into user-space programs, the kernel becomes smaller and easier to manage. Microkernels usually handle only the core tasks, such as minimal process and memory management and interprocess communication, while other services run separately, improving modularity and maintainability.

    39. Layered systems have been successfully used in computer networks (such as TCP/IP) and web applications. Nevertheless, relatively few operating systems use a pure layered approach. One reason involves the challenges of appropriately defining the functionality of each layer. In addition, the overall performance of such systems is poor due to the overhead of requiring a user program to traverse through multiple layers to obtain an operating-system service. Some layering is common in contemporary operating systems, however. Generally, these systems have fewer layers with more functionality, providing most of the advantages of modularized code while avoiding the problems of layer definition and interaction.

      While the layered approach offers clarity and modularity, it is rarely used in its pure form in operating systems. Defining precise responsibilities for each layer is difficult, and performance can suffer because service requests must pass through multiple layers. Modern systems often use a compromise: fewer, broader layers that retain modular benefits while reducing overhead and complexity.

    40. Each layer is implemented only with operations provided by lower-level layers. A layer does not need to know how these operations are implemented; it needs to know only what these operations do. Hence, each layer hides the existence of certain data structures, operations, and hardware from higher-level layers.

      Each layer acts like a “black box,” using services from lower layers without needing to know their internal workings. This abstraction hides implementation details, so higher layers focus only on what operations do, not how they are carried out, which simplifies design and enhances modularity.

    41. The main advantage of the layered approach is simplicity of construction and debugging. The layers are selected so that each uses functions (operations) and services of only lower-level layers. This approach simplifies debugging and system verification. The first layer can be debugged without any concern for the rest of the system, because, by definition, it uses only the basic hardware (which is assumed correct) to implement its functions. Once the first layer is debugged, its correct functioning can be assumed while the second layer is debugged, and so on. If an error is found during the debugging of a particular layer, the error must be on that layer, because the layers below it are already debugged. Thus, the design and implementation of the system are simplified.

      The layered approach makes building and debugging an operating system much easier. Each layer depends solely on the ones beneath it, allowing developers to test one layer at a time. Once a lower layer is verified to work properly, any error found while debugging a higher layer must lie in that layer, which simplifies identifying and resolving issues and enhances the overall reliability of the system.

    42. An operating-system layer is an implementation of an abstract object made up of data and the operations that can manipulate those data. A typical operating-system layer—say, layer M—consists of data structures and a set of functions that can be invoked by higher-level layers. Layer M, in turn, can invoke operations on lower-level layers.

      An operating-system layer functions as an abstract object, combining data with the operations that manipulate that data. Each layer (such as layer M) offers services to the layers above it while relying on the layers beneath it for lower-level operations. This structured method helps organize the system, improving its comprehensibility, maintainability, and adaptability.

    43. The monolithic approach is often known as a tightly coupled system because changes to one part of the system can have wide-ranging effects on other parts. Alternatively, we could design a loosely coupled system. Such a system is divided into separate, smaller components that have specific and limited functionality. All these components together comprise the kernel. The advantage of this modular approach is that changes in one component affect only that component, and no others, allowing system implementers more freedom in creating and changing the inner workings of the system.

      The contrast between tightly and loosely coupled designs is helpful. In a monolithic kernel, a change in one part can ripple through the whole system, whereas a loosely coupled kernel built from small components with specific, limited functionality confines changes to the component being modified, giving implementers more freedom to rework the system’s inner workings.

    44. Despite the apparent simplicity of monolithic kernels, they are difficult to implement and extend. Monolithic kernels do have a distinct performance advantage, however: there is very little overhead in the system-call interface, and communication within the kernel is fast. Therefore, despite the drawbacks of monolithic kernels, their speed and efficiency explains why we still see evidence of this structure in the UNIX, Linux, and Windows operating systems.

      Monolithic kernels are challenging to implement and extend because of their large, unified structure. However, they offer high performance, because system calls involve minimal overhead and communication within the kernel is fast. This speed advantage is why monolithic designs remain visible in operating systems like UNIX, Linux, and Windows despite their complexity.

    45. The Linux operating system is based on UNIX and is structured similarly, as shown in Figure 2.13. Applications typically use the glibc standard C library when communicating with the system call interface to the kernel. The Linux kernel is monolithic in that it runs entirely in kernel mode in a single address space, but as we shall see in Section 2.8.4, it does have a modular design that allows the kernel to be modified during run time.

      Linux, like UNIX, follows a largely monolithic structure but includes modular features. The kernel runs entirely in kernel mode within a single address space, yet it supports loadable modules, so kernel components can be added, removed, or updated while the system is running. Applications interact with the kernel through the glibc standard C library, which serves as the conduit for system calls.
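
      A small sketch of that relationship on Linux: the first call goes through the glibc wrapper, and the second reaches the same kernel service through the generic syscall() entry point (illustrative rather than portable):

      ```c
      /* Sketch: the same kernel service reached two ways on Linux --
       * via the glibc wrapper getpid() and via the raw system-call interface. */
      #include <stdio.h>
      #include <unistd.h>
      #include <sys/syscall.h>

      int main(void) {
          pid_t a = getpid();              /* glibc wrapper around the system call  */
          long  b = syscall(SYS_getpid);   /* direct trap through the syscall table */
          printf("glibc getpid(): %d, syscall(SYS_getpid): %ld\n", (int)a, b);
          return 0;
      }
      ```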

    46. An example of such limited structuring is the original UNIX operating system, which consists of two separable parts: the kernel and the system programs. The kernel is further separated into a series of interfaces and device drivers, which have been added and expanded over the years as UNIX has evolved. We can view the traditional UNIX operating system as being layered to some extent, as shown in Figure 2.12. Everything below the system-call interface and above the physical hardware is the kernel. The kernel provides the file system, CPU scheduling, memory management, and other operating-system functions through system calls. Taken in sum, that is an enormous amount of functionality to be combined into one single address space.

      The original UNIX OS illustrates a partially layered structure. While it is mostly monolithic, it separates the kernel from the system programs and further divides the kernel into interfaces and device drivers. The kernel handles core functions, like file systems, CPU scheduling, and memory management, through system calls, all within a single address space, demonstrating how even limited structuring can help organize a complex operating system.

    47. The simplest structure for organizing an operating system is no structure at all. That is, place all of the functionality of the kernel into a single, static binary file that runs in a single address space. This approach—known as a monolithic structure—is a common technique for designing operating systems.

      A monolithic structure is the simplest way to organize an operating system: all of the kernel’s functionality is compiled into a single static binary that runs in one address space. While straightforward, this design can make debugging, updating, and maintaining the system more difficult, because every part of the kernel is tightly interconnected. Many early operating systems used this approach.

    48. A system as large and complex as a modern operating system must be engineered carefully if it is to function properly and be modified easily. A common approach is to partition the task into small components, or modules, rather than have one single system. Each of these modules should be a well-defined portion of the system, with carefully defined interfaces and functions. You may use a similar approach when you structure your programs: rather than placing all of your code in the main() function, you instead separate logic into a number of functions, clearly articulate parameters and return values, and then call those functions from main().

      Modern operating systems are extremely complex, so breaking them into modules makes development and maintenance manageable. Every module handles a distinct, clearly defined function and interacts with other modules through explicit interfaces. This modular method resembles good programming practice, where code is separated into functions with specified parameters and return values instead of consolidating everything in main(). It enhances readability and maintainability and reduces errors.
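
      To echo the programming analogy with a made-up example, here is the modular style the passage describes: small functions with clear parameters and return values, with main() just wiring them together.

      ```c
      /* Tiny illustration of the modular style the passage describes:
       * each helper has a well-defined interface, and main() only composes them. */
      #include <stdio.h>

      /* well-defined component: convert a temperature */
      static double celsius_to_fahrenheit(double c) { return c * 9.0 / 5.0 + 32.0; }

      /* well-defined component: present a result */
      static void report(double c, double f) { printf("%.1f C = %.1f F\n", c, f); }

      int main(void) {
          double c = 100.0;
          report(c, celsius_to_fahrenheit(c));   /* main() wires the modules together */
          return 0;
      }
      ```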

    49. As is true in other systems, major performance improvements in operating systems are more likely to be the result of better data structures and algorithms than of excellent assembly-language code. In addition, although operating systems are large, only a small amount of the code is critical to high performance; the interrupt handlers, I/O manager, memory manager, and CPU scheduler are probably the most critical routines. After the system is written and is working correctly, bottlenecks can be identified and can be refactored to operate more efficiently.

      It’s a good reminder that better data structures and algorithms usually matter more than hand-tuned assembly code. I also like the point that only a small, critical portion of the code, such as the interrupt handlers, I/O manager, memory manager, and CPU scheduler, really drives performance, so tuning can wait until the system works correctly and the bottlenecks are known.

    50. The advantages of using a higher-level language, or at least a systems-implementation language, for implementing operating systems are the same as those gained when the language is used for application programs: the code can be written faster, is more compact, and is easier to understand and debug. In addition, improvements in compiler technology will improve the generated code for the entire operating system by simple recompilation. Finally, an operating system is far easier to port to other hardware if it is written in a higher-level language. This is particularly important for operating systems that are intended to run on several different hardware systems, such as small embedded devices, Intel x86 systems, and ARM chips running on phones and tablets.

      Using higher-level languages for operating-system development offers several key benefits: code can be written more quickly, is easier to read and debug, and is generally more compact. Compiler improvements automatically enhance the efficiency of the OS through simple recompilation. Additionally, high-level languages make porting the OS to different hardware platforms much easier, a crucial advantage for systems designed to run on diverse devices, from small embedded systems to Intel x86 PCs and ARM-based phones and tablets.

    51. Early operating systems were written in assembly language. Now, most are written in higher-level languages such as C or C++, with small amounts of the system written in assembly language. In fact, more than one higher-level language is often used. The lowest levels of the kernel might be written in assembly language and C. Higher-level routines might be written in C and C++, and system libraries might be written in C++ or even higher-level languages. Android provides a nice example: its kernel is written mostly in C with some assembly language. Most Android system libraries are written in C or C++, and its application frameworks—which provide the developer interface to the system—are written mostly in Java. We cover Android's architecture in more detail in Section 2.8.5.2.

      This passage traces the evolution of operating-system development from assembly language to higher-level languages like C and C++. Modern OS kernels often use a mix of languages: low-level routines for hardware control in assembly or C, system libraries in C or C++, and higher-level application frameworks in languages such as Java. Android is a clear example, showing how different layers of the OS stack are implemented in different languages to balance performance, portability, and developer accessibility.

    52. Policy decisions are important for all resource allocation. Whenever it is necessary to decide whether or not to allocate a resource, a policy decision must be made. Whenever the question is how rather than what, it is a mechanism that must be determined.

      This passage emphasizes the difference between policy and mechanism in resource management. A policy defines what should be done, for example deciding which process gets access to a resource, while a mechanism defines how the decision is carried out, such as the specific algorithm or procedure used to allocate the resource. Recognizing this separation helps in designing flexible and adaptable operating systems.

    53. We can make a similar comparison between commercial and open-source operating systems. For instance, contrast Windows, discussed above, with Linux, an open-source operating system that runs on a wide range of computing devices and has been available for over 25 years. The “standard” Linux kernel has a specific CPU scheduling algorithm (covered in Section 5.7.1), which is a mechanism that supports a certain policy. However, anyone is free to modify or replace the scheduler to support a different policy.

      This passage illustrates the separation of policy and mechanism in practice, using Windows and Linux as examples. In Linux, the CPU scheduler is the mechanism, and the policy it supports determines how CPU time is allocated. Unlike most commercial operating systems, Linux is open source, so users can modify or replace the scheduler to support a different policy. This flexibility is a key advantage of open-source systems.

    54. The separation of policy and mechanism is important for flexibility. Policies are likely to change across places or over time. In the worst case, each change in policy would require a change in the underlying mechanism. A general mechanism flexible enough to work across a range of policies is preferable. A change in policy would then require redefinition of only certain parameters of the system. For instance, consider a mechanism for giving priority to certain types of programs over others. If the mechanism is properly separated from policy, it can be used either to support a policy decision that I/O-intensive programs should have priority over CPU-intensive ones or to support the opposite policy.

      This passage highlights why separating policy from mechanism increases flexibility in an operating system. Policies often change depending on context or over time, and if mechanisms were tightly coupled to policies, any change would require redesigning the mechanism. By keeping mechanisms general and flexible, only the policy parameters need to be adjusted. For example, a priority mechanism can support different policies, such as giving preference to I/O-intensive programs or to CPU-intensive programs, without modifying the underlying mechanism itself.
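
      A hedged sketch of that idea (all the names are invented): the selection mechanism below picks whichever ready job a supplied policy function prefers, so swapping the policy never touches the mechanism.

      ```c
      /* Illustration of separating mechanism from policy (all names invented).
       * The mechanism selects a job using whatever comparison the policy supplies. */
      #include <stdio.h>

      struct job { const char *name; int io_requests; int cpu_bursts; };

      /* Policy A: prefer I/O-intensive jobs.  Policy B: prefer CPU-intensive jobs. */
      static int prefer_io (const struct job *a, const struct job *b) { return a->io_requests > b->io_requests; }
      static int prefer_cpu(const struct job *a, const struct job *b) { return a->cpu_bursts  > b->cpu_bursts;  }

      /* Mechanism: pick the "best" job; it never changes when the policy does. */
      static const struct job *pick(const struct job *jobs, int n,
                                    int (*better)(const struct job *, const struct job *)) {
          const struct job *best = &jobs[0];
          for (int i = 1; i < n; i++)
              if (better(&jobs[i], best))
                  best = &jobs[i];
          return best;
      }

      int main(void) {
          struct job jobs[] = { {"editor", 40, 5}, {"compiler", 3, 90} };
          printf("I/O-first policy picks: %s\n", pick(jobs, 2, prefer_io)->name);
          printf("CPU-first policy picks: %s\n", pick(jobs, 2, prefer_cpu)->name);
          return 0;
      }
      ```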

    55. One important principle is the separation of policy from mechanism. Mechanisms determine how to do something; policies determine what will be done. For example, the timer construct (see Section 1.4.3) is a mechanism for ensuring CPU protection, but deciding how long the timer is to be set for a particular user is a policy decision.

      This principle emphasizes distinguishing between mechanisms and policies. Mechanisms define how the task is performed, while policies define what is to be done. For instance, a timer is a mechanism that enforces CPU usage limits, but setting the duration of the timer for each user is a policy decision. This separation allows flexibility in system behavior without changing the underlying implementation.

    56. Specifying and designing an operating system is a highly creative task. Although no textbook can tell you how to do it, general principles have been developed in the field of software engineering, and we turn now to a discussion of some of these principles.

      Designing an operating system requires a high degree of creativity, as there is no single formula or textbook method for doing it. However, software engineering principles provide general guidelines and best practices that can help structure the design process, ensuring the system is reliable, efficient, and maintainable.

    57. There is, in short, no unique solution to the problem of defining the requirements for an operating system. The wide range of systems in existence shows that different requirements can result in a large variety of solutions for different environments. For example, the requirements for Wind River VxWorks, a real-time operating system for embedded systems, must have been substantially different from those for Windows Server, a large multiaccess operating system designed for enterprise applications.

      It’s a useful point that there is no single right set of requirements for an operating system; the environment drives the design. Comparing a real-time embedded system like Wind River VxWorks with an enterprise server OS like Windows Server makes it clear how different the goals, and therefore the resulting systems, can be.

    58. The first problem in designing a system is to define goals and specifications. At the highest level, the design of the system will be affected by the choice of hardware and the type of system: traditional desktop/laptop, mobile, distributed, or real time.

      The initial step in designing an operating system is defining clear goals and specifications. Key design decisions depend on the hardware platform and the type of system being developed, whether a traditional desktop or laptop, a mobile device, a distributed system, or a real-time system. These factors influence the performance, capabilities, and overall architecture of the OS.

    59. In sum, all of these differences mean that unless an interpreter, RTE, or binary executable file is written for and compiled on a specific operating system on a specific CPU type (such as Intel x86 or ARMv8), the application will fail to run. Imagine the amount of work that is required for a program such as the Firefox browser to run on Windows, macOS, various Linux releases, iOS, and Android, sometimes on various CPU architectures.

      Ultimately, an application can only run on a system if its interpreter, runtime environment (RTE), or compiled binary is designed for that specific operating system and CPU architecture. This explains why cross-platform applications, like the Firefox browser, require significant effort to support multiple OSes and hardware types, including Windows, macOS, Linux distributions, iOS, and Android, often across different processor architectures.

    60. Each operating system has a binary format for applications that dictates the layout of the header, instructions, and variables. Those components need to be at certain locations in specified structures within an executable file so the operating system can open the file and load the application for proper execution.

      Every operating system defines its own binary file format for applications, which specifies how the executable’s header, instructions, and variables are arranged. This structure ensures that the OS can correctly load the program into memory and execute it, making the binary format a critical factor in application compatibility across different systems.

    61. In theory, these three approaches seemingly provide simple solutions for developing applications that can run across different operating systems. However, the general lack of application mobility has several causes, all of which still make developing cross-platform applications a challenging task. At the application level, the libraries provided with the operating system contain APIs to provide features like GUI interfaces, and an application designed to call one set of APIs (say, those available from iOS on the Apple iPhone) will not work on an operating system that does not provide those APIs (such as Android). Other challenges exist at lower levels in the system, including the following.

      While interpreted languages, virtual machines, and cross-compilation can help applications run on multiple operating systems, achieving true cross-platform compatibility remains difficult. One major reason is that different operating systems offer different libraries and APIs, particularly for GUI and system-level features. An app designed for one OS (like iOS) may fail on another (like Android) if the expected APIs aren’t available. Additionally, lower-level differences in the system create further challenges for developers trying to make portable applications.

    63. 1. The application can be written in an interpreted language (such as Python or Ruby) that has an interpreter available for multiple operating systems. The interpreter reads each line of the source program, executes equivalent instructions on the native instruction set, and calls native operating system calls. Performance suffers relative to that for native applications, and the interpreter provides only a subset of each operating system's features, possibly limiting the feature sets of the associated applications.

      Applications developed in interpreted languages such as Python or Ruby can operate on various operating systems since the interpreter functions as an intermediary layer. The interpreter executes the program line by line, converting it into native instructions and calling the OS when needed. This enables compatibility across platforms but might reduce performance compared to native apps and could limit access to some features exclusive to specific operating systems.

    64. Based on our earlier discussion, we can now see part of the problem—each operating system provides a unique set of system calls. System calls are part of the set of services provided by operating systems for use by applications. Even if system calls were somehow uniform, other barriers would make it difficult for us to execute application programs on different operating systems. But if you have used multiple operating systems, you may have used some of the same applications on them. How is that possible?

      Each operating system has its own set of system calls, which makes it hard to run applications across different systems. Even if the system calls were standardized, differences in design and implementation would still cause compatibility issues. Yet we often see the same applications (like browsers or word processors) working across Windows, Linux, and macOS. This is possible because applications are usually written against APIs or cross-platform frameworks rather than directly against system calls, allowing them to be adapted to different operating systems.

    65. Object files and executable files typically have standard formats that include the compiled machine code and a symbol table containing metadata about functions and variables that are referenced in the program. For UNIX and Linux systems, this standard format is known as ELF (for Executable and Linkable Format). There are separate ELF formats for relocatable and executable files. One piece of information in the ELF file for executable files is the program's entry point, which contains the address of the first instruction to be executed when the program runs. Windows systems use the Portable Executable (PE) format, and macOS uses the Mach-O format.

      Executable and object files follow standard formats that include both the compiled machine code and metadata (like details about functions and variables). On UNIX and Linux systems, this format is called ELF (Executable and Linkable Format), with different versions for relocatable and executable files. ELF executables also specify the entry point, which is the address of the first instruction to run when the program starts. Other operating systems use different formats: Windows uses PE (Portable Executable), and macOS uses Mach-O.
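
      As a concrete illustration (a minimal sketch, assuming a Linux system with glibc's <elf.h> and a 64-bit ELF file), the program below reads an executable's ELF header and prints the e_type and e_entry fields that the passage describes; error handling is kept to a bare minimum.

      ```c
      /* Sketch: print the type and entry point stored in a 64-bit ELF header.
       * Assumes Linux/glibc; the file name is supplied on the command line. */
      #include <elf.h>
      #include <stdio.h>
      #include <stdlib.h>

      int main(int argc, char *argv[])
      {
          if (argc != 2) {
              fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
              return EXIT_FAILURE;
          }

          FILE *f = fopen(argv[1], "rb");
          if (!f) { perror("fopen"); return EXIT_FAILURE; }

          Elf64_Ehdr ehdr;                      /* the ELF header structure */
          if (fread(&ehdr, sizeof ehdr, 1, f) != 1) {
              perror("fread");
              return EXIT_FAILURE;
          }
          fclose(f);

          /* e_type distinguishes relocatable files (ET_REL) from executables
           * (ET_EXEC) and position-independent executables (ET_DYN). */
          printf("type : %u\n", (unsigned)ehdr.e_type);
          printf("entry: 0x%lx\n", (unsigned long)ehdr.e_entry);
          return 0;
      }
      ```

      Running it on a compiled program should print the same entry-point address that readelf -h reports.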

    66. Source files are compiled into object files that are designed to be loaded into any physical memory location, a format known as a relocatable object file. Next, the linker combines these relocatable object files into a single binary executable file. During the linking phase, other object files or libraries may be included as well, such as the standard C or math library (specified with the flag -lm).

      When programs are compiled, the source code is first transformed into relocatable object files, which can be loaded at any memory address. The linker then merges these object files into a single executable file, also including external object files or libraries when necessary (for example, the math library with -lm). This step produces a complete program that is ready to run.
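
      A minimal sketch of that compile-and-link flow, assuming gcc on a UNIX-like system (the file and program names are placeholders):

      ```c
      /* main.c -- uses sqrt() from the math library, so the link step needs -lm.
       * A typical two-step build might look like:
       *     gcc -c main.c -o main.o   (compile to a relocatable object file)
       *     gcc main.o -o app -lm     (link the object file with libm into an executable)
       */
      #include <math.h>
      #include <stdio.h>

      int main(void)
      {
          printf("sqrt(2) = %f\n", sqrt(2.0));   /* sqrt() is resolved at link time from libm */
          return 0;
      }
      ```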

    67. The view of the operating system seen by most users is defined by the application and system programs, rather than by the actual system calls. Consider a user's PC. When a user's computer is running the macOS operating system, the user might see the GUI, featuring a mouse-and-windows interface. Alternatively, or even in one of the windows, the user might have a command-line UNIX shell. Both use the same set of system calls, but the system calls look different and act in different ways. Further confusing the user view, consider the user dual-booting from macOS into Windows. Now the same user on the same hardware has two entirely different interfaces and two sets of applications using the same physical resources. On the same hardware, then, a user can be exposed to multiple user interfaces sequentially or concurrently.

      Users primarily interact with the operating system through interfaces (GUIs or command lines) and applications, rather than through system calls directly. For example, macOS users can interact through the graphical interface or a UNIX shell; both use the same system calls, although they appear quite different. Dual-booting macOS and Windows shows how identical hardware can produce distinctly different user experiences and environments, despite relying on the same underlying system resources.

    68. Program loading and execution. Once a program is assembled or compiled, it must be loaded into memory to be executed. The system may provide absolute loaders, relocatable loaders, linkage editors, and overlay loaders. Debugging systems for either higher-level languages or machine language are needed as well.

      Program loading and execution services handle the process of getting compiled programs into memory so they can run. These include loaders (absolute, relocatable, overlay) and tools like linkage editors. Debugging support is also part of this category, helping programmers test and fix errors in either high-level code or machine language.

    69. File management. These programs create, delete, copy, rename, print, list, and generally access and manipulate files and directories.

      File management services provide everyday tools for working with files and directories. They let users create, delete, copy, rename, print, and list files, making it easier to organize and manage data without needing to use low-level system calls directly.

    70. Another aspect of a modern system is its collection of system services. Recall Figure 1.1, which depicted the logical computer hierarchy. At the lowest level is hardware. Next is the operating system, then the system services, and finally the application programs. System services, also known as system utilities, provide a convenient environment for program development and execution. Some of them are simply user interfaces to system calls. Others are considerably more complex. They can be divided into these categories:

      This part explains where system services fit in the computer hierarchy. They sit between the operating system and the application programs, making it easier for developers and users to interact with the system. Some services are simple tools that act as front ends for system calls, while others are more advanced and provide broader functionality. Essentially, system services (or utilities) give programmers a convenient way to develop and run programs without dealing directly with low-level details.

    71. Typically, system calls providing protection include set_permission() and get_permission(), which manipulate the permission settings of resources such as files and disks. The allow_user() and deny_user() system calls specify whether particular users can—or cannot—be allowed access to certain resources. We cover protection in Chapter 17 and the much larger issue of security—which involves using protection against external threats—in Chapter 16.

      This paragraph highlights how operating systems employ certain system calls to manage protection and access control. Functions such as set_permission() and get_permission() manage permissions for resources, whereas allow_user() and deny_user() specify which users can access particular files or devices. This indicates that protection is meant to manage internal access rights, whereas security (addressed later) focuses on defending against external threats.
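
      The call names in the passage are generic; as a rough sketch, the POSIX calls stat() and chmod() play the role of get_permission() and set_permission() on UNIX-like systems (the file name below is just a placeholder):

      ```c
      /* Sketch: read and then tighten a file's permission bits. */
      #include <stdio.h>
      #include <sys/stat.h>

      int main(void)
      {
          const char *path = "data.txt";            /* hypothetical resource */

          struct stat st;                           /* "get_permission()" analogue */
          if (stat(path, &st) == 0)
              printf("current mode: %o\n", (unsigned)(st.st_mode & 0777));

          /* "set_permission()" analogue: owner read/write only, which acts a bit
           * like deny_user() for everyone except the owner. */
          if (chmod(path, S_IRUSR | S_IWUSR) != 0)
              perror("chmod");

          return 0;
      }
      ```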

    72. Protection provides a mechanism for controlling access to the resources provided by a computer system. Historically, protection was a concern only on multiprogrammed computer systems with several users. However, with the advent of networking and the Internet, all computer systems, from servers to mobile handheld devices, must be concerned with protection.

      Why has protection become an important concern for all computer systems, not just multiprogrammed systems with multiple users?

    73. Both of the models just discussed are common in operating systems, and most systems implement both. Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided. It is also easier to implement than is shared memory for intercomputer communication. Shared memory allows maximum speed and convenience of communication, since it can be done at memory transfer speeds when it takes place within a computer. Problems exist, however, in the areas of protection and synchronization between the processes sharing memory.

      What are the main advantages and disadvantages of using message passing versus shared memory for interprocess communication, and in what situations is each model more suitable?

    74. There are two common models of interprocess communication: the message-passing model and the shared-memory model. In the message-passing model, the communicating processes exchange messages with one another to transfer information. Messages can be exchanged between the processes either directly or indirectly through a common mailbox. Before communication can take place, a connection must be opened. The name of the other communicator must be known, be it another process on the same system or a process on another computer connected by a communications network. Each computer in a network has a host name by which it is commonly known. A host also has a network identifier, such as an IP address. Similarly, each process has a process name, and this name is translated into an identifier by which the operating system can refer to the process. The get_hostid() and get_processid() system calls do this translation. The identifiers are then passed to the general-purpose open() and close() calls provided by the file system or to specific open_connection() and close_connection() system calls, depending on the system's model of communication. The recipient process usually must give its permission for communication to take place with an accept_connection() call. Most processes that will be receiving connections are special-purpose daemons, which are system programs provided for that purpose. They execute a wait_for_connection() call and are awakened when a connection is made. The source of the communication, known as the client, and the receiving daemon, known as a server, then exchange messages by using read_message() and write_message() system calls. The close_connection() call terminates the communication.

      Explain the steps involved in interprocess communication using the message-passing model. Include the roles of the client, server (daemon), and system calls such as open_connection(), accept_connection(), read_message(), and close_connection().
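
      The call names in the passage are generic, but POSIX sockets are one concrete realization of the same pattern. The sketch below (assuming a UNIX-like system; port 5000 is an arbitrary choice) shows the server/daemon side, with comments mapping each step to the textbook's terminology:

      ```c
      /* Sketch of a tiny "daemon" that waits for one connection, reads a
       * message, replies, and closes -- error handling kept minimal. */
      #include <arpa/inet.h>
      #include <netinet/in.h>
      #include <stdio.h>
      #include <sys/socket.h>
      #include <unistd.h>

      int main(void)
      {
          int srv = socket(AF_INET, SOCK_STREAM, 0);     /* create the endpoint */

          struct sockaddr_in addr = {0};
          addr.sin_family = AF_INET;
          addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
          addr.sin_port = htons(5000);
          bind(srv, (struct sockaddr *)&addr, sizeof addr);
          listen(srv, 1);

          printf("daemon waiting for a connection...\n");
          int client = accept(srv, NULL, NULL);          /* wait_for_connection() */

          char buf[128];
          ssize_t n = recv(client, buf, sizeof buf - 1, 0);   /* read_message() */
          if (n > 0) {
              buf[n] = '\0';
              printf("client says: %s\n", buf);
              send(client, "ack", 3, 0);                 /* write_message() */
          }

          close(client);                                 /* close_connection() */
          close(srv);
          return 0;
      }
      ```

      A client would follow the mirror-image steps: create a socket, connect() to the daemon's address, then exchange messages with send() and recv().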

    75. Many operating systems provide a time profile of a program to indicate the amount of time that the program executes at a particular location or set of locations. A time profile requires either a tracing facility or regular timer interrupts. At every occurrence of the timer interrupt, the value of the program counter is recorded. With sufficiently frequent timer interrupts, a statistical picture of the time spent on various parts of the program can be obtained.

      Many operating systems can track how much time a program spends running at different points in its code. This is called a time profile. To create one, the system either traces the program or uses regular timer interrupts. Every time the timer interrupt occurs, the system records the program's current position (the program counter). By sampling frequently enough, it can build a statistical view of which parts of the program take the most time to execute.
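
      A minimal sketch of this sampling idea, assuming Linux on x86-64 with glibc (the REG_RIP register index is platform-specific, and the workload function is just a placeholder):

      ```c
      /* Sketch: sample the program counter on each SIGPROF timer tick. */
      #define _GNU_SOURCE
      #include <signal.h>
      #include <stdio.h>
      #include <sys/time.h>
      #include <ucontext.h>

      #define MAX_SAMPLES 10000
      static volatile sig_atomic_t nsamples = 0;
      static void *samples[MAX_SAMPLES];

      static void on_tick(int sig, siginfo_t *info, void *uc)
      {
          (void)sig; (void)info;
          if (nsamples < MAX_SAMPLES) {
              ucontext_t *ctx = (ucontext_t *)uc;
              /* Record where the program was executing when the timer fired. */
              samples[nsamples++] = (void *)ctx->uc_mcontext.gregs[REG_RIP];
          }
      }

      static double busy_work(long n)            /* something worth profiling */
      {
          double x = 0.0;
          for (long i = 1; i <= n; i++)
              x += 1.0 / (double)i;
          return x;
      }

      int main(void)
      {
          struct sigaction sa = {0};
          sa.sa_sigaction = on_tick;
          sa.sa_flags = SA_SIGINFO | SA_RESTART;
          sigaction(SIGPROF, &sa, NULL);

          /* Fire SIGPROF every 10 ms of consumed CPU time. */
          struct itimerval tv = { {0, 10000}, {0, 10000} };
          setitimer(ITIMER_PROF, &tv, NULL);

          printf("result = %f\n", busy_work(200000000L));
          printf("collected %d PC samples; a real profiler would map these "
                 "addresses back to source locations\n", (int)nsamples);
          return 0;
      }
      ```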

    76. Many system calls exist simply for the purpose of transferring information between the user program and the operating system. For example, most systems have a system call to return the current time() and date(). Other system calls may return information about the system, such as the version number of the operating system, the amount of free memory or disk space, and so on.

      Many system calls exist just to pass information back and forth between the program and the operating system. For example, many systems provide a call to retrieve the current time and date. Additional calls can return information about the system, such as the operating-system version, the amount of available memory or disk space, and other related details.
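
      A small sketch of such "information transfer" calls on a UNIX-like system (note that _SC_PHYS_PAGES is a Linux/glibc extension, so treat that part as an assumption):

      ```c
      /* Sketch: query the current time and a rough figure for physical memory. */
      #include <stdio.h>
      #include <time.h>
      #include <unistd.h>

      int main(void)
      {
          time_t now = time(NULL);                   /* current time and date */
          printf("now: %s", ctime(&now));            /* ctime() adds its own newline */

          long pages = sysconf(_SC_PHYS_PAGES);      /* total physical memory pages */
          long psize = sysconf(_SC_PAGESIZE);        /* size of one page in bytes */
          if (pages > 0 && psize > 0)
              printf("physical memory: ~%ld MB\n", (pages / 1024) * (psize / 1024));

          return 0;
      }
      ```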

    77. Once the device has been requested (and allocated to us), we can read(), write(), and (possibly) reposition() the device, just as we can with files. In fact, the similarity between I/O devices and files is so great that many operating systems, including UNIX, merge the two into a combined file–device structure. In this case, a set of system calls is used on both files and devices. Sometimes, I/O devices are identified by special file names, directory placement, or file attributes.

    78. The various resources controlled by the operating system can be thought of as devices. Some of these devices are physical devices (for example, disk drives), while others can be thought of as abstract or virtual devices (for example, files). A system with multiple users may require us to first request() a device, to ensure exclusive use of it. After we are finished with the device, we release() it. These functions are similar to the open() and close() system calls for files. Other operating systems allow unmanaged access to devices. The hazard then is the potential for device contention and perhaps deadlock, which are described in Chapter 8.

      The resources that an operating system manages can be thought of as devices. Some of these are physical, like disk drives, while others are abstract or virtual, like files. On systems with multiple users, a program may need to request() a device to ensure exclusive access, and then release() it when finished. These actions are similar to open() and close() for files. Some operating systems let programs access devices without this kind of control, but doing so can lead to problems like device contention or deadlock, which we'll discuss in Chapter 8.
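
      A tiny sketch of the file–device unification mentioned above, on a UNIX-like system: the device /dev/urandom is opened, read, and closed with the same calls used for ordinary files.

      ```c
      /* Sketch: read a few bytes from a device using ordinary file system calls. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = open("/dev/urandom", O_RDONLY);   /* device named by a special file */
          if (fd < 0) { perror("open"); return 1; }

          unsigned char bytes[4];
          if (read(fd, bytes, sizeof bytes) == (ssize_t)sizeof bytes)
              printf("4 random bytes: %02x %02x %02x %02x\n",
                     bytes[0], bytes[1], bytes[2], bytes[3]);

          close(fd);
          return 0;
      }
      ```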

    79. We may need these same sets of operations for directories if we have a directory structure for organizing files in the file system. In addition, for either files or directories, we need to be able to determine the values of various attributes and perhaps to set them if necessary. File attributes include the file name, file type, protection codes, accounting information, and so on. At least two system calls, get_file_attributes() and set_file_attributes(), are required for this function. Some operating systems provide many more calls, such as calls for file move() and copy(). Others might provide an API that performs those operations using code and other system calls, and others might provide system programs to perform the tasks. If the system programs are callable by other programs, then each can be considered an API by other system programs.

      We often need similar operations for directories as we do for files, especially when using a directory structure to organize files. For both files and directories, it's important to be able to read or modify their attributes when necessary. Attributes can include things like the name, type, access permissions, and accounting information. To handle this, operating systems usually provide system calls such as get_file_attributes() and set_file_attributes(). Some systems go further, offering extra calls for tasks like moving or copying files. In other cases, these actions are handled through APIs or system programs. If other programs can call these system programs, they effectively act as APIs themselves.
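
      As a rough sketch (POSIX assumed; the file names are placeholders), stat() plays the role of get_file_attributes(), and rename() is one way a "move" operation can be built from other system calls:

      ```c
      /* Sketch: read a file's attributes, then move it with rename(). */
      #include <stdio.h>
      #include <sys/stat.h>

      int main(void)
      {
          struct stat st;
          if (stat("notes.txt", &st) == 0) {         /* get_file_attributes() analogue */
              printf("size : %lld bytes\n", (long long)st.st_size);
              printf("mode : %o\n", (unsigned)(st.st_mode & 0777));
              printf("links: %ld\n", (long)st.st_nlink);
          }

          /* A simple "move" within one file system is just a rename. */
          if (rename("notes.txt", "archive/notes.txt") != 0)
              perror("rename");

          return 0;
      }
      ```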

    1. As generative AI becomes further ingrained into higher education, it’s important to be intentional about how we navigate its complexities.

      This is something you really have to put deep thought into, because AI is spreading all over the world, especially in schools. Yeah, you can find it beneficial, but you can also receive misinformation from it. In addition, fully relying on AI will not help you in the future, for many obvious reasons.

    2. While AI is a powerful tool, the human touch remains crucial. By working together, we can make the most of what AI offers while mitigating its known limitations.

      Overall, we should not be against AI, but that does not mean we should be 100% reliant on it. Humans are still the center of all creation; we should count on each other and figure things out with one another's assistance.

    3. Problems with bias in AI systems predate generative AI tools. For example, in the Gender Shades project, Buolamwini (2017) tested AI-based commercial gender classification systems and found significant disparities in accuracy across different genders and skin types. These systems performed better on male and lighter-skinned faces than others. The largest disparity was found in darker-skinned females, where error rates were notably high.

      This is such a disaster and very terrible, because it demonstrates that artificial intelligence may have a negative effect on societies. It tends to be very unjust.

    4. Generative AI models are trained on vast amounts of internet data. This data, while rich in information, contains both accurate and inaccurate content, as well as societal and cultural biases. Since these models mimic patterns in their training data without discerning truth, they can reproduce any

      This part of the text highlights why AI might be inaccurate and biased in most cases, stating that it is trained on a large amount of information that may be both accurate and inaccurate. It also emphasizes the purpose of the article.

    5. . These generative AI biases can have real-world consequences. For instance, adding biased generative AI to “virtual sketch artist” software used by police departments could “put already over-targeted populations at an even increased risk of harm ranging from physical injury to unlawful imprisonment”

      It is really overwhelming to realize how much damage biased and inaccurate AI could cause if we used it in our society.

    1. temperature regimes of northern Germany are not comparable to those of Eastern Australia (e.g. Buschbaum et al., 2009; Cole, 2010) where mussels are likely acclimated to warmer environments

      This section shows how ocean acidification and warming can shift community structures, but it also points out that species responses vary by region. For example, mussels in Eastern Australia may be more tolerant of heat than those in northern Germany because they’re already acclimated to warmer waters. This suggests that local adaptation plays a big role in how climate change impacts ecosystems. Do you think local acclimation is enough to protect some species from climate change, or will ocean acidification eventually outweigh those regional advantages?

    2. 2.4. Data analysis

      After reading this section, I learned that climate change alters not just the performance of mussels but also the active choices of colonizing infauna. That made me wonder whether this shift could create feedback loops that further advantage invasive mussels over natives.

    3. Temperature also affected the behavioural preferences of the infauna associated with mussels. Polychaetes, crustaceans, and molluscs altered their behaviour to colonise the habitat created by one species of mussel to another. This altered behavioural preference of infauna can be driven by habitat-specific cues and the ability of infauna to make habitat choices

      The authors talked about some behavioral changes in the infauna associated with the mussels. Would these behavioral changes have a positive or negative effect on them or on other species in their environment?

    4. After the 4-week acclimation period, the mussels were defaunated by carefully removing all infauna and separating adult mussels (>1 cm) into 10-cm-diameter clumps (Cole, 2010).

      Would we have seen different results if the acclimation period had been longer for the mussels? Or would a longer or shorter acclimation period not really affect the mussels or the results very much?

    5. The outdoor experiment was performed in a purpose-built facility (Pereira et al., 2019) at the Sydney Institute of Marine Science (SIMS), Chowder Bay, Sydney Harbour, New South Wales, Australia. The experiment was performed during the summer peak recruitment period of marine invertebrates in Sydney Harbour.

      Would the researchers get similar or the same results if they did not perform the experiment during the peak recruitment period? How different would the results be if the experiment were run during a low recruitment period?

    6. Previous studies have shown that the loss of a biogenic habitat in an ecosystem can be functionally replaced (or the loss of function is slowed to some extent) by another habitat-forming organism (Nagelkerken et al., 2016; Sunday et al., 2017).

      What would happen if another habitat-forming organism were introduced to the area? Would it benefit the overall ecology of the area, or would it prove detrimental to the organisms that already exist there? Would it be ethical to do this in order to prevent the replacement of a habitat?

    7. For example, under acidification, fleshy seaweeds outcompete calcareous species

      How would this potential change impact the organisms that rely on the calcareous species for food or protection?

    8. Molluscs actively chose to colonise T. hirsuta and actively avoided M. galloprovincialis, regardless of warming or pCO2 levels (Table 1).

      What caused molluscs to choose to colonize T. hirsuta regardless of warming or pCO2 levels? What deterred them from colonizing M. galloprovincialis?

    9. The native mussel T. hirsuta grew more under warming (Fig. 1; ANOVA Species × Temperature F1,32 = 6.13, P < 0.05; Supplementary Table 2). In contrast, M. galloprovincialis grew the same at ambient and elevated temperatures (Fig. 1; Supplementary Table 2). There was no effect of elevated pCO2 on growth in either of the mussel species (ANOVA CO2 F1,32 = 0.53, P > 0.05; Supplementary Table 2).

      The authors present an interesting point here. The research suggests that temperature is the primary driver of the difference in growth between the native T. hirsuta and M. galloprovincialis. Based on these results, would the findings be consistent in another shellfish species with the same temperature tolerance and sensitivity to carbon dioxide?

  2. learn-us-east-1-prod-fleet01-beaker-xythos.content.blackboardcdn.com
    1. “Trade and Trust” addresses the misconception among some libertarians that the institutional infrastructure needed to support specialization and trade is minimal.

      Stanford focuses on class divisions, while Kling focuses on specialization and trust. Both look at the big structures that make economies work, just in different ways.

    2. Discovering new patterns of sustainable specialization and trade is more complex and subtle and less mechanical than what is assumed by the Keynesian and monetarist traditions.

      Finance is basically extended trust; crises happen when that trust disappears. Stanford focuses on class and inequality, but Kling focuses on trust and specialization. Both show important parts of how the economy works, just from different angles.

    3. If trade entails trust among strangers, then financial intermediation entails trust over time. If people lose trust in financial intermediaries, then financial intermediation can decline precipitously. That sharp decline can have a broad effect on the structure of production in the economy

      Trust is key. Without it, people can’t rely on each other’s work. What happens to economies when trust breaks down, like during financial crises...?

    4. Unfortunately, once they have taught the simple example of comparative advantage, with two producers and two tasks, most economists are done with the issue of specialization. Instead, textbooks want to focus on scarcity and choice. Often, the student will read that economics is about the allocation of scarce resources given our unlimited wants. Or he or she may be told that economics is the study of how people make rational choices among competing priorities. Scarcity and choice are certainly important concepts, but making them the central focus can lead to economic analysis that is simplistic and mechanistic. In fact, the approach to economics that took hold after World War II treats the economy as a machine governed by equations.

      Kling says specialization and cooperation are more important than just scarcity for understanding how economies grow.

    5. Look at the list of ingredients in the cereal. Those ingredients had to be refined and shipped to the cereal manufacturer. Again, those processes required many machines, which in turn had to be manufactured. The cereal grains and other ingredients had to be grown, harvested, and processed. Machines were involved in those processes, and those machines had to be manufactured.

      Even simple things depend on lots of people working together. Does this mean understanding how we all rely on each other could change how we see value in the economy?

    6. that economists have lost the art of critical thinking.

      Kling thinks economists focus too much on models and not enough on real-world problems. Makes me wonder what ways of thinking could make economics more useful for regular people.

    7. “Policy in Practice” corrects the misconception that diagnosis and treatment of “market failure” is straightforward. This section looks at challenges facing economists and policymakers trying to use the theory of market failure. The example I use is housing finance policy during the run-up to the financial crisis of 2008. The policy process was overwhelmed by the complexity of the specialization that emerged in housing finance. Moreover, the basic thrust of policy was determined by interest-group influence. The lesson is that a very large gap exists between the economic theory of public goods and the practical execution of policy.

      Why does the author emphasize the disconnect between economic theory and policy implementation, particularly in the context of market failure? Market failure provides a clean and logical framework for identifying inefficiencies, yet its application in real-world policymaking is far more complex. This suggests that economic models often fail to account for the institutional and political realities that shape policy decisions. How should economists adapt their models to reflect these challenges? How do they reconcile the gap between idealized models and the messy, politicized nature of policy execution?

    8. He knows that his breakfast depends upon workers on the coffee plantations of Brazil, the citrus groves of Florida, the sugar fields of Cuba, the wheat farms of the Dakotas, the dairies of New York; that it has been assembled by ships, railroads, and trucks, has been cooked with coal from Pennsylvania in utensils made of aluminum, china, steel, and glass.

      To me this symbolizes how one nation's economic growth depends on many other nations through trade.

    9. yourself mine the metals, process them, combine them, and shape them. To mine the metals, you would have to be able to locate them. You would need machinery, which you would have to build yourself

      To gather the tools and raw materials, machinery was needed. Machinery is deeply integrated into industry; it's an industrial technology that has improved productivity. Stanford states, "The invention of steam power, semi-automated spinning and weaving machines, and other early industrial technologies dramatically increased productivity" (Stanford, pg 44). The scale of industrial technology surpasses an individual worker's capacity. Kling uses specialization to explain why individuals are inherently dependent on a broader system, while Stanford goes into why capitalists are necessary to get factories working.

    10. “Filling in Frameworks” wrestles with the misconception that economics is a science. This section looks at the difficulties that economists face in trying to adopt scientific methods. I suggest that economics differs from the natural sciences in that we have to rely much less on verifiable hypotheses and much more on hard-to-verify interpretative frameworks. Economic analysis is a challenge, because judging interpretive frameworks is actually harder than verifying scientific hypotheses.

      I find it interesting that Kling has pointed out in this section the difference between natural sciences and the complexity of economics. Noting that economics is largely interpretative and not always subject to the same verifiable scientific methods of study.

    11. “Instructions and Incentives” deals with the misconception that economic activity is directed by planners. This section explains that although people within a firm are guided to tasks through instruction from managers, the economy as a whole is not coordinated that way. Instead, the price system functions as the coordination mechanism.

      Stanford wants people to learn economics and focus on deciding what's best for them instead of just listening to the experts, whereas Kling sees the price system as the main mechanism that organizes the economy.

    12. Scarcity and choice are certainly important concepts, but making them the central focus can lead to economic analysis that is simplistic and mechanistic. In fact, the approach to economics that took hold after World War II treats the economy as a machine governed by equations. Textbooks using that approach purport to offer a repair manual, with policy tools to fix the economic machine when something goes wrong. The mechanistic metaphor is inappropriate and even dangerous. A better metaphor would be that of a rainforest. The economy is a complex, evolving system

      Kling's introduction highlights that, whether we want it or not, the economy relies on many people and we cannot avoid that. He has a social view of specialization and trade, even when there is no physical contact. Stanford's view is that individuals work with what they have, knowing we don't have unlimited resources, and focus more on what is needed most. Both Kling's and Stanford's readings address production, society, consumers, and the factors that make up the economy.

    13. Even more striking is the fact that almost everything you consume is something you could not possibly produce. Your daily life depends on the cooperation of hundreds of millions of other people

      Kling states that even though we are able to consume almost anything, we are not able to produce it ourselves. This leads me to wonder if there is anything I use or eat every day that I could make on my own. "Your daily life depends on millions of other people." Even if I were to consume a meal that I made from scratch, getting the ingredients to me took hours of labor and machines. However, can the people who produce something consume their own work? If a person was part of a car-making project and bought the car afterward, is that considered consuming your own production? You did not make the whole car, but you played a big part in the process. If we rely on millions of people to make just one item, isn't everyone who is involved producing something together, leading to consuming your production as a whole? I argue against his framing, since his logic takes the focus away from the workers behind the production process.

    14. If trade entails trust among strangers, then financial intermediation entails trust over time. If people lose trust in financial intermediaries, then financial intermediation can decline precipitously. That sharp decline can have a broad effect on the structure of production in the economy

      Kling says that trade requires trust among strangers: "If trade entails trust among strangers, then financial intermediation entails trust over time." He mentions that if trust in the financial system is lost, it will break the flow of production in an economy. What would happen if we suddenly stopped putting trust in banks or the financial systems that make trade successful? How would employment, businesses, and production be affected?

    15. Instead of headlines, the “crawl” on the TV lists all of the tasks and people needed to produce your breakfast. Your cereal was manufactured in a factory that had a variety of workers and many machines. People had to manage the factory. Organization of the firm required many functions in finance and administration. First, however, people had to build the factory

      Kling speaks about the amount of work and tools that go into an item that looks simple to make. He says, "We carry on our lives not really conscious of the complexity of that specialization." The steps and processes line up to produce just a bowl of cereal. What would happen if a step falls out of line or breaks down? Would it slow down the production process? And what if, over time, there are fewer workers to keep production moving?

    16. When patterns of specialization become unsustainable, the individuals affected can face periods of unemployment. They are like soldiers waiting for new orders, except that the orders come not from a commanding general but from the decentralized actions of many entrepreneurs testing ideas in search of profit.

      When old job patterns no longer make sense, workers are like soldiers waiting for new orders, except instead of commanding officers they are getting them from entrepreneurs experimenting. That means their livelihood is in the hands of people who are just testing things, with no certainty that a new role will be created. It raises the question of whether this decentralized adjustment makes unemployment longer or less predictable.

    17. Look at the list of ingredients in the cereal. Those ingredients had to be refined and shipped to the cereal manufacturer. Again, those processes required many machines, which in turn had to be manufactured. The cereal grains and other ingredients had to be grown, harvested, and processed. Machines were involved in those processes, and those machines had to be manufactured.

      Machines are such a big part of industry. They are used for transportation and for manufacturing products, such as cereal, which comes from a chain of production. Every ingredient had to be grown, processed, and transported using machines, and the machines themselves had to be designed and built to keep the industry running. If machines weren't involved at all, would the industry be able to stay afloat?

  3. learn-us-east-1-prod-fleet01-beaker-xythos.content.blackboardcdn.com
    1. Economists who ignore the key features of capitalism will be less able to understand and explain how capitalism actually works. So purely from a scientific perspective, it’s important to be frank about what we are dealing with

      Stanford says economists are biased; Kling says they rely too much on models. Both agree economists can be out of touch with real life.

    2. Of course, capitalism can change its “look” a lot, while still preserving its core, underlying features. Indeed, one of the most impressive features of capitalism is its flexibility: its capacity to change and adapt. Many economists and commentators have argued that capitalism today is not at all like capitalism in its early days (back in the soot and grime of the Industrial Revolution). Here are some of the terms used to describe modern capitalism, implying (falsely) that it’s a whole “new” system:

      Capitalism can change how it looks over time, but its core hasn’t really changed. Even though services are bigger than goods now, information moves faster, private companies still dominate, most workers don’t own the businesses they work for, and inequality is still huge. So while the system adapts and evolves, the basic rules of capitalism stay the same.

    3. So the economy is about work: organizing it, doing it, and dividing up and making use of its final output. And in our work, one way or another, we always work (directly or indirectly) with other people. The link between the economy and society goes two ways. The economy is a fundamentally social arena. But society as a whole depends strongly on the state of the economy. Politics, culture, religion, and international affairs are all deeply influenced by the progress of the economy. Governments are re-elected or turfed from office depending on the state of the economy. Family life is organized around the demands of work (both inside and outside the home). Being able to comfortably support oneself and one’s family is a central determinant of happiness.

      Why does Stanford structure the economy around work? It reflects a deliberate shift away from traditional economic narratives that prioritize consumption or production. By centering labor, he highlights the abstraction of mainstream economics. Instead, Stanford argues that work is the connective tissue between individuals and society. What responsibilities do governments and institutions have to ensure that work is accessible and available? How do unpaid forms of labor, like caregiving or volunteerism, fit into this structure, and why are they often excluded?

    4. land and driven into cities, where they suffered horrendous exploitation and conditions that would be considered intolerable today: seven-day working weeks, twelve-hour working days, child labour, frequent injury, early death. Vast profits were earned by the new class of capitalists, most of which they ploughed back into new investment, technology, and growth – but some of which they used to finance their own luxurious consumption. The early capitalist societies were not at all democratic: the right to vote was limited to property owners, and basic rights to speak out and organize (including to organize unions) were routinely (and often violently) trampled.

      The start of capitalism during the Industrial Revolution was harsh and unfair. Workers were forced into cities and faced terrible conditions while capitalists made huge profits. Do you think capitalism could have grown the same way if workers had better protections back then?

    5. Instead, economists refer simply to “the economy” – as if there is only one kind of economy, and hence no need to name or define it. This is wrong. As we have already seen, “the economy” is simply where people work to produce the things we need and want. There are different ways to organize that work. Capitalism is just one of them.

      Kling's view of the economy is that we each need to be good at something and trade it with others who provide services in their own specialties. Stanford says the economy is about human beings prioritizing their needs with limited resources, repeating the production cycle and continuing to improve. Both perspectives, regardless of how we consume or produce, show that we rely on each other. No matter how the economy is studied or what economists say, everyone plays a role.

    6. Instead, economists refer simply to “the economy” – as if there is only one kind of economy, and hence no need to name or define it. This is wrong. As we have already seen, “the economy” is simply where people work to produce the things we need and want. There are different ways to organize that work. Capitalism is just one of them

      Stanford is saying that capitalism hasn’t always been here and might not stick around forever. It makes me think if something new takes its place, what would it look like, and would it deal with inequality any better?

    7. An economy in which private, profit-seeking companies undertake most production, and in which wage-earning employees do most of the work, is a capitalist economy. We will see that these twin features (profit-driven production and wage labour) create particular patterns and relationships, which in turn shape the overall functioning of capitalism as a system. Any economy driven by these two features – production for profit and wage labour – tends to replicate the following trends and patterns, over and over again: • Fierce competition between private companies for markets, sales, and profit. • Innovation, as companies constantly experiment with new technologies, new products, and new forms of organization – in order to succeed in that competition. • An inherent tendency to growth, resulting from the desire of each individual company to make more profit. • Deep inequality between those who own successful companies, and the rest of society who do not own companies. • A general conflict of interest between those who work for wages, and the employers who hire them. • Economic cycles or “rollercoasters,” with periods of strong growth followed by periods of stagnation or depression; sometimes these cycles even produce dramatic economic and social crises

      Stanford says “companies undertake most production and in which wage-earning employees do most of the work.” Since private companies repeat production and continue to innovate in order to grow, how does that affect employees who are just working for money? What strategies or methods are used to keep workers and attract more, knowing there can be future changes? Although Stanford frames the economy this way, individual choices, specialization, and trade still play a role in production. The repeated cycle created by private ownership of businesses requires many humans to keep producing. It cannot be done on its own.

    8. Conventionally trained economists take it as a proven fact that free trade between two countries always makes both sides better off. People who question or oppose free trade – trade unionists, social activists, nationalists – must either be acting from ignorance, or else are pursuing some narrow vested interest that conflicts with the broader good. These troublesome people should be lectured to (and economists love nothing better than expounding their beautiful theory of comparative advantage*), or simply ignored. And that’s exactly what most governments do. (Ironically, even some conventional economists now recognize that traditional free trade theory is wrong, for many reasons – some of which we’ll discuss in Chapter 22 of this book. But that hasn’t affected the profession’s near-religious devotion to free trade policies.)

      He’s pointing out that economists act like free trade is more of a belief than a fact. I take this to mean that economics sometimes feels more like an ideology than an actual science. If free trade ends up hurting workers, shouldn’t economists question the ideas they’re basing it on?

    9. At its simplest, the “economy” simply consists of all the work that human beings perform, in order to produce the things we need and use in our lives. (By work, we mean all productive human activity, not just employment; we’ll discuss that distinction later.) We need to organize and perform our work (economists call that production). And then we need to divide up the fruits of our work (economists call that distribution), and use it

      Stanford says “the economy consists of all the work that human beings perform, in order to produce the things we need and use in our lives.” When economists study what is being produced, consumed, and traded, along with GDP and the market, why is it that some have more resources than others? Does volunteering count as a contribution to production even though it does not involve money?

    10. Economics is the study of human economic behaviour: the production and distribution of the goods and services we need and want. Hence, economics is a social science, not a physical science.

      Stanford says that economics is the study of how we make rational choices among competing priorities. That means we choose what we want to produce and how. Does that mean individuals make choices purely based on logic? Knowing that we do not have access to unlimited resources, how do they know they are making the right choice? What if they fail to keep producing perfectly every time?

    11. Never trust an economist with your job

      I noticed that someone else responded to this one, and I wanted to give my opinion as well. I really do believe you shouldn't trust someone to be in charge of your job when their job is solely to increase efficiency. For instance, they might want to put policies into place that create high turnover but make more money for the company.

    12. But quite apart from whether you think capitalism is good or bad, capitalism is something we must study. It’s the economy we live in, the economy we know. And the more ordinary people understand about capitalism

      Capitalism is something we must understand as people of society. I am interested to learn about all the different aspects of it and how it influences our economy.

    13. I also believe that it is ultimately possible to build an alternative economic system guided directly by our desire to improve the human condition, rather than by a hunger for private profit. (Exactly what that alternative system would look like, however, is not at all clear today.) We’ll consider these criticisms of capitalism, and alternative visions, in the last chapters of this book

      How different would the economy be with this alternate system? Would it be better for the people?

    14. Unfortunately, most professional economists don’t think about economics in this common-sense, grass-roots context. To the contrary, they tend to adopt a rather superior attitude in their dealings with the untrained masses. They invoke complicated technical mumbo-jumbo – usually utterly unnecessary to their arguments – to make their case. They claim to know what’s good for the people

      What would it be like if economists did not have such superior attitudes and came from a more "for the people" perspective? How different would the economy be?

    15. Most production of goods and services is undertaken by privately-owned companies, which produce and sell their output in hopes of making a profit. This is called production for profit. 2. Most work in the economy is performed by people who do not own their company or their output, but are hired by someone else to work in return for a money wage or salary. This is called wage labour.

      Stanford tends to focus on how important production is: what companies sell, who they hire, and how much they pay. Kling focuses on specialization and the free market. Stanford also writes as if his audience is not very economically educated, while Kling writes as if his audience has background knowledge of economics.

    1. Animula vagula, blandula, / Hospes comesque corporis, / Quae nunc abibis in loca / Pallidula, rigida, nudula, / Nec, ut soles, dabis iocos... P. AELIUS HADRIANUS, Imp.

      Animula vagula, blandula Little soul, wandering, gentle

      Hospes comesque corporis Guest and companion of the body

      Quae nunc abibis in loca You who now will depart to places

      Pallidula, rigida, nudula Pale, stiff, and bare

      Nec, ut soles, dabis iocos… And you will no longer make your usual jokes…

    1. In short, science classes pair a description of our best knowledge at the present with a story of discovery of how we came to know what we know now, with the clear implication that this method is how we will continue to discover new things. By contrast in history this same story (we call it historiography – the history of the history) doesn’t generally attract sustained attention until graduate school. Students learn the names of rulers and thinkers and key figures but they rarely learn the names of historians. Likewise, instead of being presented with a process of historical discovery they are given a narrative of human development – it is not until advanced undergraduate courses that they begin to engage meaningfully with how we know these things. In my own experience the exceptions to this were almost invariably stories about the knowledge-making achievements of other disciplines – archaeology and linguistics, mostly – rather than narratives of historical investigation. So it is not surprising that many students at those introductory levels come away assuming that the narrative is pretty much fixed and has been known and understood effectively forever.

      IB historiography!!

    1. Speaking: Explain how igneous, sedimentary, and metamorphic rocks differ to a partner using compare and contrast language

      I feel like all of these sound like normal objectives that we are taught to write in all of our education classes. I am failing to see how these are special or different. The first ones just seem like terrible objectives to begin with and the second ones just go into more detail.

    2. If we don’t challenge the status quo regarding the improvement of educational opportunities for multilingual learners through equitable practices and policies, who will?

      This should be required by law. As teachers we are required to provide free and appropriate education to all students, regardless of background.

    3. Additionally, multilingual learners may face challenges when teachers use “tricky” language, such as idiomatic expressions (e.g., “learn this by heart” or “the assignment is a walk in the park”

      I think that while these idiomatic expressions may be more difficult for ELL students, they should still be taught them, because such expressions are often used in everyday language. Although they might not be on a standardized test, they could help students understand more conversationally and communicate better with their peers.

    4. acknowledging the implicit and explicit ideologies and power structures inherent in language, (2) understanding that the use of such language, even unintentionally, can and does legitimate and reproduce social inequalities, and (3) striving to become agents of long-term change in society

      I think that it is important to be aware of terms that are offensive, even if unintentionally, and to definitely avoid using them. Also, if a student is using them, shut that behavior down, tell them it is inappropriate school talk, and let them know it can be hurtful to others.

    5. Getting to know your students should begin on the first day—or even earlier, if possible, by meeting them (and their families) before the academic year starts—and should continue throughout the year

      This should go for every student, not just ELLs, because it opens the door for communication with families, which helps both parties do what is best for the child. It also allows the teacher to get to know each family and differentiate based on each student's background knowledge.

    6. “I’m not sure what the problem is. These kids can’t speak well in English or Spanish. Rather than teaching them both languages, we should just focus on English

      I think this approach is lowkey a white supremacist and racist view. Students' native languages are just as important as English. If they are not speaking well in either, then they are not being taught well in either, and it is up to the teachers to help. It is not a teacher's job to decide what language students are going to learn or speak outside of school.

    1. Although the concept of self-fulfilling prophecies was originally developed to be applied to social inequality and discrimination, it has since been applied in many other contexts, including interpersonal communication. This research has found that some people are chronically insecure, meaning they are very concerned about being accepted by others but constantly feel that other people will dislike them.

      I tend to feel very insecure when I am meeting new people because I am afraid they won't like me. But once I open up and talk to them, I never have a problem. This is a good idea of what to do, and to just manifest positive thoughts. When you allow those negative insecure thoughts to stay, they can keep you from having the opportunity for positive thoughts.

    1. Americans "Cinderella" is not a story of rags to riches, but rather riches recovered; not poor girl into princess but rather rich girl (or princess) rescued from improper or wicked enslavement; not suffering Griselda enduring but shrewd and practical girl persevering and winning a share of the power. It is really a story that is about "the stripping away of the disguise that conceals the soul from the eyes of others "

      I was really intrigued by the way they broke down Cinderella's story and described it. It honestly gave me a new outlook on the whole fairy tale. I was one of the people who had always seen it as a rags-to-riches story. Seeing this new perspective makes me appreciate the story so much more and makes me want to go back and reread it, as well as watch the movies, with this new point of view of Cinderella always having been a princess.

    2. Since then, America's Cinderella has been a coy, helpless dreamer, a "nice" girl who awaits her rescue with patience and a song.

      I can tell how different this version of Cinderella is from the older one. This sentence highlights how Disney made Cinderella appear more sweet and gentle. This aligns with American culture at the time, which portrayed women as submissive.

    3. The idea in the West is to make a product which will sell well.

      It is incredibly interesting to see how society influences the storytelling. With each translation, the story of Aschenputtel/Cinderella was altered to fit the culture and society. Though I knew there was a distinction between the American and original German depictions of the same story, I never thought about why and how this came to be. After reflection, I realize that the American versions, unconsciously or consciously, sell the American dream. They sell the idea that success and a better life are possible for anyone regardless of where they start. The American dream relies on the idea that America is a country of endless opportunities where social and class mobility is possible. The story of Cinderella portrays just that, of course with a bit of a romantic touch: it follows a girl who was poor and miserable but ended up with the prince, able to jump social classes.

    1. But each person’s self-concept is also influenced by context, meaning we think differently about ourselves depending on the situation we are in. In some situations, personal characteristics, such as our abilities, personality, and other distinguishing features, will best describe who we are.

      In some situations I can be much more outgoing and seem like I know what I'm doing, but other times it seems like I'm really nervous or have never done this before in my life. When I feel confident in something, I tend to act not necessarily differently, but more like myself. Sometimes I'll think highly of myself, and other times I will think that I don't deserve to be participating like the others.

    1. I’m sure you have a family member, friend, or coworker with whom you have ideological or political differences.

      When I get into an argument with someone like my boyfriend or my friends, I try to avoid thinking about only my point of view. Sometimes when I argue with my best friend it seems like she doesn't want to put her ideas aside for the benefit of our friendship, but after I talk it through with her she starts to understand.

    1. In the conflict thus far, success has been on our side

      What is the success? Is the Confederacy winning the war, or have people united and bought into the Confederate message?

    2. our social fabric is firmly planted; and I cannot permit myself to doubt the ultimate success of a full recognition of this principle throughout the civilized and enlightened world.

      I find it so baffling how convinced the Confederate leaders were that their values were commonly shared across the South. While some may have believed this, to my understanding many "Confederate"-valued people were more interested in not losing their businesses or jobs as farmers, and not losing the farming empires that were the southern United States.

    3. The architect, in the construction of buildings, lays the foundation with the proper material-the granite-then comes the brick or the marble. The substratum of our society is made of the material fitted by nature for it, and by experience we know that it is the best, not only for the superior but for the inferior race, that it should be so. It is, indeed, in conformity with the Creator.

      Compares society to a building, saying some people are like the granite foundation while others are the “higher” materials like brick or marble. The point being made is that different races supposedly have different “natural” places in society, with some meant to be at the bottom. By bringing in “the Creator,” the writer argues this hierarchy is part of God’s plan. Basically, it’s a way of justifying racism and inequality by making it sound natural, logical, and even moral.

    1. We also organize information that we take in based on difference. In this case, we assume that the item that looks or acts different from the rest doesn’t belong with the group. Perceptual errors involving people and assumptions of difference can be especially awkward, if not offensive.

      I have been around many people who tend to say or do things based only on the information they think they know. When they assume certain things, sometimes they are correct, but other times they are embarrassed when they end up being wrong. This is why I think it is important not to stereotype or make assumptions based on a first glance.

    1. In answering this letter, please state if there would be any safety for my Milly and Jane, who are now grown up, and both good-looking girls. You know how it was with poor Matilda and Catherine. I would rather stay here and starve—and die, if it come to that—than have my girls brought to shame by the violence and wickedness of their young masters. You will also please state if there has been any schools opened for the colored children in your neighborhood. The great desire of my life now is to give my children an education, and have them form virtuous habits.

      His only remaining wish is to educate his children; it leaves a sense of hopelessness for himself, as he is simply living the life of a parent, doing what is best and necessary for his children's success.

    1. But in the last analysis, it is the people themselves who are filed away through the lack of creativity, transformation, and knowledge in this (at best) misguided system

      It is so important for teachers to be creative. But the argument I hear from my coworkers every year is that the pay does not equal the work. Sadly, this attitude hurts the students and will trickle down to the community and the future.

    2. This is the "banking" concept of education, in which the scope of action allowed to the students extends only as far as receiving, filing, and storing the deposits.

      This "banking" of information will make the information hard to keep in the brain. Students will forget the information in the next year.

    3. Words are emptied of their concreteness and become a hollow, alienated, and alienating verbosity.

      Students need to know their "why" for learning and also how it relates to the real world. This shows the importance of education.

    4. The contents, whether values or empirical dimensions of reality, tend in the process of being narrated to become lifeless and petrified.

      It is important for educators to encourage dialogue among students. At the end of the day, it should be the students who are "tired," not the teacher, because of the energy and conversation throughout the day.

    1. Acceptable Use of AI in this Course

      Will we go over how to use AI as a tool to assist us in our academic coursework and how to use it properly without violating any policies?

    2. Define important concepts such as: authority, peer review, bias, point of view, editorial process, purpose, audience, information privilege and more.

      This is really useful, mainly because a lot of the time when a professor asks me to find a peer-reviewed article, I struggle to find a genuinely good one, so I can really use the help.

  4. learn-us-east-1-prod-fleet02-xythos.content.blackboardcdn.com
    1. therefore of making a provisional determination of the absolute values of the charges carried by the drop

      I don't entirely understand the connection between using Stokes' Law to cancel out the mass from the electric charge equation and proving that electric charge values are discrete. Looking at equation 4, there is a constant coefficient on the variable velocity terms, but that doesn't really indicate to me that electric charges are discrete.
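
      A minimal sketch of the usual textbook route, reconstructed here as an assumption (the paper's own equation 4 may be arranged differently): with the field off, the drop falls at terminal speed $v_f$ and Stokes' drag balances gravity, $mg = 6\pi\eta a v_f$; with the field $E$ on, it rises at terminal speed $v_r$, so $qE = mg + 6\pi\eta a v_r$. Combining the two eliminates the drag factor and gives

      $$q = \frac{mg}{E}\cdot\frac{v_f + v_r}{v_f}, \qquad m = \tfrac{4}{3}\pi a^3 \rho, \qquad a = \sqrt{\frac{9\eta v_f}{2\rho g}},$$

      where $\eta$ is the viscosity of air and $\rho$ the oil density (buoyancy neglected in this sketch), so the charge follows from the two measured speeds and known constants, without measuring the mass directly. The discreteness claim does not come from this formula itself; it comes from the data, where the computed values of $q$ cluster at integer multiples of one smallest charge.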

    2. supported by evidence from many sources that all electrical charges,

      I'm curious what sources he's referencing here. What other experiments would have been able to confirm that electric charges are quantized?

    1. Others have been tempted to argue that implicit bias is overrated (maybe even justified) and that minorities simply need to toughen up.

      It is surprising to find that people still try to disprove implicit bias despite the numerous studies and everyday examples that clearly demonstrate it. Implicit bias is something deep within every person; you will not be aware of it consciously, but your actions will show it. Still, it will also be interesting to see the proofs and claims of the people who are trying to debate this matter.

    2. Take the example of the Müller-Lyer illusion. Your task is to decide whether line A or line B is the longer one.

      This is a great example used in the article to help readers really visualize what it is trying to prove and shed light on, and it is also a good way to draw the reader's attention to the matter.

    1. The act of study demands a sense of modesty.

      Learning requires us to be humble in the sense that we can add our own perspectives and challenge readings, yet at the same time accept that we do not know everything, stay open to collaboration, adapt what we know to fit other points of view, and be comfortable with the unknown.

    2. In fact, a book reflects its author’s confrontation with the world. It expresses this confrontation.

      A hundred different people can read the same passage and present a hundred different perspectives. Lived experiences can lead lessons, readings, books, etc. to all be interpreted differently. That is why memorizing other people's writings and opinions will not further a reader's academic journey.

    3. This critical attitude is precisely what “banking education” does not engender.

      When students study, they are simply absorbing information presented to them. The expectation is for them to blurt it back out for the exam and then be ready to memorize a new set of facts. For students to form a relationship with the content, they must be given the opportunity to ask questions.

    4. If the reader is transformed into a “vessel” filled by extracts from an internalized text

      There is a passive relationship between reader and author. Learners are the "vessel" for information, waiting to be filled with the opinions of authors instead of being challenged and wanting to challenge. It critiques readers who remain empty, waiting for someone else's knowledge.

    1. In our survey, respondents most commonly reported using the time they save with AI coding tools to design systems, collaborate, and learn.

      Respondents say their companies are using AI to generate test cases

    2. nearly all of the survey participants reported using AI coding tools both outside of work or at work at some point

      Almost every respondent has used AI coding tools at work

    3. Easier to work with new programming languages, and understand existing codebases.

      AI coding tools make it easy to adopt new programming languages and understand existing code bases.

    1. It is not often that educators are permitted to strike because they are employed by the state and are considered vital to public service.

      I am surprised to learn that you have to ask permission/be permitted to strike. I did not know that some states still have laws against striking.

    2. Therefore, some states have begun to change tenure laws to adhere to the accountability requirements stipulated by the U.S. Department of Education as it relates to teacher evaluation and student achievement.

      Getting rid of tenure in favor of "merit"-based protections might sound good in theory, and it is something the current administration is pushing for, but I would argue that rewarding teachers with career protections based on "merit" is very subjective and could easily be used by states and districts to discriminate against teachers or favor teachers who fit their vision.

    3. (CCSSO, InTASC Standard #9, 2013).

      The majority of these standards allude to teachers having personal responsibility. I think it is good to be mindful of the huge impact teachers, their attitudes, and actions have on students; not only in the classroom, but also on a student's self esteem, future, and overall feelings about learning.

    1. “There has never been a more important time for children to become storytellers, and there have never been so many ways for them to share their stories” (p. 3). Our students and their stories should be an essential part of our teaching. As educators, we need to encourage students to tell their stories and help build community. Each shared story has the potential of teaching us.

      I think it is super important for storytelling to be a part of a child’s curriculum. The mind can develop to a great extent through storytelling. It is a part of daily life that I think is often overlooked or taken for granted.

    2. When students’ lives are taken off the margins and placed in the curriculum, they don’t feel the same need to put down someone else” (p. 7). Students need to feel that their voices matter, that they have a story to contribute or share and that their stories are a rich part of the curriculum

      This is true. If we avoid stereotypes, we invite a more comfortable environment for students to share authentic stories. It relieves the pressure and the idea that certain people have to live up to a specific standard or act a certain way.

    3. Students who search their memories for details about an event as they are telling it orally will later find those details easier to capture in writing

      It can be hard to find the right words when telling a story orally. For me personally, I often struggle to find the right words to describe certain things, or I find myself using the wrong words, so writing it out definitely helps me brainstorm different ways I can describe certain details. It also gives me an opportunity to expand on those details to make the story more captivating or interesting.

    4. “there has probably never been a human society in which people did not tell stories”

      This is fascinating to think about. If you think about native traditions, you will find that most, if not all, come from storytelling. A lot of them are from oral tradition/storytelling, so it’s definitely interesting to think about how far back storytelling dates.

    1. Overall summary: the author thinks the one advantage we have over AI is the originality that humans possess, and that it is critical we continue to embrace it instead of becoming more like AI.

    2. Having said that, always remember that artificial intelligence is only an assistant; an executive’s value comes from his or her own intelligence.

      Summary: AI is useful for busywork or simple tasks.

    3. That which diverges from the run-of-the-mill is not only valuable; it is indeed becoming invaluable in the age of AI.

      Summary: Breaking rules and being truly original is the one advantage humans have over AI.

    4. Our priority should be to discover and innovate, not imitate neural networks.

      Summary: The author warns against becoming like AI in the process of creating.

    5. We can use AI for unengaging and repetitive tasks, but we should also remember that humaneness is the key to creativity.

      How would this author define humaneness? AI is technically just regurgitating human work, and it was invented by humans.

    1. The Egyptian empires lasted for nearly 2300 years before being conquered, in succession, by the Assyrians, Persians, and Greeks between about 700 BCE and 332 BCE.

      I find it insane that the Egyptian empires lasted this long! I had always heard the claim that empires fall after 250 years, so 2,300 years is absolutely wild!

    2. Farming developed in a number of different parts of the ancient world, before the beginning of recorded history. That means it’s very difficult for historians to describe early agricultural societies in as much detail as we’d like. Also, because there are none of the written records historians typically use to understand the past, we rely to a much greater extent on archaeologists, anthropologists, and other specialists for the data that informs our histories. And because the science supporting these fields has advanced rapidly in recent years, our understanding of this prehistoric period has also changed – sometimes abruptly.

      This surprised me a lot; I thought there would be a decent amount of evidence of early farming, and maybe material that was written down and annotated. It is still wild that so little is known about the early stages of farming!

    1. All work turned in must adhere to the following format.

      I appreciate the example of the format we are supposed to use. This gives us clear expectations of what you want and can be used all year long.

    2. All assignments for this course must be written and submitted directly in Google Docs

      As a Google Docs lover I am so pumped for this! Most of my other classes require work to be submitted through something else, and this will be so helpful for me throughout the class.

    3. It places too high of a burden on me to investigate and evaluate possible AI usage instead of focusing on the important educational aspects of the course.

      As a future educator, I find this extremely truthful. It can be so hard to detect and investigate the use of AI because its output can seem so authentic.

    1. The speculative bubble created by railroad financing burst in the Panic of 1873, which began a period called the Long Depression that lasted until nearly the end of the century and was so bad that before the Great Depression of the 1930s the period was known simply as “The Depression”.

      A few things that might have looked inconsequential, or not that big a deal, all worked together to cause one of the most devastating depressions in U.S. history. It makes me wonder if there was anything they could have done to avoid it.

    2. Nearly 100 Americans died in “The Great Upheaval.” Workers destroyed nearly $40 million worth of property. The strike galvanized the country. It convinced laborers of the need for institutionalized unions, persuaded businesses of the need for even greater political influence and government aid, and foretold a half century of labor conflict in the United States.

      It is striking that workers had to resort to such drastic measures just to get a voice in what they are paid, or even to win reduced work hours. They destroyed nearly $40 million (about $1,174,720,000 today) worth of property, and there were many casualties. It makes me thankful that we have the unions we have today, but it also makes me wonder what would happen if something like this occurred in modern times. Would it be as catastrophic, or would the government avoid all of it by complying?

    1. not only can such freedom be granted without prejudice to the public peace, but also, that without such freedom, piety cannot flourish nor the public peace be secure.

      Holland as an example of free speech

    2. How many evils spring from luxury, envy, avarice, drunkenness, and the like, yet these are tolerated

      some things are tolerated now because laws against them cannot be enforced... more harm would come from preventing speech

    1. I mean the pace of the finished film, how the edits speed up or slow down to serve the story, producing a kind of rhythm to the edit.

      This video allows me to connect with the overall rhythm of each shot; some are sped up and others run longer. This helps me understand what rhythm means in a film.

    2. Other ways cinema manipulates time include sequences like flashbacks and flashforwards. Filmmakers use these when they want to show events from a character’s past, or foreshadow what’s coming in the future.

      I've seen this in a lot of films, where they will put the end of the movie at the beginning and then we watch how the story plays out. For example, Fight Club demonstrates a flashforward.

    3. The most obvious example of this is the ellipsis, an edit that slices out time or events we don’t need to see to follow the story. Imagine a scene where a car pulls up in front of a house, then cuts to a woman at the door ringing the doorbell. We don’t need to spend the screen time watching her shut off the car, climb out, shut and lock the door, and walk all the way up to the house.

      I think this saves the director time and holds the audience's attention. As another example, a person in the film will be eating food and then the film cuts to her washing the dishes, or to another scene; we don't need to waste time watching that person eat.

    4. He wants you to feel the terror of those peasants being massacred by the troops, even if you don’t completely understand the geography or linear sequence of events. That’s the power of the montage as Eisenstein used it: A collage of moving images designed to create an emotional effect rather than a logical narrative sequence.

      I think this video conveys the emotions much more than it explains the logic behind them.

    5. The audience was projecting their own emotion and meaning onto the actor’s expression because of the juxtaposition of the other images. This phenomenon – how we derive more meaning from the juxtaposition of two shots than from any single shot in isolation – became known as The Kuleshov Effect.

      I can see what the director was trying to get across to the audience; you can see the emotions of the actor in each cut.

    1. While navigating through the text, you’ll notice that the major part of the text you’re working within is identified at the top of the page

      This will be helpful for saving time when finding the correct section I am working through.