266 Matching Annotations
  1. Last 7 days
    1. In one approach, the command interpreter itself contains the code to execute the command. For example, a command to delete a file may cause the command interpreter to jump to a section of its code that sets up the parameters and makes the appropriate system call. In this case, the number of commands that can be given determines the size of the command interpreter, since each command requires its own implementing code.

      This passage explains one method of implementing the commands in a command interpreter: the interpreter directly contains the code for executing each command. For instance, a delete-file command triggers a specific section of the interpreter’s code to set parameters and perform the system call. The number of supported commands directly affects the interpreter’s size, as each command needs its own dedicated code.
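
      A minimal sketch of this first approach (illustrative only, with made-up helper names such as cmd_delete and dispatch): the interpreter itself contains the code for the command and issues the system call, here the POSIX unlink() call for a delete command.

      ```c
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>   /* unlink() */

      /* Built-in implementation of a "delete" command: the interpreter
       * itself sets up the parameter and makes the system call.
       * (It will report an error if the file does not exist.) */
      static void cmd_delete(const char *path) {
          if (unlink(path) != 0)
              perror("delete");
      }

      /* Dispatch loop: each supported command has its own code inside
       * the interpreter, so more commands mean a larger interpreter. */
      static void dispatch(const char *cmd, const char *arg) {
          if (strcmp(cmd, "delete") == 0)
              cmd_delete(arg);
          else
              fprintf(stderr, "unknown command: %s\n", cmd);
      }

      int main(void) {
          dispatch("delete", "/tmp/example.txt");  /* as if the user typed "delete /tmp/example.txt" */
          return 0;
      }
      ```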

    2. The main function of the command interpreter is to get and execute the next user-specified command. Many of the commands given at this level manipulate files: create, delete, list, print, copy, execute, and so on. The various shells available on UNIX systems operate in this way. These commands can be implemented in two general ways.

      This passage highlights that the command interpreter’s primary role is to get and execute user commands, many of which involve file manipulation, such as creating, deleting, or copying files. It also notes that the UNIX shells operate this way and that these commands can be implemented in two general ways.

    3. Most operating systems, including Linux, UNIX, and Windows, treat the command interpreter as a special program that is running when a process is initiated or when a user first logs on (on interactive systems). On systems with multiple command interpreters to choose from, the interpreters are known as shells. For example, on UNIX and Linux systems, a user may choose among several different shells, including the C shell, Bourne-Again shell, Korn shell, and others

      This passage explains that the command interpreter, or shell, is a special program that runs when a process starts or when a user logs on. On systems like UNIX and Linux, multiple shells are available, allowing users to choose their preferred interface for entering commands.

    4. Protection and security. The owners of information stored in a multiuser or networked computer system may want to control use of that information. When several separate processes execute concurrently, it should not be possible for one process to interfere with the others or with the operating system itself. Protection involves ensuring that all access to system resources is controlled. Security of the system from outsiders is also important.

      This passage describes that operating systems enforce protection and security by controlling access to system resources. In multiuser or networked environments, this ensures that processes do not interfere with one another and also safeguards the system against external threats.

    5. Logging. We want to keep track of which programs use how much and what kinds of computer resources. This record keeping may be used for accounting (so that users can be billed) or simply for accumulating usage statistics. Usage statistics may be a valuable tool for system administrators who wish to reconfigure the system to improve computing services.

      This passage explains that the operating system maintains logs of program resource usage. These logs can support accounting and billing or help administrators analyze usage patterns to optimize system performance.

    6. Resource allocation. When there are multiple processes running at the same time, resources must be allocated to each of them. The operating system manages many different types of resources. Some (such as CPU cycles, main memory, and file storage) may have special allocation code, whereas others (such as I/O devices) may have much more general request and release code.

      This passage highlights that the operating system is responsible for resource allocation, distributing CPU time, memory, file storage, and I/O devices among multiple running processes to ensure fair and efficient usage.

    7. Error detection. The operating system needs to be detecting and correcting errors constantly. Errors may occur in the CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a parity error on disk, a connection failure on a network, or lack of paper in the printer), and in the user program (such as an arithmetic overflow or an attempt to access an illegal memory location).

      This passage explains that the operating system continuously detects and handles errors. These errors can arise in hardware (CPU, memory, or I/O devices) or in user programs, such as illegal memory access or arithmetic overflow, ensuring system stability.

    8. Communications. There are many circumstances in which one process needs to exchange information with another process. Such communication may occur between processes that are executing on the same computer or between processes that are executing on different computer systems tied together by a network

      This passage describes that operating systems provide mechanisms for interprocess communication, allowing processes to exchange information either on the same computer or across different computers connected by a network.

    9. File-system manipulation. The file system is of particular interest. Obviously, programs need to read and write files and directories. They also need to create and delete them by name, search for a given file, and list file information. Finally, some operating systems include permissions management to allow or deny access to files or directories based on file ownership.

      This passage explains that operating systems manage file-system operations, including reading, writing, creating, deleting, searching, and listing files and directories. Some systems also enforce permissions to control access based on file ownership.
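
      As a hedged illustration of the services listed here, a small C program using standard POSIX calls (open, write, read, unlink); the file name notes.txt is only an example.

      ```c
      #include <fcntl.h>
      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          char buf[16];

          /* Create a file and write to it. */
          int fd = open("notes.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
          if (fd < 0) { perror("open"); return 1; }
          write(fd, "hello\n", 6);
          close(fd);

          /* Read it back. */
          fd = open("notes.txt", O_RDONLY);
          ssize_t n = read(fd, buf, sizeof buf);
          close(fd);
          printf("read %zd bytes\n", n);

          /* Delete it by name. */
          unlink("notes.txt");
          return 0;
      }
      ```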

    10. Program execution. The system must be able to load a program into memory and to run that program. The program must be able to end its execution, either normally or abnormally (indicating error).

      This passage highlights that an operating system manages program execution by loading programs into memory, running them, and handling their termination, whether a program ends normally or due to an error.

    11. An operating system provides an environment for the execution of programs. It makes certain services available to programs and to the users of those programs. The specific services provided, of course, differ from one operating system to another, but we can identify common classes.

      This passage states that an operating system provides a platform for running programs, offering services to both programs and users. While the specific services vary across operating systems, there are common classes of services that can generally be identified.

    12. We can view an operating system from several vantage points. One view focuses on the services that the system provides; another, on the interface that it makes available to users and programmers; a third, on its components and their interconnections. In this chapter, we explore all three aspects of operating systems, showing the viewpoints of users, programmers, and operating system designers. We consider what services an operating system provides, how they are provided, how they are debugged, and what the various methodologies are for designing such systems. Finally, we describe how operating systems are created and how a computer starts its operating system.

      This passage explains that operating systems can be understood from multiple perspectives: the services they provide, the interfaces available to users and programmers, and their internal components and connections. The chapter will explore these viewpoints, covering OS services, debugging, design methodologies, creation processes, and how a computer boots its operating system.

    13. Another advantage of working with open-source operating systems is their diversity. GNU/Linux and BSD UNIX are both open-source operating systems, for instance, but each has its own goals, utility, licensing, and purpose. Sometimes, licenses are not mutually exclusive and cross-pollination occurs, allowing rapid improvements in operating-system projects. For example, several major components of OpenSolaris have been ported to BSD UNIX. The advantages of free software and open sourcing are likely to increase the number and quality of open-source projects, leading to an increase in the number of individuals and companies that use these projects.

      Another benefit of open-source operating systems is their diversity. GNU/Linux and BSD UNIX are both open-source operating systems, for example, yet each has distinct goals, functions, licensing, and purposes. Licenses are sometimes not mutually exclusive, and cross-pollination takes place, enabling rapid improvements in operating-system projects; for instance, several key components of OpenSolaris have been ported to BSD UNIX. The benefits of free software and open sourcing are expected to increase both the number and the quality of open-source projects, and with them the number of people and businesses that use these projects.

    14. The free-software movement is driving legions of programmers to create thousands of open-source projects, including operating systems. Sites like http://freshmeat.net/ and http://distrowatch.com/ provide portals to many of these projects. As we stated earlier, open-source projects enable students to use source code as a learning tool. They can modify programs and test them, help find and fix bugs, and otherwise explore mature, full-featured operating systems, compilers, tools, user interfaces, and other types of programs. The availability of source code for historic projects, such as Multics, can help students to understand those projects and to build knowledge that will help in the implementation of new projects.

      This passage highlights how the free-software movement motivates programmers to create numerous open-source projects, including operating systems. Portals like FreshMeat and DistroWatch provide access to these projects. Open-source code serves as a learning tool, allowing students to modify, test, and debug programs, explore full-featured systems, and study historic projects like Multics to gain knowledge useful for developing new software.

    15. Solaris is the commercial UNIX-based operating system of Sun Microsystems. Originally, Sun's SunOS operating system was based on BSD UNIX. Sun moved to AT&T's System V UNIX as its base in 1991. In 2005, Sun open-sourced most of the Solaris code as the OpenSolaris project. The purchase of Sun by Oracle in 2009, however, left the state of this project unclear

      This passage outlines the history of Solaris, Sun Microsystems’ commercial UNIX-based OS. SunOS was initially based on BSD UNIX, but in 1991 it switched to System V UNIX. In 2005, most Solaris code was open-sourced as OpenSolaris, though Oracle’s acquisition of Sun in 2009 left the project’s future uncertain.

    16. As with many open-source projects, this source code is contained in and controlled by a version control system—in this case, “subversion” (https://subversion.apache.org/source-code). Version control systems allow a user to “pull” an entire source code tree to his computer and “push” any changes back into the repository for others to then pull. These systems also provide other features, including an entire history of each file and a conflict resolution feature in case the same file is changed concurrently. Another version control system is git, which is used for GNU/Linux, as well as other programs (http://www.git-scm.com).

      This text describes how open-source projects typically use version control systems to manage their source code. Subversion (used by BSD) and Git (used by GNU/Linux) let users pull the code, make modifications, and then push the updates back to the repository. These systems track file histories, handle concurrent changes, and assist in conflict resolution, facilitating collaborative development and effective code management.

    17. Just as with Linux, there are many distributions of BSD UNIX, including FreeBSD, NetBSD, OpenBSD, and DragonflyBSD. To explore the source code of FreeBSD, simply download the virtual machine image of the version of interest and boot it within Virtualbox, as described above for Linux. The source code comes with the distribution and is stored in /usr/src/. The kernel source code is in /usr/src/sys. For example, to examine the virtual memory implementation code in the FreeBSD kernel, see the files in /usr/src/sys/vm. Alternatively, you can simply view the source code online at https://svnweb.freebsd.org.

      This passage explains that BSD UNIX, like Linux, has multiple distributions, such as FreeBSD, NetBSD, OpenBSD, and DragonflyBSD. FreeBSD’s source code is included with the distribution and can be explored locally (e.g., in /usr/src/ and /usr/src/sys) or online via the FreeBSD repository. Virtual machine images allow users to boot and examine the OS safely, making it accessible for learning and experimentation.

    18. BSD UNIX has a longer and more complicated history than Linux. It started in 1978 as a derivative of AT&T's UNIX. Releases from the University of California at Berkeley (UCB) came in source and binary form, but they were not open source because a license from AT&T was required. BSD UNIX's development was slowed by a lawsuit by AT&T, but eventually a fully functional, open-source version, 4.4BSD-lite, was released in 1994.

      This passage summarizes the history of BSD UNIX. Originating in 1978 as a derivative of AT&T UNIX, early BSD releases from UC Berkeley required an AT&T license and were not fully open source. Development was delayed by legal issues, but a fully functional open-source version, 4.4BSD-lite, was eventually released in 1994.

    19. The resulting GNU/Linux operating system (with the kernel properly called Linux but the full operating system including GNU tools called GNU/Linux) has spawned hundreds of unique distributions, or custom builds, of the system. Major distributions include Red Hat, SUSE, Fedora, Debian, Slackware, and Ubuntu. Distributions vary in function, utility, installed applications, hardware support, user interface, and purpose. For example, Red Hat Enterprise Linux is geared to large commercial use. PCLinuxOS is a live CD—an operating system that can be booted and run from a CD-ROM without being installed on a system's boot disk. A variant of PCLinuxOS—called PCLinuxOS Supergamer DVD—is a live DVD that includes graphics drivers and games. A gamer can run it on any compatible system simply by booting from the DVD. When the gamer is finished, a reboot of the system resets it to its installed operating system.

      This passage describes how the GNU/Linux operating system has spawned hundreds of distributions, such as Red Hat, SUSE, Fedora, Debian, Slackware, and Ubuntu, which differ in function, installed applications, hardware support, user interface, and purpose. For example, Red Hat Enterprise Linux targets large commercial use, while PCLinuxOS is a live CD that boots and runs without being installed, and its Supergamer DVD variant bundles graphics drivers and games; rebooting returns the machine to its installed operating system.

    20. As an example of a free and open-source operating system, consider GNU/Linux. By 1991, the GNU operating system was nearly complete. The GNU Project had developed compilers, editors, utilities, libraries, and games—whatever parts it could not find elsewhere. However, the GNU kernel never became ready for prime time. In 1991, a student in Finland, Linus Torvalds, released a rudimentary UNIX-like kernel using the GNU compilers and tools and invited contributions worldwide.

      This passage discusses GNU/Linux as an example of a free and open-source operating system. By 1991, the GNU Project had developed most components except for a fully functional kernel. Linus Torvalds then released a basic UNIX-like kernel using GNU tools and invited global contributions, leading to the development of the Linux kernel and the complete GNU/Linux system.

    21. The FSF uses the copyrights on its programs to implement “copyleft,” a form of licensing invented by Stallman. Copylefting a work gives anyone that possesses a copy of the work the four essential freedoms that make the work free, with the condition that redistribution must preserve these freedoms. The GNU General Public License (GPL) is a common license under which free software is released. Fundamentally, the GPL requires that the source code be distributed with any binaries and that all copies (including modified versions) be released under the same GPL license. The Creative Commons “Attribution Sharealike” license is also a copyleft license; “sharealike” is another way of stating the idea of copyleft.

      This passage explains “copyleft,” a licensing approach developed by Richard Stallman and used by the Free Software Foundation (FSF). Copyleft ensures that software remains free by granting users the four essential freedoms while requiring that any redistribution preserve these freedoms. The GNU General Public License (GPL) is a widely used copyleft license, mandating that source code accompany binaries and that modified versions remain under the same license. Creative Commons’ “Attribution Sharealike” license follows a similar principle.

    22. To counter the move to limit software use and redistribution, Richard Stallman in 1984 started developing a free, UNIX-compatible operating system called GNU (which is a recursive acronym for “GNU's Not Unix!”). To Stallman, “free” refers to freedom of use, not price. The free-software movement does not object to trading a copy for an amount of money but holds that users are entitled to four certain freedoms: (1) to freely run the program, (2) to study and change the source code, and to give or sell copies either (3) with or (4) without changes. In 1985, Stallman published the GNU Manifesto, which argues that all software should be free. He also formed the Free Software Foundation (FSF) with the goal of encouraging the use and development of free software.

      This passage describes Richard Stallman’s creation of the GNU operating system in 1984 to promote software freedom. “Free” refers to liberty, not price, granting users the rights to run, study, modify, and distribute software with or without changes. Stallman’s GNU Manifesto and the Free Software Foundation (FSF) advocate for these freedoms and encourage the development and use of free software.

    23. Computer and software companies eventually sought to limit the use of their software to authorized computers and paying customers. Releasing only the binary files compiled from the source code, rather than the source code itself, helped them to achieve this goal, as well as protecting their code and their ideas from their competitors. Although the Homebrew user groups of the 1970s exchanged code during their meetings, the operating systems for hobbyist machines (such as CPM) were proprietary. By 1980, proprietary software was the usual case.

      This passage explains how computer and software companies began restricting software use to authorized users and paying customers. By distributing only compiled binaries instead of source code, companies protected their intellectual property and ideas. While early hobbyist groups shared code freely, operating systems like CPM were proprietary, and by 1980, proprietary software had become the norm.

    24. In the early days of modern computing (that is, the 1950s), software generally came with source code. The original hackers (computer enthusiasts) at MIT's Tech Model Railroad Club left their programs in drawers for others to work on. “Homebrew” user groups exchanged code during their meetings. Company-specific user groups, such as Digital Equipment Corporation's DECUS, accepted contributions of source-code programs, collected them onto tapes, and distributed the tapes to interested members. In 1970, Digital's operating systems were distributed as source code with no restrictions or copyright notice.

      This passage describes the early history of software distribution in the 1950s through the 1970s. Software often came with its source code, and communities of enthusiasts, such as the MIT hackers, Homebrew groups, and company user groups like DECUS, shared, modified, and distributed programs freely. Digital Equipment Corporation even distributed operating systems as unrestricted source code, highlighting the collaborative culture of early computing.

    25. There are many benefits to open-source operating systems, including a community of interested (and usually unpaid) programmers who contribute to the code by helping to write it, debug it, analyze it, provide support, and suggest changes. Arguably, open-source code is more secure than closed-source code because many more eyes are viewing the code. Certainly, open-source code has bugs, but open-source advocates argue that bugs tend to be found and fixed faster owing to the number of people using and viewing the code.

      This passage highlights the benefits of open-source operating systems. A community of programmers contributes by writing, debugging, analyzing, and improving the code. Open-source code can be more secure and reliable than closed-source software because more people examine it, helping to identify and fix bugs more quickly.

    26. Starting with the source code allows the programmer to produce binary code that can be executed on a system. Doing the opposite—reverse engineering the source code from the binaries—is quite a lot of work, and useful items such as comments are never recovered. Learning operating systems by examining the source code has other benefits as well. With the source code in hand, a student can modify the operating system and then compile and run the code to try out those changes, which is an excellent learning tool.

      This passage explains the advantages of studying operating systems using source code. Starting from the source allows programmers to compile executable binaries directly, whereas reverse-engineering binaries is difficult and loses valuable information like comments. Access to source code also lets students modify, compile, and test the OS, providing a hands-on learning experience.

    27. The study of operating systems has been made easier by the availability of a vast number of free software and open-source releases. Both free operating systems and open-source operating systems are available in source-code format rather than as compiled binary code. Note, though, that free software and open-source software are two different ideas championed by different groups of people (see http://gnu.org/philosophy/open-source-misses-the-point.html for a discussion on the topic).

      This passage highlights that studying operating systems is easier thanks to free and open-source software, which is available in source-code form. While both provide access to the code, free software and open-source software are distinct concepts promoted by different communities.

    28. A real-time system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail. For instance, it would not do for a robot arm to be instructed to halt after it had smashed into the car it was building. A real-time system functions correctly only if it returns the correct result within its time constraints. Contrast this system with a traditional laptop system where it is desirable (but not mandatory) to respond quickly.

      This passage explains that real-time systems have strict, well-defined timing requirements. The system must process data and respond within set time constraints or it fails, unlike traditional computers, where fast responses are desirable but not critical. For example, a robot arm must stop in time to avoid damage, illustrating the importance of timing in real-time systems.

    29. Embedded systems almost always run real-time operating systems. A real-time system is used when rigid time requirements have been placed on the operation of a processor or the flow of data; thus, it is often used as a control device in a dedicated application. Sensors bring data to the computer. The computer must analyze the data and possibly adjust controls to modify the sensor inputs.

      This passage explains that embedded systems typically run real-time operating systems (RTOSs), which are used when strict timing is required for processing or data flow, such as in control applications. Sensors provide data, and the system must quickly analyze it and adjust controls as needed.

    30. The use of embedded systems continues to expand. The power of these devices, both as standalone units and as elements of networks and the web, is sure to increase as well. Even now, entire houses can be computerized, so that a central computer—either a general-purpose computer or an embedded system—can control heating and lighting, alarm systems, and even coffee makers. Web access can enable a home owner to tell the house to heat up before she arrives home. Someday, the refrigerator will be able to notify the grocery store when it notices the milk is gone.

      This passage highlights the growing use and potential of embedded systems. They are increasingly powerful, both as standalone devices and as networked components. Examples include smart homes, where a central computer can control heating, lighting, alarms, and appliances, and future possibilities such as refrigerators that automatically notify stores when supplies run out.

    31. These embedded systems vary considerably. Some are general-purpose computers, running standard operating systems—such as Linux—with special-purpose applications to implement the functionality. Others are hardware devices with a special-purpose embedded operating system providing just the functionality desired

      This passage notes that embedded systems vary widely. Some are general-purpose computers running standard operating systems like Linux with specialized applications, while others use a dedicated embedded operating system that provides only the specific functionality required for the device.

    32. Embedded computers are the most prevalent form of computers in existence. These devices are found everywhere, from car engines and manufacturing robots to optical drives and microwave ovens. They tend to have very specific tasks. The systems they run on are usually primitive, and so the operating systems provide limited features.

      This passage explains that embedded computers are the most common type of computer, found in devices like car engines, robots, and household appliances. They are designed for specific tasks, and their operating systems are typically simple, offering only essential features.

    33. Certainly, there are traditional operating systems within many of the types of cloud infrastructure. Beyond those are the VMMs that manage the virtual machines in which the user processes run. At a higher level, the VMMs themselves are managed by cloud management tools, such as VMware vCloud Director and the open-source Eucalyptus toolset. These tools manage the resources within a given cloud and provide interfaces to the cloud components, making a good argument for considering them a new type of operating system.

      Cloud infrastructure uses traditional operating systems as well as virtual machine monitors (VMMs) to manage virtual machines. Tools like VMware vCloud Director and Eucalyptus manage the VMMs and provide interfaces to the cloud components, acting as a higher-level OS for cloud environments.

    34. Cloud computing is a type of computing that delivers computing, storage, and even applications as a service across a network. In some ways, it's a logical extension of virtualization, because it uses virtualization as a base for its functionality. For example, the Amazon Elastic Compute Cloud (ec2) facility has thousands of servers, millions of virtual machines, and petabytes of storage available for use by anyone on the Internet.

      This passage explains how cloud computing delivers computing power, storage, and applications as services over a network. It builds on virtualization, allowing resources to be shared efficiently. For example, Amazon EC2 provides millions of virtual machines and massive storage that can be accessed by users over the Internet.

    35. Skype is another example of peer-to-peer computing. It allows clients to make voice calls and video calls and to send text messages over the Internet using a technology known as voice over IP (VoIP). Skype uses a hybrid peer-to-peer approach

      This passage describes Skype as an example of peer-to-peer (P2P) computing. It enables voice and video calls, as well as text messaging, over the Internet using voice over IP (VoIP) technology. Skype employs a hybrid P2P approach, combining direct peer connections with centralized services for tasks like user authentication.

    36. Peer-to-peer networks gained widespread popularity in the late 1990s with several file-sharing services, such as Napster and Gnutella, that enabled peers to exchange files with one another. The Napster system used an approach similar to the first type described above: a centralized server maintained an index of all files stored on peer nodes in the Napster network, and the actual exchange of files took place between the peer nodes

      This passage describes how peer-to-peer (P2P) networks became popular in the late 1990s through file-sharing services like Napster and Gnutella. Napster used a hybrid approach: a central server kept an index of files, while the actual file transfers occurred directly between peers, combining centralized indexing with distributed file sharing.

    37. Another structure for a distributed system is the peer-to-peer (P2P) system model. In this model, clients and servers are not distinguished from one another. Instead, all nodes within the system are considered peers, and each may act as either a client or a server, depending on whether it is requesting or providing a service. Peer-to-peer systems offer an advantage over traditional client–server systems. In a client–server system, the server is a bottleneck; but in a peer-to-peer system, services can be provided by several nodes distributed throughout the network.

      This passage explains the peer-to-peer (P2P) model of distributed systems, where all nodes are equal and can act as either client or server. Unlike traditional client–server systems, which can have a server bottleneck, P2P systems distribute services across multiple nodes, improving scalability and reducing single points of failure.

    38. Two operating systems currently dominate mobile computing: Apple iOS and Google Android. iOS was designed to run on Apple iPhone and iPad mobile devices. Android powers smartphones and tablet computers available from many manufacturers. We examine these two mobile operating systems in further detail in Chapter 2.

      This passage notes that the mobile computing market is dominated by two operating systems: Apple iOS, which runs on iPhones and iPads, and Google Android, which powers devices from multiple manufacturers. The text indicates that these two OSs will be explored in more detail in Chapter 2.

    39. To provide access to on-line services, mobile devices typically use either IEEE standard 802.11 wireless or cellular data networks. The memory capacity and processing speed of mobile devices, however, are more limited than those of PCs. Whereas a smartphone or tablet may have 256 GB in storage, it is not uncommon to find 8 TB in storage on a desktop computer. Similarly, because power consumption is such a concern, mobile devices often use processors that are smaller, are slower, and offer fewer processing cores than processors found on traditional desktop and laptop computers.

      This passage explains that mobile devices connect to online services through Wi-Fi (IEEE 802.11) or cellular networks. However, they are more limited than PCs, with less storage and smaller, slower processors that have fewer cores, mainly to conserve power. For example, a smartphone might have 256 GB of storage, while a desktop could have 8 TB.

    40. Today, mobile systems are used not only for e-mail and web browsing but also for playing music and video, reading digital books, taking photos, and recording and editing high-definition video. Accordingly, tremendous growth continues in the wide range of applications that run on such devices. Many developers are now designing applications that take advantage of the unique features of mobile devices, such as global positioning system (GPS) chips, accelerometers, and gyroscopes. An embedded GPS chip allows a mobile device to use satellites to determine its precise location on Earth.

      This passage highlights the expanding capabilities of mobile devices beyond basic tasks like email and web browsing. Modern devices handle media playback, digital books, photography, and high-definition video editing. Developers are creating applications that leverage built-in features like GPS chips, accelerometers, and gyroscopes, enabling location-based services and motion-sensing functionality.

    41. Mobile computing refers to computing on handheld smartphones and tablet computers. These devices share the distinguishing physical features of being portable and lightweight. Historically, compared with desktop and laptop computers, mobile systems gave up screen size, memory capacity, and overall functionality in return for handheld mobile access to services such as e-mail and web browsing. Over the past few years, however, features on mobile devices have become so rich that the distinction in functionality between, say, a consumer laptop and a tablet computer may be difficult to discern. In fact, we might argue that the features of a contemporary mobile device allow it to provide functionality that is either unavailable or impractical on a desktop or laptop computer.

      This passage explains that mobile computing involves handheld devices like smartphones and tablets, which are portable and lightweight. While early mobile devices sacrificed screen size, memory, and functionality, modern devices now offer features comparable to, or even exceeding, those of desktops and laptops, making them highly capable for tasks like web browsing, email, and other services.

    42. Traditional time-sharing systems are rare today. The same scheduling technique is still in use on desktop computers, laptops, servers, and even mobile computers, but frequently all the processes are owned by the same user (or a single user and the operating system). User processes, and system processes that provide services to the user, are managed so that each frequently gets a slice of computer time. Consider the windows created while a user is working on a PC, for example, and the fact that they may be performing different tasks at the same time. Even a web browser can be composed of multiple processes, one for each website currently being visited, with time sharing applied to each web browser process.

      This text emphasizes that although traditional time-sharing systems are now uncommon, the same scheduling technique remains prevalent. Contemporary computers (desktops, laptops, servers, and mobile devices) use time sharing to manage numerous user and system processes. For instance, a PC can manage various windows, and a web browser can run several processes at once, with each process receiving a slice of CPU time.

    43. In the latter half of the 20th century, computing resources were relatively scarce. (Before that, they were nonexistent!) For a period of time, systems were either batch or interactive. Batch systems processed jobs in bulk, with predetermined input from files or other data sources. Interactive systems waited for input from users. To optimize the use of the computing resources, multiple users shared time on these systems. These time-sharing systems used a timer and scheduling algorithms to cycle processes rapidly through the CPU, giving each user a share of the resources.

      This passage explains how computing evolved when resources were limited. Early systems were either batch (processing jobs in bulk) or interactive (waiting for user input). Time-sharing systems were introduced to optimize resource use, allowing multiple users to share CPU time through timers and scheduling algorithms.

    44. At home, most users once had a single computer with a slow modem connection to the office, the Internet, or both. Today, network-connection speeds once available only at great cost are relatively inexpensive in many places, giving home users more access to more data. These fast data connections are allowing home computers to serve up web pages and to run networks that include printers, client PCs, and servers. Many homes use firewalls to protect their networks from security breaches. Firewalls limit the communications between devices on a network.

      This passage describes how home computing has evolved with faster and more affordable network connections. Modern home networks can include multiple devices like PCs, printers, and servers, and can even serve web pages. Firewalls are commonly used to protect these networks by controlling and limiting communications between devices.

    45. Today, web technologies and increasing WAN bandwidth are stretching the boundaries of traditional computing. Companies establish portals, which provide web accessibility to their internal servers. Network computers (or thin clients)—which are essentially terminals that understand web-based computing—are used in place of traditional workstations where more security or easier maintenance is desired. Mobile computers can synchronize with PCs to allow very portable use of company information. Mobile devices can also connect to wireless networks and cellular data networks to use the company's web portal (as well as the myriad other web resources).

      This passage explains how modern web technologies and faster WAN connections have expanded traditional computing. Companies now use web portals for internal access, thin clients for secure and easy-to-maintain workstations, and mobile devices that sync with PCs or connect via wireless/cellular networks, enabling flexible and portable access to company resources.

    46. As computing has matured, the lines separating many of the traditional computing environments have blurred. Consider the “typical office environment.” Just a few years ago, this environment consisted of PCs connected to a network, with servers providing file and print services. Remote access was awkward, and portability was achieved by use of laptop computers.

      This passage describes how traditional computing environments, like the typical office setup, have evolved. Previously, offices had PCs connected to servers for file and print services, with limited remote access and portability relying mostly on laptops. It highlights how computing has become more flexible and interconnected over time.

    47. The power of bitmaps becomes apparent when we consider their space efficiency. If we were to use an eight-bit Boolean value instead of a single bit, the resulting data structure would be eight times larger. Thus, bitmaps are commonly used when there is a need to represent the availability of a large number of resources. Disk drives provide a nice illustration. A medium-sized disk drive might be divided into several thousand individual units, called disk blocks. A bitmap can be used to indicate the availability of each disk block.

      This passage highlights the space efficiency of bitmaps. Using a single bit per item instead of a larger data type drastically reduces memory usage, making bitmaps ideal for tracking large numbers of resources. For example, disk drives use bitmaps to indicate which disk blocks are available or in use.

    48. A bitmap is a string of n binary digits that can be used to represent the status of n items. For example, suppose we have several resources, and the availability of each resource is indicated by the value of a binary digit: 0 means that the resource is available, while 1 indicates that it is unavailable (or vice versa). The value of the ith position in the bitmap is associated with the ith resource.

      This passage explains that a bitmap is a sequence of binary digits used to represent the status of multiple items. Each position in the bitmap corresponds to a specific resource, with a value of 0 or 1 indicating whether the resource is available or unavailable.
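
      A small sketch of the idea in this and the previous passage, assuming a made-up pool of 4,096 disk blocks: one bit per block, 0 meaning free and 1 meaning in use.

      ```c
      #include <stdio.h>
      #include <string.h>

      #define NUM_BLOCKS 4096                      /* example disk size in blocks */
      static unsigned char bitmap[NUM_BLOCKS / 8]; /* 1 bit per block, 0 = free   */

      static void mark_used(int i) { bitmap[i / 8] |=  (1u << (i % 8)); }
      static void mark_free(int i) { bitmap[i / 8] &= ~(1u << (i % 8)); }
      static int  is_used(int i)   { return (bitmap[i / 8] >> (i % 8)) & 1; }

      int main(void) {
          memset(bitmap, 0, sizeof bitmap);            /* all blocks start out free */
          mark_used(42);
          printf("block 42 used? %d\n", is_used(42));  /* 1 */
          mark_free(42);
          printf("block 42 used? %d\n", is_used(42));  /* 0 */
          return 0;
      }
      ```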

    49. One use of a hash function is to implement a hash map, which associates (or maps) [key:value] pairs using a hash function. Once the mapping is established, we can apply the hash function to the key to obtain the value from the hash map (Figure 1.21). For example, suppose that a user name is mapped to a password. Password authentication then proceeds as follows: a user enters her user name and password. The hash function is applied to the user name, which is then used to retrieve the password. The retrieved password is then compared with the password entered by the user for authentication.

      This text describes how hash functions can be used to implement hash maps that hold data in key–value pairs. Applying the hash function to a key allows the system to quickly access the corresponding value. In password authentication, the user name is hashed to retrieve the stored password, which is then matched against the user’s input to confirm identity.

    50. One potential difficulty with hash functions is that two unique inputs can result in the same output value—that is, they can link to the same table location. We can accommodate this hash collision by having a linked list at the table location that contains all of the items with the same hash value. Of course, the more collisions there are, the less efficient the hash function is.

      This passage highlights a limitation of hash functions: two different inputs can produce the same output, causing a hash collision. To handle this, a linked list at the table location can store all the items that share the same hash value. However, frequent collisions reduce the efficiency of the hash function, making retrieval slower.

    51. A hash function takes data as its input, performs a numeric operation on the data, and returns a numeric value. This numeric value can then be used as an index into a table (typically an array) to quickly retrieve the data. Whereas searching for a data item through a list of size n can require up to O(n) comparisons, using a hash function for retrieving data from a table can be as good as O(1), depending on implementation details. Because of this performance, hash functions are used extensively in operating systems.

      This passage explains that a hash function converts data into a numeric value, which can be used as an index to quickly access data in a table. Unlike searching a list, which can take O(n) time, a hash table can often retrieve data in O(1) time. This efficiency is why operating systems frequently use hash functions for tasks like indexing and quick lookups.
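
      A compact sketch tying together the three hashing passages above (hash function, [key:value] map, and collision chaining). The hash function and table size are illustrative only, not the book’s implementation, and the user names and passwords are made up.

      ```c
      #include <stdio.h>
      #include <string.h>

      #define TABLE_SIZE 64

      struct entry {                     /* one [key:value] pair in a chain */
          const char *key, *value;
          struct entry *next;
      };
      static struct entry *table[TABLE_SIZE];

      /* Toy hash function: folds the characters of the key into a table index. */
      static unsigned hash(const char *key) {
          unsigned h = 0;
          while (*key) h = h * 31 + (unsigned char)*key++;
          return h % TABLE_SIZE;
      }

      static void put(struct entry *e, const char *key, const char *value) {
          unsigned i = hash(key);
          e->key = key; e->value = value;
          e->next = table[i];            /* chain onto any colliding entries */
          table[i] = e;
      }

      static const char *get(const char *key) {
          for (struct entry *e = table[hash(key)]; e; e = e->next)
              if (strcmp(e->key, key) == 0)
                  return e->value;       /* typically O(1) unless many collisions */
          return NULL;
      }

      int main(void) {
          static struct entry a, b;
          put(&a, "alice", "secret1");
          put(&b, "bob",   "secret2");

          /* Password check: hash the user name, fetch the stored password, compare. */
          const char *stored = get("alice");
          printf("%s\n", (stored && strcmp(stored, "secret1") == 0) ? "ok" : "denied");
          return 0;
      }
      ```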

    52. A tree is a data structure that can be used to represent data hierarchically. Data values in a tree structure are linked through parent–child relationships. In a general tree, a parent may have an unlimited number of children. In a binary tree, a parent may have at most two children, which we term the left child and the right child. A binary search tree additionally requires an ordering between the parent's two children in which left_child <= right_child. Figure 1.20 provides an example of a binary search tree. When we search for an item in a binary search tree, the worst-case performance is O(n) (consider how this can occur). To remedy this situation, we can use an algorithm to create a balanced binary search tree. Here, a tree containing n items has at most lg n levels, thus ensuring worst-case performance of O(lg n). We shall see in Section 5.7.1 that Linux uses a balanced binary search tree (known as a red-black tree) as part of its CPU-scheduling algorithm.

      This passage explains that a tree is a hierarchical data structure built from parent–child relationships. Binary trees limit a parent to two children, and binary search trees (BSTs) impose an ordering between the children for efficient searching. In the worst case, a BST can have O(n) search time, but balancing the tree reduces this to O(lg n). Linux uses a balanced tree, the red-black tree, in its CPU-scheduling algorithm to improve performance.
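
      A brief sketch of an unbalanced binary search tree with the left_child <= right_child ordering described here; on already-sorted input it degenerates into a chain, which is how the O(n) worst case arises and why balanced variants such as red-black trees are used instead.

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      struct node {
          int value;
          struct node *left, *right;   /* left subtree <= value < right subtree */
      };

      static struct node *insert(struct node *root, int value) {
          if (root == NULL) {
              struct node *n = malloc(sizeof *n);
              n->value = value;
              n->left = n->right = NULL;
              return n;
          }
          if (value <= root->value)
              root->left = insert(root->left, value);
          else
              root->right = insert(root->right, value);
          return root;
      }

      static int contains(const struct node *root, int value) {
          while (root) {
              if (value == root->value) return 1;
              root = (value < root->value) ? root->left : root->right;
          }
          return 0;
      }

      int main(void) {
          struct node *root = NULL;
          int keys[] = { 17, 5, 25, 12, 40 };
          for (int i = 0; i < 5; i++)
              root = insert(root, keys[i]);
          printf("12 in tree? %d\n", contains(root, 12));  /* 1 */
          printf("99 in tree? %d\n", contains(root, 99));  /* 0 */
          return 0;
      }
      ```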

    53. A queue, in contrast, is a sequentially ordered data structure that uses the first in, first out (FIFO) principle: items are removed from a queue in the order in which they were inserted. There are many everyday examples of queues, including shoppers waiting in a checkout line at a store and cars waiting in line at a traffic signal. Queues are also quite common in operating systems—jobs that are sent to a printer are typically printed in the order in which they were submitted, for example. As we shall see in Chapter 5, tasks that are waiting to be run on an available CPU are often organized in queues.

      What is a queue, and how does the first-in, first-out (FIFO) principle work? Give some examples of how queues are used both in everyday life and in operating systems.
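
      As one possible answer, a minimal circular-array queue in C (the capacity and job numbers are arbitrary); items leave in the order they were inserted, like jobs sent to a printer.

      ```c
      #include <stdio.h>

      #define CAPACITY 8

      static int items[CAPACITY];
      static int head = 0, tail = 0, count = 0;

      /* enqueue: add at the tail; returns 0 if the queue is full */
      static int enqueue(int job) {
          if (count == CAPACITY) return 0;
          items[tail] = job;
          tail = (tail + 1) % CAPACITY;
          count++;
          return 1;
      }

      /* dequeue: remove from the head (the oldest item: FIFO) */
      static int dequeue(int *job) {
          if (count == 0) return 0;
          *job = items[head];
          head = (head + 1) % CAPACITY;
          count--;
          return 1;
      }

      int main(void) {
          for (int job = 1; job <= 3; job++)
              enqueue(job);                      /* submit print jobs 1, 2, 3 */
          int job;
          while (dequeue(&job))
              printf("printing job %d\n", job);  /* printed in order: 1, 2, 3 */
          return 0;
      }
      ```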

    54. A stack is a sequentially ordered data structure that uses the last in, first out (LIFO) principle for adding and removing items, meaning that the last item placed onto a stack is the first item removed. The operations for inserting and removing items from a stack are known as push and pop, respectively. An operating system often uses a stack when invoking function calls. Parameters, local variables, and the return address are pushed onto the stack when a function is called; returning from the function call pops those items off the stack.

      What is a stack, and how does the last-in, first-out (LIFO) principle determine the push and pop operations? Additionally, how does an operating system use a stack during function calls?
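
      A small sketch of push and pop on an array-backed stack (the fixed depth is arbitrary); a real call stack is managed by the hardware and the compiler, but the LIFO behavior is the same.

      ```c
      #include <stdio.h>

      #define MAX_DEPTH 16

      static int stack[MAX_DEPTH];
      static int top = 0;            /* number of items currently on the stack */

      static int push(int value) {   /* returns 0 on overflow */
          if (top == MAX_DEPTH) return 0;
          stack[top++] = value;
          return 1;
      }

      static int pop(int *value) {   /* returns 0 on underflow */
          if (top == 0) return 0;
          *value = stack[--top];
          return 1;
      }

      int main(void) {
          push(10);
          push(20);
          push(30);
          int v;
          while (pop(&v))
              printf("%d\n", v);     /* prints 30, 20, 10: last in, first out */
          return 0;
      }
      ```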

    55. Linked lists accommodate items of varying sizes and allow easy insertion and deletion of items. One potential disadvantage of using a list is that performance for retrieving a specified item in a list of size n is linear—O(n), as it requires potentially traversing all n elements in the worst case. Lists are sometimes used directly by kernel algorithms. Frequently, though, they are used for constructing more powerful data structures, such as stacks and queues.

      This passage highlights that linked lists are flexible, supporting variable-sized items and easy insertion and deletion. However, searching for a specific element can be slow (O(n)). Lists are often used directly in kernel algorithms or as building blocks for other structures like stacks and queues.

    56. After arrays, lists are perhaps the most fundamental data structures in computer science. Whereas each item in an array can be accessed directly, the items in a list must be accessed in a particular order. That is, a list represents a collection of data values as a sequence. The most common method for implementing this structure is a linked list, in which items are linked to one another

      This section explains that lists are a fundamental data structure whose elements are accessed sequentially rather than directly. Linked lists are a common implementation, connecting each item to the next, which allows flexible insertion and removal of elements compared to arrays.
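
      A minimal singly linked list sketch covering this and the previous passage: insertion at the front is O(1) and items can be added or removed without shifting others, but finding a given value still means walking the chain, the O(n) cost noted above.

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      struct list_node {
          int value;
          struct list_node *next;
      };

      /* Insert at the front: O(1), no shifting of other items needed. */
      static struct list_node *push_front(struct list_node *head, int value) {
          struct list_node *n = malloc(sizeof *n);
          n->value = value;
          n->next = head;
          return n;
      }

      /* Search: must follow the links one by one, O(n) in the worst case. */
      static int list_contains(const struct list_node *head, int value) {
          for (; head != NULL; head = head->next)
              if (head->value == value)
                  return 1;
          return 0;
      }

      int main(void) {
          struct list_node *head = NULL;
          for (int i = 1; i <= 5; i++)
              head = push_front(head, i);                       /* list is now 5 4 3 2 1 */
          printf("contains 3? %d\n", list_contains(head, 3));   /* 1 */
          printf("contains 9? %d\n", list_contains(head, 9));   /* 0 */
          return 0;
      }
      ```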

    57. An array is a simple data structure in which each element can be accessed directly. For example, main memory is constructed as an array. If the data item being stored is larger than one byte, then multiple bytes can be allocated to the item, and the item is addressed as “item number × item size.” But what about storing an item whose size may vary? And what about removing an item if the relative positions of the remaining items must be preserved? In such situations, arrays give way to other data structures.

      This passage describes arrays as a basic data structure that allows direct access to elements, making them simple and efficient for fixed-size items. However, arrays have limitations when storing variable-sized data or when items need to be removed while maintaining order, which is why more flexible data structures (like linked lists) are used in such cases.
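
      A tiny sketch of the “item number × item size” addressing mentioned here; the record layout is made up, and the pointer arithmetic mirrors what the compiler does for table[2].

      ```c
      #include <stdio.h>

      struct record { int id; char name[12]; };   /* fixed-size item */

      int main(void) {
          struct record table[4] = {
              {0, "zero"}, {1, "one"}, {2, "two"}, {3, "three"}
          };

          /* Direct access: address of item i = base + i * sizeof(struct record). */
          char *base = (char *)table;
          struct record *third = (struct record *)(base + 2 * sizeof(struct record));

          printf("%d %s\n", third->id, third->name);   /* same as table[2] */
          return 0;
      }
      ```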

    58. Some operating systems have taken the concept of networks and distributed systems further than the notion of providing network connectivity. A network operating system is an operating system that provides features such as file sharing across the network, along with a communication scheme that allows different processes on different computers to exchange messages. A computer running a network operating system acts autonomously from all other computers on the network

      This section explains that a network operating system goes beyond basic connectivity by enabling features like file sharing and inter-process communication across machines. Each computer still runs independently, but the OS provides tools to make collaboration and resource sharing possible across the network.

    59. The media to carry networks are equally varied. They include copper wires, fiber strands, and wireless transmissions between satellites, microwave dishes, and radios. When computing devices are connected to cellular phones, they create a network. Even very short-range infrared communication can be used for networking. At a rudimentary level, whenever computers communicate, they use or create a network. These networks also vary in their performance and reliability.

      This passage highlights the many types of transmission media used in networking, from traditional copper wires to advanced fiber optics and wireless methods like satellite or cellular. It shows that networks can exist at any scale—even short-range infrared—and that their performance and reliability depend on the medium used.

    60. Networks are characterized based on the distances between their nodes. A local-area network (LAN) connects computers within a room, a building, or a campus. A wide-area network (WAN) usually links buildings, cities, or countries. A global company may have a WAN to connect its offices worldwide, for example. These networks may run one protocol or several protocols

      This section explains that networks are classified by distance. LANs cover small areas like buildings or campuses, while WANs span larger regions such as cities or even countries. Companies often use WANs to connect global offices, and these networks may rely on one or multiple communication protocols.

    61. A network, in the simplest terms, is a communication path between two or more systems. Distributed systems depend on networking for their functionality. Networks vary by the protocols used, the distances between nodes, and the transport media. TCP/IP is the most common network protocol, and it provides the fundamental architecture of the Internet. Most operating systems support TCP/IP, including all general-purpose ones

      This passage emphasizes that networks are the backbone of distributed systems, enabling communication between computers. It highlights that networks differ in protocol, distance, and media, but TCP/IP has become the universal standard—forming the foundation of the Internet and being supported by nearly all major operating systems.

    62. A distributed system is a collection of physically separate, possibly heterogeneous computer systems that are networked to provide users with access to the various resources that the system maintains. Access to a shared resource increases computation speed, functionality, data availability, and reliability. Some operating systems generalize network access as a form of file access, with the details of networking contained in the network interface's device driver. Others make users specifically invoke network functions.

      This section defines distributed systems as physically separate computers working together through a network. The benefit is that resources can be shared, improving speed, functionality, data availability, and reliability. It also notes that operating systems handle networking differently: some make it seamless by treating network access like file access, while others require users to invoke network functions explicitly.

    63. Within data centers, virtualization has become a common method of executing and managing computing environments. VMMs like VMware ESX and Citrix XenServer no longer run on host operating systems but rather are the host operating systems, providing services and resource management to virtual machine processes.

      This passage explains that in data centers, virtualization is not just an add-on but the foundation of the system. Modern Virtual Machine Monitors (like VMware ESX or Citrix XenServer) act like an actual operating system, directly managing hardware resources and running virtual machines. This shows how central virtualization has become in enterprise environments.

    64. Virtualization allows operating systems to run as applications within other operating systems. At first blush, there seems to be little reason for such functionality. But the virtualization industry is vast and growing, which is a testament to its utility and importance.

      This section points out that virtualization might seem unnecessary at first because operating systems already manage multiple applications. However, its growth shows how valuable it really is—virtualization enables flexibility, testing, security isolation, and efficient use of hardware, which explains why the industry keeps expanding.

    65. Even though modern operating systems are fully capable of running multiple applications reliably, the use of virtualization continues to grow. On laptops and desktops, a VMM allows the user to install multiple operating systems for exploration or to run applications written for operating systems other than the native host

      This passage explains why virtualization is still widely used even though modern operating systems support multitasking. A virtual machine monitor (VMM) lets users run different operating systems on the same hardware, which is useful for experimenting, testing software, or running programs that are not compatible with the host system.

  2. Aug 2025