OPERATING SYSTEMS CHAPTER 1
The book we’re using:
Core Role: Operating systems (OS) manage computer resources and make it possible for users to interact with hardware and applications.
Educational Importance: OS courses are a foundation of computer science, covering essential topics like memory, process, and storage management.
Dynamic Field: As technology evolves, OS principles are used across many areas—from personal devices to large-scale industrial systems.
Key Concepts: Core ideas such as process control, resource management, and virtualization are explained using practical examples.
Coding Languages: Examples often use C and Java, making it easier to understand OS structures without needing deep expertise in the languages.
The OS is software that acts as a bridge between the user and the computer hardware.
Purpose:
Manages computer hardware and system resources
Supports smooth execution of applications
Provides a user-friendly interface (GUI)
Goals:
Ensure correct system operation
Allocate resources efficiently
Prevent conflicts between programs
Structure: OS design is complex due to managing multiple tasks and resources simultaneously
OPERATING SYSTEM MODULARITY
OSs are modular, with each component handling a specific task.
This design reduces complexity and makes development and maintenance easier.
Key Concepts
Hardware Management: Controls CPU, memory, I/O devices, and storage.
Resource Allocation: Shares hardware resources efficiently among programs.
Ubiquity: OSs are used everywhere—from IoT devices and appliances to PCs, smartphones, and servers.
Foundation Topics: Requires understanding hardware organization and core OS data structures.
Diverse Systems: Includes proprietary and open-source OSs for cloud, mobile, and embedded environments.
The OS acts like a conductor, coordinating hardware resources.
It doesn’t do the work itself—it ensures programs run smoothly and without conflict.
Perspectives of the Operating System
User View: Focuses on how users interact with the system.
System View: Focuses on managing hardware and resources efficiently.
User View: Interfaces
Interfaces range from command-line (CLI) to graphical user interfaces (GUI).
The goal is ease of use and efficiency across different devices.
I. Personal Computers (PCs)
Used with keyboard, mouse, and display.
Typically single-user systems with full control over resources.
Primary focus: user convenience, productivity, and entertainment.
Less emphasis on resource sharing.
II. Mobile Devices
Includes smartphones and tablets.
Use touch-based interfaces (tap, swipe).
Network-connected via cellular or Wi-Fi.
May support voice interaction (e.g., Siri).
III. Embedded Systems
Designed for specific, dedicated tasks.
Often operate with real-time constraints.
Minimal or no user interaction.
Examples: appliances, cars, industrial devices.
Interfaces may be limited (keypads, lights).
Designed to function autonomously.
SYSTEM VIEW OF THE OPERATING SYSTEM
OS is the core program of the computer.
OS directly interacts with hardware.
System view focuses on resource allocation and control.
I. Resource Allocator
Manages CPU time.
Manages memory space.
Manages storage.
Manages I/O devices.
Handles multiple programs and users.
Allocates resources efficiently.
Allocates resources fairly.
Prevents resource conflicts.
Ensures correct program execution.
II. Control Program
Controls execution of user programs.
Prevents system errors.
Prevents misuse of resources.
Controls I/O device operations.
Coordinates data transfer between system and devices.
Ensures safe system operation.
Ensures smooth hardware–software interaction.
Operating systems have evolved significantly over time.
Core purpose remains consistent across generations.
Focus is on improving user and program interaction with computers.
OS is a collection of system software.
Provides an interface between hardware and users.
Enables efficient program execution.
Makes the computing environment user-friendly.
Supports interaction between users, programs, and hardware.
USER VIEW OF THE OPERATING SYSTEM
The User View of an operating system shifts the focus from resource management and hardware control to accessibility, convenience, and performance. From this perspective, the OS is seen as an interface designed to hide the complexity of the hardware.
Primary Interface: Acts as the intermediary between the human user and the raw machine.
Abstraction Layer: Hides complex hardware details (like instruction sets or disk sectors) behind a simplified environment.
Goal-Oriented: Prioritizes Ease of Use and User Convenience over raw resource efficiency.
I. Interaction Models
Single-User Systems: Designed for one person to maximize responsiveness and monopolize the hardware (e.g., a PC or laptop).
Multi-User Systems: Focuses on a smooth, shared experience where the user is unaware of others using the same server resources.
Ubiquitous Computing: On mobile devices or embedded systems, the user view is centered on touch interfaces and battery longevity.
II. Execution Environment
Program Support: Provides the necessary libraries and services for applications to run without the user needing to manually manage memory.
Error Transparency: Manages system faults gracefully so the user experience is not interrupted by low-level hardware triggers.
Input/Output Simplification: Allows users to interact with complex peripherals (printers, cameras, disks) through uniform commands and icons.
III. Evolution of the User Experience
Interface Progress: Transitioned from Command Line Interfaces (CLI) to Graphical User Interfaces (GUI) and Natural Language Processing.
Consistency: Modern OS design ensures that regardless of the hardware brand, the user interacts with a familiar desktop or mobile metaphor.
Efficiency vs. Convenience: While the System View cares about saving every byte of RAM, the User View cares about how fast an app opens and how intuitive the navigation feels.
OS COMPONENTS
System Architecture: Operating Systems are complex software frameworks designed to manage hardware and execute applications.
The Kernel: Serving as the central nervous system, the kernel is the most critical internal component.
Modular Integration: Multiple distinct subsystems work in tandem to maintain system stability and performance.
THE KERNEL
System Core: The foundational layer that bridges the gap between software and physical hardware.
Privileged Software: Operates within a protected Kernel Mode to prevent user applications from compromising the system.
Hardware Abstraction: Maintains direct, exclusive access to CPU, memory, and I/O devices.
Low-Level Management: Handles fundamental operations, including process scheduling and memory allocation.
Security Enforcement: Acts as the primary gatekeeper for system resources and permission protocols.
Resource Coordination: Manages the continuous interaction between hardware components and high-level software.
TYPES OF KERNELS
Different kernel architectures exist.
Each design has trade-offs.
I. Monolithic Kernel
All OS services run in kernel space.
High performance.
Fewer context switches.
Harder to maintain and debug.
Example: Linux.
II. Microkernel
Minimal core in kernel space.
Runs drivers and services in user space.
High modularity.
Improved fault tolerance.
Lower performance due to more context switches.
Examples: Minix, QNX.
III. Hybrid Kernel
Combines monolithic and microkernel designs.
Balances performance and modularity.
Improves flexibility and stability.
Example: Windows NT.
SYSTEM PROGRAMS
Built on top of the kernel.
Provide essential user and system services.
Help manage system resources.
I. Common System Programs
File Managers
Create files.
Delete files.
Modify files.
Manage access permissions.
Device Drivers
Enable hardware communication.
Act as intermediaries between kernel and devices.
System Libraries
Provide reusable code.
Support common application functions.
II. Middleware
Sits between applications and OS services.
Simplifies application development.
Hides low-level system details.
Common in mobile operating systems.
Middleware Examples & Essential Services
Communication Frameworks (IPC & Networking)
Inter-Process Communication (IPC): Acts as the post office for the OS, allowing separate apps to exchange data securely without crashing each other.
Network Stack Abstraction: Simplifies complex protocols (TCP/IP, HTTP/S, Bluetooth) into easy-to-use APIs for developers, enabling seamless cloud and social connectivity.
Remote Procedure Calls (RPC): Allows an application to execute code on a different address space or even a different machine as if it were local.
Multimedia & Hardware Services
Sensor Abstraction Layers (SAL): Provides a unified interface for the camera, accelerometer, and gyroscope, so apps don't need to know the specific hardware model to function.
Media Frameworks: A heavy-duty pipeline for real-time audio and video processing, handling everything from hardware-accelerated codecs to GPS-based location services.
Graphic Engines: Bridges the gap between the app and the GPU to render smooth UI transitions and 3D environments without manual memory management.
Mobile-Specific Middleware (Android & iOS)
Android (The Borg Approach): Relies on the ART (Android Runtime) and HAL (Hardware Abstraction Layer). Uses Binder IPC, a specialized mechanism designed specifically for the high-frequency communication needs of mobile apps and system services.
iOS (The Walled Garden Approach): Utilizes Cocoa Touch and the Media Layer to provide high-performance animation (Core Animation) and fluid touch response. Prioritizes energy-efficient middleware to ensure background processes don't drain the battery.
III. The Graphical User Interface (GUI) & UX Evolution
Enables visual system interaction.
Visual Metaphors: Modern GUIs use WIMP architecture (Windows, Icons, Menus, Pointers) to translate abstract binary operations into relatable physical concepts like folders and trash cans.
Cognitive Load Reduction: By utilizing buttons and menus, the OS removes the need for users to memorize complex syntax, shifting the burden of remembering from the human to the machine.
Haptic & Gesture Integration: Mobile GUIs replaced the mouse with multi-touch gestures (pinch-to-zoom, swipe-to-refresh), requiring a sophisticated Gesture Recognition Engine in the middleware to distinguish between a tap and a scroll.
Responsive Design: Modern interfaces are context-aware, automatically scaling and rearranging elements based on screen orientation and resolution to maintain usability across foldable phones, tablets, and desktops.
IV. The Middleware Wars: Antitrust & Legal Precedents
The Microsoft 1998 Landmark Case: The U.S. government argued that by bundling Internet Explorer directly into the Windows OS, Microsoft was using its 90% OS market share to crush competing browsers like Netscape.
The Tying Argument: The core legal debate was whether a Web Browser is a feature of the OS or a separate product. Microsoft claimed it was an integral part of the user experience, while the courts ruled it was an anti-competitive tie-in.
The Verdict's Legacy: This case set the stage for how we view software today—it forced Microsoft to allow middleware (browsers, media players, Java) from third parties to run natively without being sidelined by the OS.
V. The Great OS Design Debate: What is The Core?
Monolithic vs. Microkernel: There is a constant architectural tension. Should the OS be a Swiss Army Knife (including browsers, PDF viewers, and cloud sync) or a Stripped-Down Engine that only handles the bare essentials?
The Bloatware Dilemma: Modern OS vendors (Apple/Google) argue that deep integration of services like Maps and App Stores creates a seamless ecosystem. Critics argue this is just a modern version of the 90s monopoly, locking users into a single brand.
Security vs. Openness: A tight OS design (like iOS) offers better security by controlling every component, whereas an open design (like Linux) provides more freedom, but requires the user to manage their own middleware and drivers.
COMPUTER ORGANIZATION
Modern computers consist of multiple cooperating components.
Designed to process data and interact with external devices.
I. Major Components
One or more CPUs.
Device controllers.
Shared main memory.
Connected by a common system bus.
II. Central Processing Unit (CPU)
Executes program instructions.
Performs calculations.
Manages data flow.
Acts as the brain of the computer.
Multiple CPUs or cores enable parallel processing.
Improves performance and responsiveness.
III. Device Controllers
Manage specific peripheral devices.
Examples: disk drives, audio devices, network interfaces.
Act as intermediaries between CPU and devices.
Use local buffers and control registers.
One controller may manage multiple devices.
IV. The System Bus: The Digital Central Nervous System
Communication pathway between components.
Connects CPU, memory, and device controllers.
Includes data bus.
Includes address bus.
Includes control bus.
The System Bus is the shared communication infrastructure that allows the CPU, memory, and I/O controllers to synchronize. It isn't just one wire; it is a massive parallel interface divided into three specialized functional lanes:
The Data Bus (The Payload Lane)
Carries the actual data being processed (e.g., a line of code, a pixel for a video frame, or a document fragment). The width of this bus (e.g., 64-bit) determines how much data the CPU can inhale in a single cycle. A wider data bus directly increases the system’s throughput.
The Address Bus (The Location Lane)
Carries the information specifying where the data needs to go or where it is being fetched from in the System RAM. The width of the address bus determines the Maximum Memory Capacity. If an address bus is only 32 bits wide, the system is physically incapable of seeing more than 4GB of RAM, regardless of how much you plug into the motherboard.
The Control Bus (The Traffic Cop)
Transmits command signals to coordinate the timing and direction of data flow. It sends Read or Write signals and handles Interrupts. Without the control bus, the data and address buses would have collisions, where two components try to talk at the same time, leading to a system crash.
Why the Bus Matters for High-Performance Tasks
In modern systems, the traditional Front Side Bus (FSB) has evolved into high-speed point-to-point interconnects (like AMD’s Infinity Fabric or Intel’s UPI). This is crucial for creators:
Bottlenecking: If you are editing 4K video, your GPU and CPU are constantly screaming for data. If the bus architecture is inefficient, you get stuttering even if your components are top-tier.
Expansion: Technologies like PCIe 5.0 are essentially just extremely evolved versions of the System Bus, designed to move massive amounts of data to NVMe drives and GPUs with near-zero latency.
V. Main Memory (RAM)
Stores programs and data.
Provides fast access for the CPU.
Shared by CPU and device controllers.
Managed by the memory controller.
Prevents memory access conflicts.
Volatile Execution Space: RAM serves as the primary staging area where the OS loads active machine code and working data. Because the CPU cannot execute programs directly from a Hard Drive or SSD due to their mechanical or electrical slowness, RAM provides a random access environment where any memory address can be reached in near-constant time (O(1) complexity).
The Von Neumann Bottleneck: A core challenge in system design is that the CPU is significantly faster than the RAM. To mitigate this, the OS and hardware use techniques like Memory Interleaving and Dual-Channel architecture to increase bandwidth, ensuring the System Bus is constantly fed with data.
Direct Memory Access (DMA): In sophisticated systems, device controllers (like your GPU or Network Card) can access Main Memory directly without constantly interrupting the CPU. This is facilitated by the Memory Controller, which arbitrates these requests to prevent Bus Contention—a state where multiple components fight for the same memory cycles.
Protection and Isolation: Beyond just storing bits, the OS uses the memory controller to enforce Memory Segmentation and Paging. This ensures that one running program cannot see or overwrite the memory space of another, which is the foundational defense against most system crashes and buffer overflow exploits.
Modern Performance Insight: For a high-end workstation, the CAS Latency and MT/s (MegaTransfers per second) of your RAM are just as vital as the capacity. If the RAM cannot cycle fast enough to keep up with the CPU’s clock speed, the processor enters Wait States, essentially wasting millions of compute cycles while waiting for the memory controller to fetch the next instruction.
VI. Device Drivers
Software components.
Enable OS–hardware communication.
Provide standardized device interfaces.
Hide device-specific details.
Example: disk drivers for storage devices.
Software-to-Hardware Translation: At their core, drivers take high-level OS commands (like Write data to disk) and translate them into the specific, low-level electrical signals or bit-sequences required by a particular hardware model.
The Abstraction Principle: They provide a Uniform Interface. The OS kernel doesn't need to know if you are using a Western Digital HDD or a Samsung NVMe SSD; it simply talks to the Block Device Driver. The driver handles the messy, manufacturer-specific details, allowing the OS to remain Hardware Independent.
Privileged Execution: Most drivers run in Kernel Mode. Because they interact directly with hardware registers and memory, a bug in a driver is far more dangerous than a bug in a regular app—it's the leading cause of Blue Screens (BSOD) or system-wide crashes.
Interrupt Handling: Drivers are the primary managers of Interrupt Service Routines (ISRs). When a device (like a mouse or a network card) has data ready, it sends a signal to the CPU. The driver is the specialized code that tells the CPU exactly how to handle that specific signal without dropping data.
VII. Interrupts
Signals that require CPU attention.
Generated by hardware or software.
Temporarily pause current CPU execution.
Interrupt handler processes the event.
CPU resumes previous task after handling.
Improves system efficiency.
VIII. Storage Hierarchy
Organized by speed, size, and cost.
Registers: fastest and smallest.
Cache memory: fast and limited.
Main memory (RAM): moderate speed and size.
Secondary storage: slow, large, persistent.
INPUT-OUTPUT OR I/O STRUCTURE
The I/O Structure is the architectural framework that dictates how the CPU, memory, and peripherals exchange data. Choosing the right method is a balance between CPU overhead and system throughput. I/O methods include:
I/O Execution Methods
Programmed I/O (Polling)
Mechanism: The CPU constantly polls the device status register to check if it is ready for the next operation.
Efficiency: Extremely low. The CPU remains in a busy-wait loop, wasting billions of compute cycles while waiting for relatively slow hardware.
Interrupt-Driven I/O
Mechanism: The CPU initiates the I/O command and then moves on to other tasks. When the device is ready, it triggers a hardware interrupt signal to grab the CPU's attention.
Efficiency: High for small data transfers. It eliminates the need for constant polling but still requires the CPU to move every byte of data from the device to memory.
Direct Memory Access (DMA)
Mechanism: Used for high-speed, bulk data transfers (like disk or network traffic). A dedicated DMA Controller manages the data transfer directly between the device and RAM.
Efficiency: Maximum. The CPU is only involved at the beginning (setup) and the end (completion signal). This allows the processor to focus entirely on computation while the hardware handles the heavy lifting of data movement.
Production Note
For a workstation, DMA is why you can render a video and still browse the web smoothly. Without it, your CPU would be so busy moving video frames to the disk that the entire interface would freeze.
COMPUTER COMPONENTS
I. System Board / Motherboard
Central hub of the computer.
Houses processor, memory, and other components.
Design varies by manufacturer.
The motherboard is the most undervalued component in a system build, yet it dictates the ceiling for every other part. It is not just a housing; it is a complex multilayered PCB that manages power delivery, signal integrity, and high-speed data routing.
I. The Anatomy of Power: VRMs and Stability
The biggest difference between a budget board and a flagship is the VRM (Voltage Regulator Module).
The Best: High-end boards (e.g., ASUS ROG Maximus, MSI MEG, Gigabyte Aorus Xtreme) feature massive heatsinks and 20+ phase power designs. This ensures the CPU receives clean, stable power even under 100% load during 4K video rendering.
The Worst: Low-tier boards (A320 or H610 chipsets) often have naked VRMs with no heatsinks. If you pair a high-end CPU with a poor-quality board, the VRMs will overheat, causing the CPU to throttle or the board to eventually fail (the blown-capacitor or burning-smell scenario).
Use PCPartPicker or BuildCores, or YouTubers like ZTT, GamersNexus, or Techsource, to find the right components for your build.
If you're doing a PC build, also check Reddit reviews of the motherboard you want to buy:
/r/buildapc
/r/ASUS
You can also choose a motherboard that supports dual CPUs and dual GPUs, e.g., running dual RTX 3060s for 24 GB of combined VRAM for AI inference.
Modern software like Ollama or vLLM can split models across both cards via the PCIe lanes.
If you run dual GPUs, you need a motherboard that supports PCIe Bifurcation (splitting a x16 slot into x8/x8).
Consumer (AMD X870E / Intel Z890): Best for 1–2 GPUs.
Look for boards like the ASUS ProArt or Gigabyte Aorus Master which have reinforced slots for heavy cards.
If you ever want to move to 3 or 4 GPUs using workstation platforms (AMD Threadripper / Intel Xeon), you'll need a board that might make your bank go dry, e.g., the ASUS Pro WS WRX90E-SAGE SE.
These provide the massive number of PCIe lanes required to let all GPUs talk to the CPU at full speed without a bottleneck.
(Note: there are many other workstation motherboards; this is just one of them.)
II. The Tier List: Brands and Segments
The market is dominated by the Big Four (ASUS, MSI, Gigabyte, ASRock), each with distinct product tiers.
III. The Blowing Up Phenomenon: Common Failures
SOC Voltage Issues: In early 2023, certain high-end boards were pushing too much voltage to Ryzen 7000 series CPUs, literally melting the chips and sockets. This was fixed via BIOS updates, highlighting why BIOS support is a critical motherboard feature.
Poor QC (Quality Control): Budget brands or no-name OEM boards often use cheap electrolytic capacitors that leak over time.
Component Crowding: On cheap boards, the GPU might block all the SATA ports, or the RAM slots might be so close to the CPU that large coolers won't fit.
IV. Chipsets: The Brain of the Board
The chipset (e.g., Z790, X670, B650) determines how many high-speed lanes you have.
X/Z Series: Maximum PCIe lanes for multiple NVMe SSDs and high-end GPUs. Essential for creators using multiple capture cards or RAID storage.
B Series: The sweet spot. Supports most features but usually limits CPU overclocking or the number of high-speed USB ports.
A/H Series: Stripped down. Avoid these for video production; they lack the bandwidth for modern high-speed peripherals.
Say you are editing heavy 4K video: don't just grab any motherboard. You need a board with Thunderbolt 4 support (for fast external drives) and at least three M.2 slots (one for the OS, one for a scratch disk, one for active projects).