INTRODUCTION: ASSEMBLY LANGUAGE X86 TOPICS
Basic Concepts: Applications of assembly language, basic concepts, machine language, and data representation.
x86 Processor Architecture: Basic microcomputer design, instruction execution cycle, x86 processor architecture, Intel64 architecture, x86 memory management, components of a microcomputer, and the input–output system.
Assembly Language Fundamentals: Introduction to assembly language, linking and debugging, and defining constants and variables.
Data Transfers, Addressing, and Arithmetic: Simple data transfer and arithmetic instructions, assemble-link-execute cycle, operators, directives, expressions, JMP and LOOP instructions, and indirect addressing.
Procedures: Linking to an external library, description of the book’s link library, stack operations, defining and using procedures, flowcharts, and top-down structured design.
Conditional Processing: Boolean and comparison instructions, conditional jumps and loops, high-level logic structures, and finite-state machines.
Integer Arithmetic: Shift and rotate instructions with useful applications, multiplication and division, extended addition and subtraction, and ASCII and packed decimal arithmetic.
Advanced Procedures: Stack parameters, local variables, advanced PROC and INVOKE directives, and recursion.
Strings and Arrays: String primitives, manipulating arrays of characters and integers, two-dimensional arrays, sorting, and searching.
Structures and Macros: Structures, macros, conditional assembly directives, and defining repeat blocks.
MS-Windows Programming: Protected mode memory management concepts, using the Microsoft Windows API to display text and colors, and dynamic memory allocation.
Floating-Point Processing and Instruction Encoding: Floating-point binary representation and floating-point arithmetic, learning to program the IA-32 floating-point unit, and understanding the encoding of IA-32 machine instructions.
High-Level Language Interface: Parameter passing conventions, inline assembly code, and linking assembly language modules to C and C++ programs.
16-Bit MS-DOS Programming: Memory organization, interrupts, function calls, and standard MS-DOS file I/O services.
Disk Fundamentals: Disk storage systems, sectors, clusters, directories, file allocation tables, handling MS-DOS error codes, and drive and directory manipulation.
BIOS-Level Programming: Keyboard input, video text, graphics, and mouse programming.
Expert MS-DOS Programming: Custom-designed segments, runtime program structure, interrupt handling, and hardware control using I/O ports.
💻 ASCII CONTROL CHARACTERS: MNEMONICS & LABELS
What they are
ASCII control characters are non-printable characters.
They do not appear as visible symbols on the screen.
Why they exist
They are used to control devices, not display letters.
Examples: screen, keyboard, printer, serial port.
How they work
Generated by pressing combinations like Ctrl + C, Ctrl + G
Sent as numeric values, not visible symbols
The receiving device interprets them as commands
What they control
They are commonly used to:
Move the cursor
Start a new line
Make a beep sound
Reset positions in terminals or printers
Common examples
LF (Line Feed) – move down one line
CR (Carriage Return) – move cursor to start of line
BEL (Bell) – beep sound
ESC (Escape) – start control sequences
Representation
Each control character has:
A mnemonic (e.g. BEL, CR, ESC)
A numeric value (usually hex, e.g. 07h, 0Dh, 1Bh)
A defined behavior
Important note
These are not magic and not special to assembly.
They are just agreed-upon signals that software and hardware understand.
Find the ASCII.html OR ASCII.png attached for the full table.
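To see that these really are just agreed-upon numbers, here is a quick sketch (Python is used only to poke at the values; the mnemonics themselves are not Python-specific):

```python
# Each control character is just a small number with an agreed-upon meaning.
BEL, TAB, LF, CR, ESC = 0x07, 0x09, 0x0A, 0x0D, 0x1B

# Python's escape sequences map to the same numeric values:
assert chr(LF) == "\n"   # Line Feed
assert chr(CR) == "\r"   # Carriage Return
assert chr(TAB) == "\t"  # Tab

# Printing chr(BEL) on a real terminal beeps instead of drawing a glyph.
print(hex(ord("\n")))  # 0xa
```

Same value, three spellings: the mnemonic LF, the hex 0Ah, and the escape "\n".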
🤖 Why do mnemonics matter?
Mnemonics are like nicknames.
Instead of remembering what 1Bh means, you use ESC.
This makes code easy to read and easier for your brain to understand.
Mnemonics are like street names.
Hex codes are like GPS numbers.
Both go to the same place, but street names are easier for people.
❗ Ctrl + Hyphen (–)
Yes, Ctrl + – produces ASCII 1Fh (US, Unit Separator).
It is a real control character.
People don’t use it much, but it exists in the official ASCII list.
🔤 ALT Key Combinations
When you hold ALT and press a number or letter, the keyboard makes a special character.
These are used to:
Make keyboard shortcuts
Type special symbols (©, █, ü, etc.)
Work with other languages or old programs
🛠️ Why they mattered (and still matter):
In old DOS and early Windows programs, people used them to:
Draw text screens (like ALT + 219 for blocks)
Read keyboard input with ALT codes
Make custom keyboard shortcuts
Today, modern apps don’t use them much, but they still work in BIOS, command line, and low-level programs.
Find the AltKeyCombinations.html OR AltKeyCombinations.png for the full table.
🧱 Keyboard Scan Codes vs ASCII vs ALT Keys
(Know what your keyboard is really sending.)
⌨ Keyboard Scan Codes – Hardware Level
When you press a key (like A), the keyboard does not send the letter A.
It sends a scan code, which is a basic signal.
It is like saying: “Someone pressed this key spot on the keyboard.”
These codes are numbers like 1Eh for the A key.
The BIOS or operating system uses them to find out which key was pressed.
Scan codes are about key position, not the letter you see on the screen.
🛠️ Example:
Find the KeyboardScanCodes.html OR KeyboardScanCodes.png for the full table.
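A tiny sketch of the idea (the values below are a small slice of scan code set 1, the classic PC set that the 1Eh example above comes from; the key_name helper is just illustrative, not a real API):

```python
# A few make codes from scan code set 1, keyed by code -> key name.
# 0x1E is the 'A' key, matching the example above.
SCAN_SET_1 = {
    0x01: "Esc",
    0x1C: "Enter",
    0x1E: "A",
    0x39: "Space",
}

def key_name(scan_code: int) -> str:
    """Translate a raw make code into a key name, like BIOS/OS code does."""
    return SCAN_SET_1.get(scan_code, "unknown")

assert key_name(0x1E) == "A"  # the position of the A key...
assert ord("A") == 0x41       # ...is a different number from the ASCII letter
```

Notice that 1Eh (the key) and 41h (the character) are unrelated numbers: the translation between them is done in software.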
🧩 WHY THIS MATTERS FOR REVERSE ENGINEERING
Keyboard input is not always ASCII
BIOS keyboard services (int 16h) return scan codes
Scan codes tell you which key was pressed, not which character
This matters for:
Function keys
Arrow keys
Key combinations
ASCII is for characters, not keys
ASCII is used when working with text
Examples:
Printing characters to the screen
Parsing input strings
Comparing text values
ASCII does not represent physical keys
ALT combinations
ALT key combinations can generate specific ASCII values
Common in:
Old DOS programs
Terminal-style user interfaces
Custom input handling
Useful when you want:
Non-standard characters
Direct control over input values
Reverse engineering relevance
Programs may mix scan codes and ASCII
Malware and old software often read raw keyboard input
Understanding the difference helps you:
Read input-handling code correctly
Avoid misinterpreting key logic
Key Rule to Remember
Scan codes = physical keys
ASCII = characters
If you remember only that, you won’t get lost.
HOW PROGRAMMING LANGUAGES UNDERSTAND ASCII
💡 What is ASCII and why is it important?
ASCII is a system that gives letters and symbols a number.
For example:
A = 65
! = 33
3 = 51
Computers only understand numbers (1s and 0s).
ASCII lets computers save and show text in a way people understand.
🧬 Character Encoding = Translation
Computers do not know what a letter is.
They only know numbers.
Character encoding is a translation system:
You press A
The keyboard sends a scan code
The system turns it into ASCII 65
The computer stores it as 01000001 (binary)
This lets:
Programming languages work with text
Operating systems show letters on the screen
Programs save readable text in files
ASCII BASICS
ASCII uses 7 bits, which allows 128 characters total.
0–31 → Control characters
32–126 → Printable characters
127 → Delete (DEL)
🔧 1. Control Characters
These do not show on the screen. They control how text works.
Examples:
LF (10) → New line
CR (13) → Go to start of line
TAB (9) → Tab space
BEL (7) → Beep sound
DEL (127) → Delete
These were very important in early computers.
🔤 2. Printable Characters
These are the characters you can see:
Letters (A–Z, a–z)
Numbers (0–9)
Punctuation (?, ., ,)
Symbols (@, #, $, %)
🧮 Binary and Hex
ASCII numbers are often written in decimal (65)
Inside the computer, they are binary (01000001)
Programmers often use hex (41h) because it is shorter
Binary, decimal, and hex all mean the same value, they are just different ways to write the number.
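A quick check that all three notations really are the same number (Python just for the demo):

```python
# 'A' is one value, written three ways:
assert ord("A") == 65          # decimal
assert ord("A") == 0x41        # hex (41h in assembly notation)
assert ord("A") == 0b01000001  # binary

# Formatting the same value back out in each base:
print(format(ord("A"), "d"), format(ord("A"), "x"), format(ord("A"), "08b"))
# 65 41 01000001
```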
UNICODE: THE WORLD’S TEXT SYSTEM
ASCII only has 128 characters.
That works for English, but not for other languages or emojis.
Unicode fixes this.
It supports:
All languages
Emojis 🙂
Math symbols
Rare and old scripts
Unicode includes ASCII as the first 128 characters, so ASCII is part of Unicode.
🤔 What is Unicode?
Unicode is the modern system computers use to store text.
It works for every language and symbol in the world.
Unicode:
Has over 144,000 characters (and growing)
Supports more than 150 writing systems
Includes emojis and special symbols
Is used in websites, apps, and operating systems
It exists because ASCII was too limited.
📦 Fixed vs Variable Length Encoding
ASCII (Fixed Size)
Uses 7 bits per character
Takes 1 byte for each character
Can only show 128 characters
Mostly English only
Unicode: Variable-Length
Unicode supports multiple encodings depending on the use case (See the UTF.html):
⚠️ UTF-8 is the king. It’s compact, fast, compatible, and everywhere — from HTML files to JSON to APIs.
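You can see the variable-length behavior directly (Python demo, nothing assembly-specific):

```python
# Byte cost of one character under two common encodings:
for ch in ("A", "¥", "😄"):
    print(ch, len(ch.encode("utf-8")), len(ch.encode("utf-16-le")))

assert len("A".encode("utf-8")) == 1      # ASCII range: 1 byte in UTF-8
assert len("A".encode("utf-16-le")) == 2  # UTF-16 always spends at least 2
assert len("😄".encode("utf-8")) == 4     # astral characters: 4 bytes
assert len("😄".encode("utf-16-le")) == 4 # surrogate pair: also 4 bytes
```

For plain English text, UTF-8 costs exactly what ASCII does, which is a big reason it won.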
ASCII vs UNICODE: Differences
From Text to Machine Code
🔠 Step 1: Tokenization (Lexical Analysis)
When you write code, the computer does not understand it right away. First, it splits the code into small pieces called tokens. This step is called lexical analysis.
Think of it like this: A sentence is broken into words so it can be read. Code is broken into tokens so the computer can understand it. Tokens can be things like:
Keywords
Names
Numbers
Symbols
Example code:
if (x > 10) { y = 5; }
This gets broken down into tokens like:
if → keyword
( → punctuation
x → identifier
> → operator
10 → constant
{ → block opener
y → identifier
= → operator
5 → constant
;, } → punctuation
Analogy: It’s like chopping a sentence into “The”, “quick”, “brown”, “fox”… so it’s easier to parse.
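The chopping step can be sketched as a toy lexer (this is a simplified illustration, not how a real compiler's lexer is written; the tokenize helper and the token categories are just for the example):

```python
import re

# Each (name, pattern) pair classifies one kind of token.
# "keyword" must come before "identifier" so that "if" is not
# swallowed by the more general identifier pattern.
TOKEN_SPEC = [
    ("keyword",     r"\bif\b"),
    ("identifier",  r"[A-Za-z_]\w*"),
    ("constant",    r"\d+"),
    ("operator",    r"[><=]"),
    ("punctuation", r"[(){};]"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source: str):
    """Split source text into (kind, text) pairs, skipping whitespace."""
    return [(m.lastgroup, m.group()) for m in MASTER.finditer(source)]

tokens = tokenize("if (x > 10) { y = 5; }")
assert tokens[0] == ("keyword", "if")
assert ("constant", "10") in tokens
assert ("identifier", "y") in tokens
```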
⚙️ Step 2: Compilation or Interpretation
After the code is broken into tokens,
the next step is to turn it into something the computer can run.
What happens next depends on the language:
Compiled languages turn the code into a program first
Interpreted languages run the code step by step
Either way, the computer is getting ready to execute your code.
At the end, all code must become machine code.
Machine code is made of 1s and 0s.
This is the only language the CPU understands.
The machine code tells the CPU exactly what to do, one small step at a time.
Where ASCII and Unicode Fit
Your code starts as text.
That text is saved using ASCII or Unicode.
Before the computer can understand your code, it reads the binary value of each character.
Examples:
i → 01101001 (ASCII 105)
f → 01100110 (ASCII 102)
Every word, symbol, and name in your code starts as bytes.
The compiler uses these bytes to find tokens.
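A tiny demonstration of that byte view (Python just to show the numbers):

```python
source = "if"
raw = source.encode("ascii")     # what the compiler actually reads from disk
assert list(raw) == [105, 102]   # 'i' = 105, 'f' = 102
assert [format(b, "08b") for b in raw] == ["01101001", "01100110"]
```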
Short Version: From Code to CPU
You write code in a text editor
It is saved using ASCII or Unicode
The compiler reads the text as bytes
The code is split into tokens
Tokens become machine instructions
The CPU runs those instructions
That’s the full trip — from letters → to logic → to machine.
MACHINE CODE STRUCTURE: OPCODES AND OPERANDS
🔧 What is machine code?
Machine code is the binary instructions the CPU runs directly.
All assembly and C code turns into machine code in the end.
Each machine instruction has two main parts:
Opcode → what to do (like move, add, jump)
Operands → what to use (data, registers, or memory)
The opcode tells the CPU the action.
The operands tell the CPU where and on what to do it.
🧱 Example Breakdown:
mov al, 41h
This means: “Move the value 41h (which is ASCII 'A') into the AL register.”
Let’s break that into real machine code (x86):
Opcode for MOV AL, imm8 = B0
Operand = 41
Final Machine Code: B0 41
The CPU reads this as:
“Instruction B0 means: move the next byte into AL. The next byte is 41h, which is ‘A’. Okay, done.”
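You can model those two bytes directly (Python used only to inspect the values; this is the encoding of MOV AL, imm8 described above, not a working emulator):

```python
# MOV AL, imm8 is opcode B0 followed by the one-byte immediate.
machine_code = bytes([0xB0, 0x41])  # mov al, 41h

opcode, operand = machine_code
assert opcode == 0xB0       # the action: move immediate byte into AL
assert operand == 0x41      # the data: 41h
assert chr(operand) == "A"  # which happens to be ASCII 'A'
print(machine_code.hex(" "))  # b0 41
```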
🔤 Assembly and String Handling
In assembly, a string is just a row of bytes in memory.
There is nothing automatic: you handle each byte yourself, one at a time.
How to make a string, e.g. msg db 'Hi', 0 (the name msg is just an example):
db means “define byte”
The 0 at the end is the null terminator (like in C)
Reading character by character:
This code performs the same action as a standard 'for each' loop, but it operates at a much lower, more fundamental level of the computer's hardware.
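The assembly listing itself is not reproduced here, but the walk it performs can be sketched in Python (the walk_c_string helper is purely illustrative):

```python
# A null-terminated "string" as raw bytes, like db 'Hi!', 0 lays it out.
buffer = bytes([0x48, 0x69, 0x21, 0x00])  # 'H', 'i', '!', NUL

def walk_c_string(buf: bytes) -> str:
    """Step one byte at a time until the 0 terminator, like the asm loop."""
    out = []
    i = 0
    while buf[i] != 0:         # compare the current byte against 0
        out.append(chr(buf[i]))
        i += 1                 # advance the pointer: 1 byte = 1 ASCII char
    return "".join(out)

assert walk_c_string(buffer) == "Hi!"
```

This one-byte-per-step assumption is exactly what breaks later under UTF-16.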
Memory and ASCII/Unicode Storage.
Let’s say you store the word "Hi" in memory.
ASCII is shown in the image...
Memory layout (byte by byte): 'H' → 48h, 'i' → 69h
The CPU doesn’t see characters, it just sees those raw byte values: 48 69.
UNICODE (UTF-16)
Now every character is 2 bytes: 'H' → 48h 00h, 'i' → 69h 00h (little-endian UTF-16).
That’s why UTF-8 is more compact, it uses just as many bytes as needed.
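You can verify the "Hi" layouts directly (Python demo):

```python
text = "Hi"
ascii_bytes = text.encode("ascii")
utf16_bytes = text.encode("utf-16-le")  # little-endian, no BOM

assert ascii_bytes.hex(" ") == "48 69"        # 1 byte per character
assert utf16_bytes.hex(" ") == "48 00 69 00"  # 2 bytes per character
assert text.encode("utf-8") == ascii_bytes    # UTF-8 matches ASCII here
```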
SUMMARIZING UNICODE
The Unicode Expansion: Encoding the World’s Language
Let’s go back to the start of our journey, from letters to logic.
Your source code (what you type) is saved using an encoding standard.
We first looked at ASCII.
ASCII (The Simple System)
ASCII is simple.
It uses 7 bits.
It can represent 128 characters.
It works well for basic English text.
But it has limits.
It cannot handle:
Letters like ä
Symbols like ¥
Languages like Japanese or Chinese
So ASCII is too small for the real world.
Unicode (The Big System)
Unicode solves this problem.
Unicode is like a huge dictionary of all characters.
It includes:
All languages
Symbols
Emojis
ASCII is a small vocabulary.
Unicode is a global encyclopedia.
Important Idea: Code Points vs Encoding
This is a very important idea.
Unicode does NOT store bytes directly.
Instead, it gives every character a number.
This number is called a Code Point.
Examples of Code Points
'A' → U+0041
'¥' → U+00A5
'😄' → U+1F604
These are just numbers.
They are NOT stored in memory yet.
To store them, we must convert them into bytes.
This process is called encoding.
Encoding (Turning Numbers into Bytes)
Encoding turns a Code Point into binary data.
Different encoding systems exist.
The most common ones are:
UTF-8
UTF-16
UTF-16 (A Smart Strategy)
UTF-16 stands for Unicode Transformation Format (16-bit).
It is a variable-length encoding.
This means:
Some characters use 2 bytes
Some characters use 4 bytes
1. The 2-Byte Zone (Basic Multilingual Plane - BMP)
Many common characters use 2 bytes.
This area is called the BMP.
It includes:
English letters
Greek letters
Arabic
Chinese (most common ones)
Symbols
A 16-bit number can store values from 0 to 65,535
Examples (2-Byte UTF-16)
Yen symbol (¥)
Code Point: U+00A5
Stored as: 0000 0000 1010 0101 (Hex: 00 A5)
Greek Lambda (λ)
Code Point: U+03BB
Stored as: 0000 0011 1011 1011 (Hex: 03 BB)
For these characters: UTF-16 is simple
The Code Point = the stored value
2. The 4-Byte Zone (Surrogate Pairs)
Some characters are too large for 16 bits.
Their Code Points are bigger than 0xFFFF.
Examples:
Emojis
Ancient scripts
Rare symbols
Problem - 16 bits is not enough.
Solution: Surrogate Pairs
UTF-16 uses two 16-bit values together.
These are called:
High Surrogate
Low Surrogate
Together, they represent ONE character.
Example (Emoji 😄)
Code Point: U+1F604
Stored as:
High Surrogate: 0xD83D (1101 1000 0011 1101)
Low Surrogate: 0xDE04 (1101 1110 0000 0100)
The system reads the first part.
It knows: “This is a high surrogate.”
It must read the next 2 bytes.
Then it combines both parts.
It rebuilds the original Code Point.
This one character now uses 4 bytes.
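The surrogate-pair arithmetic is simple enough to write out (a minimal sketch; to_surrogate_pair is an illustrative name, and the formula is the standard UTF-16 one):

```python
def to_surrogate_pair(code_point: int):
    """Split a code point above U+FFFF into UTF-16 high/low surrogates."""
    v = code_point - 0x10000    # leaves a 20-bit value
    high = 0xD800 + (v >> 10)   # top 10 bits -> high surrogate
    low = 0xDC00 + (v & 0x3FF)  # bottom 10 bits -> low surrogate
    return high, low

# 😄 (U+1F604) splits into exactly the pair shown above:
assert to_surrogate_pair(0x1F604) == (0xD83D, 0xDE04)
# And Python's own encoder agrees:
assert "😄".encode("utf-16-be").hex(" ") == "d8 3d de 04"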
Why This Matters in Assembly Language
In Assembly, strings are not simple. You cannot treat a string as just a list of characters. What looks like text to a human is actually raw bytes in memory, and those bytes do not always represent characters in a consistent way.
In the ASCII case, things are very easy. Each character uses exactly one byte, so moving through a string is simple and predictable. If you want to go to the next character, you just move forward by one byte. An instruction like INC EBX works perfectly because every step lands exactly on the next character.
UTF-16 changes this completely. Characters are no longer the same size. A simple letter like 'f' takes 2 bytes, but a character like the emoji '😄' takes 4 bytes. This means you can no longer move through the string blindly, because you do not know how many bytes the next character will use.
To handle this correctly, your code must read the first 2 bytes and check what kind of value it is. If it is a normal value, then you are looking at a complete character and you move forward by 2 bytes. But if it falls into the surrogate range, then it is only half of a character. In that case, you must read the next 2 bytes as well and combine them to form the full character.
If you ignore this and just move forward without checking, you will split a surrogate pair in half. When that happens, the string becomes corrupted and the output turns into garbage, because the data no longer represents valid characters.
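The checking logic described above can be sketched like this (Python standing in for the assembly loop; utf16_units and count_characters are illustrative helpers, not real APIs):

```python
def utf16_units(s: str):
    """The 16-bit code units, as an assembly routine would see them."""
    raw = s.encode("utf-16-le")
    return [int.from_bytes(raw[i:i + 2], "little") for i in range(0, len(raw), 2)]

def count_characters(units) -> int:
    """Advance 1 unit normally, 2 units when a high surrogate is seen."""
    i = count = 0
    while i < len(units):
        if 0xD800 <= units[i] <= 0xDBFF:  # high surrogate: half a character
            i += 2                        # consume the low surrogate too
        else:
            i += 1                        # a complete BMP character
        count += 1
    return count

# "f😄" is 3 code units in memory, but only 2 characters:
assert count_characters(utf16_units("f😄")) == 2
```

Skip that surrogate check and the count (and anything built on it) silently goes wrong.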
High-level languages hide all of this complexity. In a language like Python, strings look simple and safe to use. But in Assembly, nothing is hidden. Every character has a real cost in memory, and every byte must be handled correctly.
Looking at the bigger picture, Unicode is not just an encoding system. It is a complete system for representing text in computers. It defines code points, which are numbers for characters, and it provides encoding formats like UTF-8, UTF-16, and UTF-32 to store those numbers in memory. It also includes rules for combining characters and supports all human languages.
Unicode is what connects human language to machine memory. It acts as the bridge between what we read and what the computer actually stores.
CHAPTER 2: VM PLATFORM, DATA CONVERSIONS & BOOLEAN LOGIC
🔻 Why Low-Level Knowledge Matters (Especially for C/C++ Devs)
If you're serious about becoming a C or C++ developer, you can’t ignore what’s happening under the hood.
These languages give you power and control, but also more chances to shoot yourself in the foot if you don’t understand memory, instructions, or the system architecture.
High-level languages hide a lot from you.💥
C/C++ forces you to face reality: memory, pointers, addresses, machine code.
When bugs happen in C or C++, they often don’t show up clearly at the source code level. You may need to “drill down” into the:
CPU registers
Raw memory
Instruction-level behavior
That’s where tools like assemblers, linkers, and debuggers come in.
🛠️ Core Tools of the Low-Level World
1) Assembler — Translating Assembly to Machine Code
An assembler takes your assembly language code and converts it into machine code — the binary stuff the CPU can actually execute.
Input: .asm or .s file (your code)
Output: .obj or .o file (machine-readable object code)
Example: mov eax, 1
Assembler turns this into something like: B8 01 00 00 00
Assemblers are like compilers, but for assembly. They don’t optimize, they just translate.
2) Linker — Merging the Pieces
A linker takes multiple object files and glues them together into a final executable.
Why do we need it?
Programs are often split into multiple files/modules
Some parts (like libraries) come from external sources
The linker resolves things like:
Function calls across files
Global variable references
External libraries
Input: .obj files
Output: .exe or .elf or .bin (platform-specific)
Think of the linker as your program’s final assembler, stitching all parts into one runnable body.💡
3) Debugger — Your X-Ray Vision Tool
A debugger lets you:
Pause your program at specific points (breakpoints)
Step through code line-by-line
Inspect registers, memory, variables, and the call stack
Find the exact moment things go wrong
This is how pros figure out:
Where segmentation faults happen
Why a variable has a garbage value
Whether a function even got called
Popular debuggers:
gdb (GNU Debugger — Linux CLI)
Visual Studio Debugger (GUI-based)
WinDbg (Windows kernel-level debugging)
Loaders are the last critical piece of the code-to-execution pipeline, and the one most people skip.
If assemblers translate, linkers stitch, then loaders are the ones that launch the beast into RAM.
4. What is a Loader?
A loader is a system-level program that takes your fully-linked executable (like .exe or .out) and loads it into RAM for execution.
Think of it like the valet:
You hand over the car keys (the executable), and the loader parks the program into memory, sets up the environment, and gives control to the CPU to start execution.
What the Loader Actually Does:
Reads the executable file: Loads headers, segments, and metadata.
Allocates memory in RAM for code, data, and stack.
Loads code & data sections into correct memory locations.
Resolves runtime dynamic links (e.g. to shared DLLs or .so files).
Sets up the stack, heap, and registers
Starts execution by jumping to the program's entry point (the runtime startup code, which eventually calls main())
Without the loader, your code would just sit there in storage, not running, not alive.
Why this matters for you as a Developer
Knowing what the loader does helps you debug runtime crashes, missing DLLs, or initialization errors
Low-level tools (like manual shellcode injection, OS dev, reverse engineering) rely heavily on manual loading
Understanding the loader’s job clarifies how your .exe or .out goes from disk → memory → CPU
Alright, now that we’ve locked in the full pipeline from writing code to executing it, you’ve got the full picture:
Source code → Assembler → Linker → Loader → CPU runs it.
These tools give you deep control over the program, allowing for:
Efficient code generation
Tight memory usage
Advanced debugging and reverse engineering
FILE TYPES AND CPU MODES SUPPORTED BY MASM (MICROSOFT MACRO ASSEMBLER)🗂️
MASM creates different types of output files depending on the CPU mode your code is written for.
These modes correspond to processor architectures (16-bit, 32-bit, and 64-bit) and affect how your assembly code is written and executed.
16-Bit Real-Address Mode
Legacy mode used in DOS and embedded systems.
Not supported on modern 64-bit Windows OS without emulators.
Will only be covered in Chapters 14–17 for historical context.
32-Bit Protected Mode
Supported by all 32-bit Windows OS versions.
Easier to write and debug than 16-bit real mode.
This is what most MASM tutorials use as the default.
We’ll refer to this as just “32-bit mode” from now on.
64-Bit Long Mode
Runs on all 64-bit versions of Windows.
Requires use of different registers (RAX, RBX, etc.) and calling conventions.
MASM64 or x64 assembly tools are needed.
Let’s go deep on the above, I can’t leave my beginners hanging:
✅ Target: You’ll finally understand what MASM is, what it creates, what “CPU modes” even mean, and why it all matters when writing assembly code.
✅ No more guessing. No more reading the same paragraph 5 times like “wait huh?” 🤯
🔧 What is MASM?
MASM stands for Microsoft Macro Assembler.
It’s a program that takes your assembly code (with commands like mov eax, 1) and converts it into machine code — the raw binary instructions the CPU actually runs.
In other words:
You write readable low-level code → MASM turns it into .obj (object) files, which a linker then turns into .exe (executable) files.
This is just like how gcc compiles C code into machine code — but MASM does it for Assembly.
📁 What File Types Does MASM Create?
MASM (the assembler) only does the .ASM → .OBJ part. You need a separate tool called a linker (like LINK.EXE on Windows) to go from .OBJ → .EXE.
🧪 MASM vs Other Assemblers — They're All Translators
Let’s compare MASM to other assemblers:
Full description in the html and the image.
They all convert assembly into machine code.
The only difference is syntax, platform, and ecosystem preferences.
💬 Think of it like this:
MASM is like American English
NASM is like British English
GAS is like Shakespearean English
FASM is like minimalist text-speak
They all say the same thing, just in slightly different dialects.
✅ If you're targeting:
Windows dev / WinAPI / legacy x86 tutorials ➜ MASM is your friend
Cross-platform / Linux / modern low-level ➜ NASM or GAS is better
Extreme performance & size optimization ➜ FASM might appeal to you
But the fundamentals of Assembly language don’t change — registers are registers, instructions are instructions, and you're still doing the same CPU-level work.
✅ You now fully understand:
What MASM is
What it produces
How it compares to NASM/FASM/GAS
Why it matters when choosing your tool
FILE TYPES CREATED BY ASSEMBLERS: FROM CODE TO EXECUTABLE
Assembly code doesn't run by itself.
You write source code, then the assembler, linker, and maybe even other tools step in to create something the CPU can actually execute.
Let’s break it down beginner-friendly, real-world style:
📝 .ASM — Your Source File
This is your code, written by you, in a human-readable format.
It’s just plain text, like a .txt file.
You write it using MASM, NASM, VSCode, etc.
It’s not executable — it needs to be assembled.
⚙️ .OBJ — Object File (Assembled, But Not Ready Yet)
Think of .OBJ as a half-built car. The engine's in, but the wheels and doors aren’t connected yet.
Created by the assembler from your .ASM file.
Contains machine code — but some parts are still unresolved (like calls to external functions).
Still not runnable — you need to link it.
Real analogy: It's like you’ve prepped all your ingredients and chopped your veggies, but you haven’t cooked the dish yet.
🚀.EXE — Executable File (Ready to Run)
This is the final boss — the file you can actually double-click in Windows and watch it run.
Created by a linker, not the assembler.
Combines one or more .OBJ files + any required libraries (like Windows API).
Links up external references (like printf, CreateFile, etc.).
Adds sections like .text (code), .data (vars), .rdata (constants), etc.
Real analogy: The linker takes all the car parts from .OBJ and builds a full car you can drive.
💾 .BIN — Raw Binary File (No Frills, No OS)
This is bare metal, a file that’s meant to be run without an OS.
A .BIN file is just raw data, with no extra formatting or system information.
Just raw bytes: pure machine code, no headers or OS-specific structures.
Often used in:
Bootloaders.
BIOS firmware.
Embedded devices (like Arduino, STM32).
Doesn’t work like .EXE — it gets flashed to chips or run by a boot ROM.
Real analogy: Imagine taking your machine code and tattooing it straight onto a CPU, no fancy packaging, no instructions.
WHY THIS MATTERS FOR REVERSE ENGINEERING
When you're reverse engineering, you’re usually handed a mysterious .EXE, .DLL, or .BIN.
Understanding how that file was built helps you know what you're dealing with:
Knowing whether a file was created by MASM, GCC, or NASM also gives clues about calling conventions, section layouts, and symbol naming.
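A first-pass triage often starts with the file's magic bytes (a minimal sketch; guess_format is an illustrative helper, and real format detection checks far more than this):

```python
def guess_format(first_bytes: bytes) -> str:
    """Classify a blob by its leading magic bytes (rough triage only)."""
    if first_bytes[:2] == b"MZ":
        return "PE executable (.EXE/.DLL)"  # DOS 'MZ' header
    if first_bytes[:4] == b"\x7fELF":
        return "ELF executable"
    return "no known header - possibly raw .BIN machine code"

assert guess_format(b"MZ\x90\x00") == "PE executable (.EXE/.DLL)"
assert guess_format(b"\xb0\x41\xc3").startswith("no known header")
```

A raw .BIN has no header at all, which is exactly why it needs a bootloader or boot ROM instead of an OS loader.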
After you read this, I wanted to keep going on CPU modes, but damn! 🥵 That topic is hot, I can’t just toss it in here. Let’s move to the next document.
We shall cover all the CPU-Modes addressed in the 007.MASM.FileTypes.Operation.Modes.confusion.removed.html document.
⚖️ Assembly Language vs. Machine Language
Let’s compare these two side-by-side:
Find the htmls or images file attached for a complete table of comparison.
Assembly = human-friendly representation of machine code
Machine code = binary the CPU executes directly
Assembly is easier to write, debug, and understand
Machine language is much harder to write and read by hand, even though it is what the CPU ultimately executes
💬 Final Thoughts:
Writing machine code by hand is like writing in Morse code with no mistakes allowed.
Writing assembly is like shorthand — better than Morse, but still not English.
Use assembly when you need fine-tuned control, speed, or hardware access.
Use a debugger and linker to manage and inspect the full system-level workflow.
Assembly gives you power and control, but also responsibility and danger.
High-level languages give you speed of development, maintainability, and portability.
In today’s world:
Use high-level languages for most applications.
Use assembly when you absolutely need performance or hardware control.
And now you understand why game engines, reverse engineers, hardware and software hackers (e.g. Samy Kamkar and Joe Grand), OS kernel developers like Terry Davis and the rest on GitHub, and malware authors still use this 40+ year-old beast. 💥