# Data Structures & Algorithms Study Guide
I. Explains Algorithms (29% of assessment)
A. Data Flow Diagrams (DFDs) - Visualizing System Processes
- Concept: DFDs visually map how information flows through a system or process. They help analyze efficiency, understand system components (inputs, outputs, processes, data stores), and plan improvements or new systems.
- Purpose:
- Identify inefficiencies and opportunities for improvement.
- Clearly document and communicate how a system works.
- Plan and design new processes or systems.
- Key Types:
- Logical DFD: Focuses on WHAT the system does (business activities, information being transmitted, entities involved). Non-technical, good for communication with stakeholders.
- Physical DFD: Focuses on HOW the system is implemented (specific software, hardware, files, people). More technical, aids in system development.
- Both can describe the same flow and provide a comprehensive view together.
- Levels of Detail (Increasing Granularity):
- Level 0 (Context Diagram): Most basic, broad overview. Shows the entire system as a single process node and its connections to external entities (data sources/destinations).
- Level 1 DFD: Breaks down the single process from Level 0 into its major sub-processes. Shows data flows between these sub-processes and major data stores.
- Level 2+ DFDs: Further decompose processes from Level 1 (or higher) into more detailed sub-processes. Level 3 is often detailed enough.
- Basic DFD Symbols/Elements:
- External Entity: Source or destination of data, outside the system (e.g., Customer, Admin). (Often a rectangle).
- Process: An activity or function that transforms or manipulates data (e.g., “Verify Payment”, “Update Inventory”). (Often a circle or rounded rectangle).
- Data Store: Where data is held or stored (e.g., “Customer Database”, “Product Catalog”). (Often two parallel lines or an open-ended rectangle).
- Data Flow: The path along which data moves between entities, processes, and data stores. (Often an arrow indicating direction).
- General Steps to Create a DFD:
- Identify major inputs to and outputs from the system.
- Create a Context Diagram (Level 0) showing the overall system and external entities.
- Expand to a Level 1 DFD by breaking down the main process into sub-processes, adding data stores and flows.
- Further expand to Level 2+ DFDs for necessary detail by decomposing sub-processes.
- Confirm the DFD’s accuracy and clarity.
B. Core Algorithm Concepts
- Algorithm: A step-by-step procedure for solving a problem or accomplishing a task.
- Recursive Algorithms (Ch 11, 21):
- Functions that call themselves.
- Require a base case (to stop recursion) and a recursive step (to move towards the base case).
- Think about how the problem breaks down into smaller, self-similar subproblems.
- Classes (Ch 20): Blueprints for creating objects. Understand how they encapsulate data (attributes) and behavior (methods).
- Key algorithm characteristics: correctness, finiteness (an algorithm must terminate), and efficiency.
- High-level design considerations: simplicity/clarity (ease of understanding, implementation, and maintenance), efficiency, resource usage, and scalability.
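The base case / recursive step split is easiest to see in a tiny example (factorial is my own illustration, not necessarily the course's):

```python
def factorial(n: int) -> int:
    """Recursive factorial: n! = n * (n-1)!"""
    if n <= 1:                       # base case: stops the recursion
        return 1
    return n * factorial(n - 1)      # recursive step: smaller subproblem
```

Each call shrinks `n` by one, so the recursion is guaranteed to reach the base case.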
C. Searching Algorithms
- Linear Search (Ch 11.1):
- Checks each element sequentially until a match is found or the list ends.
- Complexity: O(n) - worst/average, O(1) - best.
- Binary Search (Ch 11.2):
- Requires a sorted list (sometimes referred to as an "interval search").
- Repeatedly divides the search interval in half.
- Compares target to middle element: if match, found; if smaller, search left half; if larger, search right half.
- Complexity: O(log n) - worst/average, O(1) - best.
- Distinguish specific algorithms (e.g., Linear Search, Binary Search) from general algorithmic paradigms/strategies (e.g., Divide-and-Conquer, Greedy): a paradigm describes an approach, not a specific algorithm.
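The halving steps translate into a short function (a minimal sketch; the name `binary_search` is mine):

```python
def binary_search(sorted_items, target):
    """Return the index of target in a SORTED list, or -1 if absent. O(log n)."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid                # match: found
        elif target < sorted_items[mid]:
            high = mid - 1            # search left half
        else:
            low = mid + 1             # search right half
    return -1
```

Each iteration halves the interval, which is exactly where the O(log n) bound comes from.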
D. Sorting Algorithms (General - See “Applies Algorithms” for more detail)
- Understand the general purpose: arranging elements in a specific order.
E. Big O Notation & Efficiency Analysis (Ch 11.7; Big O Worksheet)
- Time Complexity: How the runtime of an algorithm scales with the input size (n).
- Cases:
- Best Case: Minimum time required.
- Average Case: Expected time for typical input.
- Worst Case: Maximum time required.
- Common Growth Rates (Fastest to Slowest):
- O(1): Constant (e.g., accessing an array element by index)
- O(log n): Logarithmic (e.g., binary search)
- O(n): Linear (e.g., linear search, iterating through a list)
- O(n log n): Log-linear (e.g., efficient sorting like Merge Sort, Heap Sort)
- O(n^2): Quadratic (e.g., nested loops like Selection Sort, Bubble Sort)
- O(2^n): Exponential (e.g., some recursive algorithms without optimization)
- O(n!): Factorial (e.g., traveling salesman brute force)
- O(n^n): Grows even faster than factorial
- Efficiency Analysis: Determining the Big O complexity of an algorithm. Drop constants and lower-order terms (e.g., O(n^n + 1) simplifies to O(n^n)). When comparing growth rates, simplify first: log(n^2) = 2 log n, so n log(n^2) is O(n log n); constants (like 37) and terms that approach zero (like 2/n) are dominated by logarithmic or linear terms.
⭐ KEY TIP: Pseudocode Practice
- Carefully trace pseudocode step-by-step (e.g., on a whiteboard).
- Track variable values through loops and conditional statements.
- Example provided:
  // 1st for loop (i)
  // 2nd for loop (j)
  // Display something based on i and j
- Trace i and j values meticulously.
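As a concrete stand-in for that sketch (the guide's pseudocode is only outlined, so this function is hypothetical), tracing two nested loops shows why the body executes n * n times:

```python
def trace_pairs(n):
    """Nested loops over i and j: the body runs n * n times -> O(n^2)."""
    steps = []
    for i in range(n):             # 1st for loop (i)
        for j in range(n):         # 2nd for loop (j)
            steps.append((i, j))   # record something based on i and j
    return steps
```

Writing out `trace_pairs(2)` by hand, the way the tip suggests, gives (0,0), (0,1), (1,0), (1,1).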
II. Determines Data Structure Impact (31% of assessment)
A. Implementation & Operations
- Array (Ch 1.1, 1.4, 1.5):
- Fixed-size, sequential, same-type elements. Access by index (0-based).
- Indexing: O(1)
- Linked List (singly & doubly) (Ch 2):
- Nodes with data and pointer(s) to next (and previous for doubly). Dynamic size.
- Singly: Node → Next Node
- Doubly: Prev Node ↔ Node ↔ Next Node
- Insertion/Deletion at ends: O(1) (if pointers to head/tail are maintained).
- Insertion/Deletion in middle/Search: O(n)
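A minimal singly linked list sketch (assuming head and tail pointers are maintained, as the O(1) end-insertion claim requires; class names are mine):

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None          # pointer to the next node

class SinglyLinkedList:
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, data):
        """O(1) insertion at the tail, because the tail pointer is maintained."""
        node = Node(data)
        if self.head is None:
            self.head = self.tail = node
        else:
            self.tail.next = node
            self.tail = node

    def to_list(self):
        """O(n) traversal from head to tail (no direct indexing)."""
        out, cur = [], self.head
        while cur is not None:
            out.append(cur.data)
            cur = cur.next
        return out
```

Note that reading element k still requires walking k nodes, which is why search and middle insertion are O(n).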
- Stack (LIFO - Last In, First Out) (Ch 3):
- `push()`: Add to top.
- `pop()`: Remove from top.
- `peek()`/`top()`: View top element.
- Queue (FIFO - First In, First Out) (Ch 3):
- `enqueue()`/`push()`: Add to rear/back.
- `dequeue()`/`pop()`: Remove from front. (Note: with a Python list, `pop(0)` is O(n); a proper queue dequeues in O(1).)
- `peek()`: View front element.
- Deque (Double-Ended Queue) (Ch 3):
- Add/remove from both ends: `append()`, `appendleft()`, `pop()`, `popleft()`.
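In Python, the standard-library `collections.deque` backs all three ADTs with O(1) appends and pops at both ends (a sketch, not a required implementation):

```python
from collections import deque

# Stack (LIFO): push and pop on the same end.
stack = deque()
stack.append('a')        # push
stack.append('b')        # push
top = stack.pop()        # 'b' comes off first: last in, first out

# Queue (FIFO): enqueue at the rear, dequeue at the front.
queue = deque()
queue.append(1)          # enqueue
queue.append(2)          # enqueue
front = queue.popleft()  # 1 comes off first: first in, first out, O(1)
```

Using `popleft()` here is what avoids the O(n) cost of `list.pop(0)` noted above.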
- Hash Table (Dictionary/Map) (Ch 4):
- Key-value pairs. Uses a hash function to map keys to array indices.
- Average Operations (insert, delete, search): O(1)
- Worst Case (due to collisions): O(n)
- Hashing: Process of converting key to index.
- Chaining: Collision resolution: store colliding items in a list at the index (typically a singly linked list for simplicity; a doubly linked list allows O(1) deletion of a known node mid-chain, at the cost of extra overhead).
- Hash Key: The output of the hash function (the index).
- Modulo Operator (`%`): Often used in hash functions (e.g., `key % table_size`).
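A toy chained hash table (names and sizes are my own illustration) shows hashing, the modulo step, and chaining working together:

```python
class ChainedHashTable:
    """Tiny hash table using chaining: each slot holds a list of (key, value)."""

    def __init__(self, size=8):
        self.size = size
        self.slots = [[] for _ in range(size)]

    def _index(self, key):
        return hash(key) % self.size          # modulo maps the hash to a slot

    def put(self, key, value):
        chain = self.slots[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)       # update an existing key
                return
        chain.append((key, value))            # collision -> same chain

    def get(self, key):
        for k, v in self.slots[self._index(key)]:
            if k == key:
                return v                      # average O(1), worst O(n)
        raise KeyError(key)
```

With a deliberately tiny `size`, collisions are frequent, which makes the chaining behavior easy to observe.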
- Trees (Ch 4, 5.7):
- Hierarchical structure. Nodes connected by edges.
- Root: Topmost node.
- Leaf Nodes: Nodes with no children.
- Height: Length of the longest path from root to a leaf.
- Traversal:
- Inorder (LNR): Left subtree → Node → Right subtree (useful for BSTs to get sorted order).
- Preorder (NLR): Node → Left subtree → Right subtree (useful for copying trees).
- Postorder (LRN): Left subtree → Right subtree → Node (useful for deleting nodes).
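The three traversal orders can be written almost directly from their definitions (a minimal sketch; the node class and tree are my own example):

```python
class TreeNode:
    """A binary tree node: a value plus left/right child pointers."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(node):    # LNR: sorted order for a BST
    return inorder(node.left) + [node.value] + inorder(node.right) if node else []

def preorder(node):   # NLR: node first, useful for copying a tree
    return [node.value] + preorder(node.left) + preorder(node.right) if node else []

def postorder(node):  # LRN: children before parent, a safe deletion order
    return postorder(node.left) + postorder(node.right) + [node.value] if node else []

# A small BST:   2
#               / \
#              1   3
root = TreeNode(2, TreeNode(1), TreeNode(3))
```

On this BST, inorder yields the values in sorted order, as the note above claims.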
- Heaps (Priority Queue Implementation) (Ch 7):
- Specialized tree (usually binary), often stored compactly in an array. Heap Sort is built on this structure.
- Min-Heap: Parent <= Children. Smallest element at root.
- Max-Heap: Parent >= Children. Largest element at root.
- Operations: Insert O(log n), Delete Min/Max O(log n).
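Python's standard-library `heapq` module maintains a min-heap inside a plain list, which makes the O(log n) operations easy to try:

```python
import heapq

h = []                        # heapq treats this list as a MIN-heap
for x in [5, 1, 4, 2]:
    heapq.heappush(h, x)      # O(log n) insert, sifts the value up

smallest = heapq.heappop(h)   # O(log n) delete-min: always removes the root
```

After the pop, the heap reorganizes so the next-smallest element becomes the new root.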
- Sets (Ch 8):
- Unordered collection of unique elements.
- Intersection: Elements common to two sets.
- Graph (Ch 9):
- Nodes (vertices) connected by edges.
- Vertices: The nodes.
- Edges: Connections between vertices.
- Adjacency: Two vertices are adjacent if connected by an edge.
- Weighted: Edges have values/costs.
- Directed: Edges have a direction (A → B != B → A).
- Undirected: Edges have no direction (A - B == B - A).
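A common graph representation is an adjacency list; this sketch (my own naming) stores an undirected weighted graph as a dict of dicts:

```python
# Undirected, weighted graph as an adjacency list (dict of dicts).
graph = {}

def add_edge(g, u, v, weight=1):
    """Undirected: store the edge in both directions (A - B == B - A)."""
    g.setdefault(u, {})[v] = weight
    g.setdefault(v, {})[u] = weight

def adjacent(g, u, v):
    """Two vertices are adjacent if an edge connects them."""
    return v in g.get(u, {})

add_edge(graph, 'A', 'B', 3)   # weighted edge A-B with cost 3
add_edge(graph, 'B', 'C', 1)
```

For a directed graph, the sketch would store the edge in only one direction (A → B without B → A).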
⭐ KEY TIP: .pop() Behavior
- Stack: Removes and returns the top element.
- Queue (if `pop()` is its dequeue method): Removes and returns the front element.
- Priority Queue (heap): Removes and returns the element with the highest priority (min/max).
B. Design of Data Structures
- ADT (Abstract Data Type) Characteristics (Ch 1.5, 1.6):
- Defines a set of operations (what it does) without specifying implementation (how it does it).
- Focus on behavior, not internal structure.
- Dictionary (Ch 1.5, 14.5): ADT for key-value stores (often implemented with Hash Tables).
- Lists (Ch 2): Ordered collection of items.
⭐ KEY TIP: Design Properties
- Comparison: How elements are ordered/compared.
- Insertion/Deletion: How elements are added/removed, and associated costs.
- Indexing: Can elements be accessed directly by position? (Arrays: Yes; Linked Lists: No, requires traversal).
- Hierarchy: Is there a parent-child relationship? (Trees, Heaps: Yes; Arrays, Lists: No).
III. Applies Algorithms (40% of assessment)
A. Programming Fundamentals
- Variable Declaration (dynamic vs. static) (Ch 18.3):
- Static: Type fixed at compile time.
- Dynamic: Type checked at runtime.
- Assignment Operators (`=`) vs. Comparison (`==`) (Ch 13.1)
- Types / List Basics (Ch 14.2)
- Order of Operations / Precedence (Ch 13 & 15): PEMDAS/BODMAS.
- Loops (for, while) (Ch 17): Iteration.
- Conditional Statements / Branching (if, else if, else) (Ch 15): Decision making.
- Classes (Ch 20): (As above)
B. Algorithm Application & Analysis
- Linear Search (Ch 11.1): (As above)
- Binary Search (Ch 11.2): (As above, remember SORTED DATA requirement)
- Big O Growth Rates (Ch 11.7): (As above - O(1), O(log n), O(n), O(n log n), O(n^2))
- Time Complexity, Worst Case, Efficiency Analysis (Ch 11.9): (As above)
- Methods for List (Ch 2.1): `append`, `pop`, `insert`, `remove`, `len`, etc.
- Stack Operations (Ch 3.1): `push`, `pop`, `peek`.
- Deque Operations (Ch 3.8): `append`, `appendleft`, `pop`, `popleft`, `peek`, etc.
C. Sorting Algorithms (Ch 11) - Know Identification & Complexities!
| Algorithm | Best Case | Average Case | Worst Case | Key Idea for Identification |
|---|---|---|---|---|
| Selection Sort | O(n^2) | O(n^2) | O(n^2) | Finds min/max, swaps to front/end, repeats for rest. |
| Insertion Sort | O(n) | O(n^2) | O(n^2) | Builds sorted portion by inserting elements one by one. |
| Bubble Sort | O(n) | O(n^2) | O(n^2) | Repeatedly swaps adjacent elements if out of order. |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | Divide & Conquer: Splits list, sorts halves, merges. |
| Quick Sort | O(n log n) | O(n log n) | O(n^2) | Divide & Conquer: Uses a “pivot” to partition array. |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | Uses a heap data structure. |
| Radix Sort | O(nk) | O(nk) | O(nk) | Sorts by individual digits/characters (k = #digits/length). |
| Bucket Sort | O(n+k) | O(n+k) | O(n^2) | Distributes elements into “buckets”, sorts buckets. |
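The Merge Sort row of the table can be made concrete with a short sketch (my own minimal version, not the course's reference code):

```python
def merge_sort(items):
    """Divide & conquer: split, sort halves recursively, then merge. O(n log n)."""
    if len(items) <= 1:
        return list(items)                 # base case: 0 or 1 elements is sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])         # recursive split: sort each half
    right = merge_sort(items[mid:])
    merged = []                            # merge step: interleave two sorted lists
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append whichever half has leftovers
```

The recursive split is log n levels deep and each level does O(n) merging work, which gives the O(n log n) bound in every case.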
⭐ KEY TIP: Sorting Algorithm Identification
- Bubble: Look for adjacent swaps in nested loops, “bubbling” largest/smallest to one end.
- Bucket: Look for distribution into multiple sub-lists (“buckets”) then sorting those.
- Merge: Look for recursive splitting of the list in half and a merge step.
- Quick: Look for keywords “pivot”, “partition”, “split”.
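The "adjacent swaps in nested loops" cue looks like this in practice (a minimal sketch; the early-exit flag is what produces Bubble Sort's O(n) best case):

```python
def bubble_sort(items):
    """Identification cues: nested loops plus adjacent swaps ('bubbling')."""
    a = list(items)
    n = len(a)
    for i in range(n):
        swapped = False
        for j in range(n - 1 - i):             # largest so far bubbles to the end
            if a[j] > a[j + 1]:                # adjacent elements out of order
                a[j], a[j + 1] = a[j + 1], a[j]  # swap them
                swapped = True
        if not swapped:
            break    # no swaps in a full pass -> already sorted: O(n) best case
    return a
```

Seeing `a[j]` compared with `a[j + 1]` inside nested loops is the giveaway on identification questions.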
IV. General Knowledge & Definitions
A. Key Data Types
- Boolean: `True`/`False`.
- Int: Whole numbers.
- Char: Single character (e.g., ‘a’).
- Float/Double: Numbers with decimal points.
- String: Sequence of characters.
B. Core Programming Concepts
- Assignment (`=`) vs. Comparison (`==`): Assign value vs. check equality.
- Garbage Collection: Automatic memory management; reclaims unused memory.
- Reference Count: Tracks how many references point to an object (if 0, can be garbage collected).
- Memory Allocation:
- Sequential: Contiguous block (e.g., arrays).
- Linked: Nodes with pointers (e.g., linked lists).
- Pointer: Variable storing a memory address.
- Null/None: Represents absence of a value or a non-pointing reference.
- Object Constructor: Special method to initialize an object when created.
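In Python, the constructor is the `__init__` method; a hypothetical two-attribute class shows it running at creation time:

```python
class Point:
    def __init__(self, x, y):   # constructor: runs automatically on Point(...)
        self.x = x              # initialize the object's attributes
        self.y = y

p = Point(2, 3)                 # construction invokes __init__ with x=2, y=3
```

The attributes set inside `__init__` are the "data" half of the class; its methods would be the "behavior" half.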
C. ADT Quick Summary (Focus on Function & Distinctions)
| ADT | Function | Distinctions / Principle | Underlying (Common) | Key Ops (Python-like) |
|---|---|---|---|---|
| List/Array | Ordered collection | Indexed | Dynamic Array | [], append(), pop() |
| Tuple | Ordered, immutable collection | Indexed, unchangeable | Dynamic Array | (), indexing |
| Stack | Collection, LIFO | Add/remove from top | List/Linked List | append() (push), pop() (pop) |
| Queue | Collection, FIFO | Add rear, remove front | Linked List | append() (enqueue), pop(0) (dequeue) |
| Deque | Collection, add/remove from both ends | Double-ended | Linked List | append(), appendleft(), pop(), popleft() |
| Set | Unordered collection of unique elements | No duplicates | Hash Table | set(), add(), remove(), intersection() |
| Dictionary (Map) | Collection of key-value pairs | Unique keys map to vals | Hash Table | {}, d[key]=val, d[key], keys(), values() |
| Priority Queue | Collection, elements have priority | Ordered by priority | Heap | heappush(), heappop() (removes highest priority) |
Final Review Tips:
- Review website links from Chapter 1.
- Focus on understanding the WHY behind choosing a particular data structure or algorithm for a given problem (efficiency, properties).
- Practice, practice, practice reading and tracing pseudocode! This is highlighted as the hardest section.