C++ Memory Management Deep Dive Quiz


40 in-depth questions covering advanced C++ memory management: custom allocators, memory pools, RAII patterns, and cache-friendly programming, with code examples throughout to solidify understanding.

40 Questions
~80 minutes

Question 1

What is the std::allocator interface in C++?

cpp
template <typename T>
class MyAllocator {
public:
    using value_type = T;
    
    T* allocate(std::size_t n) {
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }
    
    void deallocate(T* p, std::size_t n) {
        ::operator delete(p);
    }
    
    // Other required methods...
};
A
Interface defining allocate/deallocate methods for custom memory management, used by STL containers to abstract memory allocation strategies
B
Interface for creating objects
C
Interface for destroying objects
D
Interface for copying objects

Question 2

What is RAII (Resource Acquisition Is Initialization)?

cpp
class FileHandle {
    FILE* file;
public:
    FileHandle(const char* filename) : file(fopen(filename, "r")) {
        if (!file) throw std::runtime_error("Failed to open file");
    }
    
    ~FileHandle() { if (file) fclose(file); }
    
    FILE* get() { return file; }
    
    // Prevent copying
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;
};
A
Programming idiom where resource acquisition occurs in constructor and release in destructor, ensuring exception-safe resource management
B
Programming without resource management
C
Programming with manual resource cleanup
D
Programming that leaks resources

Question 3

What is a memory pool allocator?

cpp
class MemoryPool {
    struct Block {
        Block* next;
    };
    
    Block* free_list = nullptr;
    char* pool;
    size_t block_size;
    
public:
    MemoryPool(size_t pool_size, size_t blk_size)
        : pool(new char[pool_size]), block_size(blk_size) { // init order matches declarations
        // Thread a free list through every full block
        for (size_t i = 0; i + blk_size <= pool_size; i += blk_size) {
            Block* block = reinterpret_cast<Block*>(&pool[i]);
            block->next = free_list;
            free_list = block;
        }
    }
    
    ~MemoryPool() { delete[] pool; }
    
    void* allocate() {
        if (!free_list) return nullptr;
        Block* block = free_list;
        free_list = block->next;
        return block;
    }
    
    void deallocate(void* ptr) {
        Block* block = static_cast<Block*>(ptr);
        block->next = free_list;
        free_list = block;
    }
};
A
Allocator that pre-allocates large memory block and divides into fixed-size chunks, maintaining free list for constant-time allocation/deallocation
B
Allocator that allocates one object at a time
C
Allocator that never deallocates
D
Allocator that allocates variable sizes

Question 4

What is cache alignment and why is it important?

cpp
struct alignas(64) CacheAlignedData {
    int data;
    // Padding ensures struct doesn't share cache lines
};

// Without alignment:
struct UnalignedData {
    char c;
    int data; // May share cache line with adjacent objects
};
A
Ensuring data structures align to cache line boundaries to prevent false sharing and improve memory access performance in multi-threaded applications
B
Making data structures smaller
C
Ignoring memory layout
D
Creating memory fragmentation

Question 5

What is the difference between internal and external memory fragmentation?

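For illustration, a minimal sketch (the fixed 64-byte block size below is an assumed policy, not any specific allocator's):

cpp
#include <cstddef>
#include <iostream>

// Hypothetical fixed-size-block policy: every request is rounded up to 64 bytes.
constexpr std::size_t kBlock = 64;

std::size_t internal_waste(std::size_t requested) {
    std::size_t remainder = requested % kBlock;
    return remainder == 0 ? 0 : kBlock - remainder; // bytes wasted inside the block
}

int main() {
    // Internal fragmentation: 40 bytes requested, 64 handed out -> 24 bytes wasted.
    std::cout << internal_waste(40) << " bytes of internal waste\n";

    // External fragmentation (conceptual): after freeing every other 64-byte block
    // of a full pool, half the pool is free, yet a 128-byte request still fails
    // because no two free blocks are adjacent.
    return 0;
}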
A
Internal fragmentation is wasted space within allocated blocks due to fixed allocation sizes, external fragmentation is wasted space between allocated blocks preventing large allocations
B
They are identical concepts
C
Internal fragmentation occurs between blocks
D
External fragmentation occurs within blocks

Question 6

What is a custom deleter in smart pointers?

cpp
auto file_deleter = [](FILE* f) { if (f) fclose(f); };

std::unique_ptr<FILE, decltype(file_deleter)> file(
    fopen("data.txt", "r"), file_deleter);

// Or with custom class
auto cuda_deleter = [](float* ptr) { cudaFree(ptr); };
std::unique_ptr<float, decltype(cuda_deleter)> gpu_mem(nullptr, cuda_deleter);
A
Callable object that defines how smart pointer cleans up managed resource, enabling custom cleanup logic for non-standard resources
B
Object that prevents cleanup
C
Object that creates resources
D
Object that copies resources

Question 7

What is an arena allocator?

cpp
class ArenaAllocator {
    char* buffer;
    size_t capacity;
    size_t offset = 0;
    
public:
    ArenaAllocator(size_t cap) : buffer(new char[cap]), capacity(cap) {}
    
    void* allocate(size_t size) {
        if (offset + size > capacity) return nullptr;
        void* ptr = &buffer[offset];
        offset += size;
        return ptr;
    }
    
    // No individual deallocation - all freed at once
    void reset() { offset = 0; }
    
    ~ArenaAllocator() { delete[] buffer; }
};
A
Allocator that allocates sequentially from large buffer with no individual deallocation, providing fast allocation and bulk deallocation
B
Allocator that allocates randomly
C
Allocator with slow allocation
D
Allocator that deallocates individually

Question 8

What is the rule of three/five/zero in C++?

cpp
// Rule of Zero: Use smart pointers/default operations
class RuleOfZero {
    std::unique_ptr<int> data;
    // Compiler generates correct copy/move operations
};

// Rule of Three: Manual copy constructor, assignment, destructor
class RuleOfThree {
    int* data;
public:
    RuleOfThree() : data(new int[100]) {}
    ~RuleOfThree() { delete[] data; }
    RuleOfThree(const RuleOfThree& other) : data(new int[100]) {
        std::copy(other.data, other.data + 100, data);
    }
    RuleOfThree& operator=(const RuleOfThree& other) {
        if (this != &other) {
            delete[] data;
            data = new int[100];
            std::copy(other.data, other.data + 100, data);
        }
        return *this;
    }
};
A
Guidelines for implementing copy constructor, copy assignment, and destructor (rule of three), plus move operations (rule of five), or using smart pointers to avoid manual implementation (rule of zero)
B
Rules for creating objects
C
Rules for destroying objects
D
Rules for copying objects only

Question 9

What is false sharing in multi-threaded applications?

cpp
struct BadLayout {
    std::atomic<int> counter1; // Cache line 1
    std::atomic<int> counter2; // Still cache line 1 - false sharing!
};

struct GoodLayout {
    std::atomic<int> counter1; // Cache line 1
    char padding[60];         // Fill cache line
    std::atomic<int> counter2; // Cache line 2 - no false sharing
};
A
Performance degradation when multiple threads modify variables in same cache line, causing expensive cache invalidation despite no true data sharing
B
Sharing that improves performance
C
Sharing that prevents cache usage
D
Sharing that eliminates cache lines

Question 10

What is a placement new operator?

cpp
char buffer[sizeof(MyClass)];

// Construct object in existing memory
MyClass* obj = new (buffer) MyClass(args);

// Use object...
obj->~MyClass(); // Manual destruction

// Buffer can be reused or freed
// No delete - memory wasn't allocated by new
A
Operator that constructs object in pre-allocated memory without allocating new memory, requiring manual destruction and careful memory management
B
Operator that allocates new memory
C
Operator that destroys objects
D
Operator that copies objects

Question 11

What is memory ownership and transfer of ownership?

cpp
class DataOwner {
    std::unique_ptr<int[]> data;
public:
    DataOwner(size_t size) : data(new int[size]) {}
    
    // Transfer ownership
    std::unique_ptr<int[]> release() {
        return std::move(data);
    }
    
    // Convert to shared ownership (this object gives up the buffer)
    std::shared_ptr<int[]> share() {
        return std::shared_ptr<int[]>(std::move(data));
    }
};
A
Concept where specific code entity is responsible for resource lifetime, with ownership transfer moving responsibility between scopes or objects
B
Concept where no one owns memory
C
Concept where everyone owns memory
D
Concept where memory is copied

Question 12

What is cache-friendly data structure design?

cpp
// Cache-friendly: Structure of Arrays (SoA)
struct ParticlesSoA {
    std::vector<float> x, y, z;     // Contiguous memory
    std::vector<float> vx, vy, vz;
};

// Less cache-friendly for field-wise passes: Array of Structures (AoS)
struct Particle {
    float x, y, z, vx, vy, vz;
};
std::vector<Particle> particlesAoS; // Reading only x still drags whole Particles through cache
A
Designing data structures to maximize spatial locality and minimize cache misses through contiguous memory layout and linear access patterns
B
Designing data structures randomly
C
Designing data structures with gaps
D
Designing data structures for slow access

Question 13

What is the difference between std::make_unique and std::make_shared?

cpp
// make_unique: single allocation for the object (unique_ptr has no control block)
auto unique = std::make_unique<Widget>(args); // 1 allocation

// make_shared: single allocation for both object and control block
auto shared = std::make_shared<Widget>(args); // 1 allocation

// Manual shared_ptr construction: 2 allocations!
auto manual_shared = std::shared_ptr<Widget>(new Widget(args));
A
make_unique creates unique_ptr with single allocation, make_shared creates shared_ptr with optimized single allocation for object and reference counting control block
B
They are identical functions
C
make_shared creates unique_ptr
D
make_unique creates shared_ptr

Question 14

What is memory compaction or defragmentation?

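A toy compaction pass over a handle-indexed pool, shown only as a sketch; real compacting allocators and moving garbage collectors use similar indirection (handles or GC metadata) so that moved objects can still be found:

cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Objects are reached through slots (handles), so the compactor may move the
// underlying bytes and only has to patch the slot table, not raw pointers.
struct Slot { std::size_t offset; std::size_t size; bool live; };

void compact(std::vector<char>& heap, std::vector<Slot>& slots) {
    std::size_t write = 0;
    for (Slot& s : slots) {
        if (!s.live) continue;                                    // dead objects are dropped
        if (s.offset != write)
            std::memmove(&heap[write], &heap[s.offset], s.size);  // slide live data left
        s.offset = write;                                         // update the "pointer"
        write += s.size;
    }
    // [write, heap.size()) is now one contiguous free region
}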
A
Process of reorganizing allocated memory to eliminate external fragmentation by moving objects and updating pointers to create contiguous free memory
B
Process of creating more fragmentation
C
Process of allocating more memory
D
Process of ignoring memory layout

Question 15

What is a bump pointer allocator?

cpp
class BumpAllocator {
    char* start;
    char* current;
    char* end;
    
public:
    BumpAllocator(void* memory, size_t size) 
        : start(static_cast<char*>(memory)), 
          current(start), 
          end(start + size) {}
    
    void* allocate(size_t size, size_t alignment = alignof(std::max_align_t)) {
        size_t space = static_cast<size_t>(end - current);
        void* ptr = current;
        if (!std::align(alignment, size, ptr, space)) return nullptr; // not enough room
        current = static_cast<char*>(ptr) + size; // "bump" past the new object
        return ptr;
    }
    
    void reset() { current = start; }
};
A
Simple allocator that maintains single pointer to next free memory location, providing extremely fast allocation by just incrementing pointer
B
Allocator that searches for free memory
C
Allocator that is very slow
D
Allocator that deallocates individually

Question 16

What is the difference between stack and heap allocation?

cpp
// Stack allocation: automatic, fast, limited scope
void func() {
    int stack_var = 42;        // Stack allocated
    MyClass obj(args);         // Stack allocated
} // Automatically destroyed

// Heap allocation: manual, slower, flexible lifetime
void func2() {
    int* heap_var = new int(42);    // Heap allocated
    MyClass* obj = new MyClass(args); // Heap allocated
    
    delete heap_var; // Manual cleanup required
    delete obj;
}
A
Stack allocation is automatic with function scope and fast but limited size, heap allocation is manual with flexible lifetime but slower with allocation overhead
B
They are identical allocation methods
C
Stack allocation is manual
D
Heap allocation is automatic

Question 17

What is reference counting in smart pointers?

cpp
std::shared_ptr<int> ptr1 = std::make_shared<int>(42);
// Reference count = 1

std::shared_ptr<int> ptr2 = ptr1; // Copy
// Reference count = 2

ptr1.reset(); // ptr1 no longer owns
// Reference count = 1

ptr2.reset(); // ptr2 no longer owns
// Reference count = 0 -> object destroyed
A
Mechanism tracking number of shared_ptr instances owning object, automatically destroying object when count reaches zero
B
Mechanism that prevents destruction
C
Mechanism that creates objects
D
Mechanism that copies objects

Question 18

What is memory-mapped I/O and its relation to memory management?

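A POSIX-specific sketch (mmap is an OS facility, not standard C++; error handling is abbreviated):

cpp
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

std::size_t count_newlines(const char* path) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return 0;
    struct stat st{};
    fstat(fd, &st);

    // Map the whole file; pages are faulted in lazily as they are touched.
    void* mapping = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd); // the mapping remains valid after closing the descriptor
    if (mapping == MAP_FAILED) return 0;

    const char* data = static_cast<const char*>(mapping);
    std::size_t lines = 0;
    for (off_t i = 0; i < st.st_size; ++i)   // plain pointer reads, no read() calls
        if (data[i] == '\n') ++lines;

    munmap(mapping, st.st_size);
    return lines;
}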
A
Technique mapping file contents directly into virtual memory address space, allowing file access through normal pointer operations without explicit read/write calls
B
Technique that copies files to memory
C
Technique that prevents file access
D
Technique that destroys files

Question 19

What is the difference between weak_ptr and shared_ptr?

cpp
std::shared_ptr<int> shared = std::make_shared<int>(42);
std::weak_ptr<int> weak = shared; // Doesn't increase ref count

if (auto locked = weak.lock()) { // Try to get shared_ptr
    // Use *locked - object still exists
} else {
    // Object was destroyed
}

// weak_ptr doesn't prevent destruction
shared.reset(); // Object destroyed here
A
weak_ptr observes shared_ptr without owning object or increasing reference count, providing safe access without preventing destruction
B
They are identical pointer types
C
weak_ptr owns objects
D
shared_ptr doesn't own objects

Question 20

What is cache prefetching and its impact on memory management?

cpp
// Software prefetching hint
void process_array(int* data, size_t size) {
    for (size_t i = 0; i < size; ++i) {
        if (i + 16 < size)
            __builtin_prefetch(&data[i + 16]); // Prefetch a future cache line (GCC/Clang builtin)
        process(data[i]);
    }
}
A
Technique loading data into cache before it's needed, reducing memory access latency by hiding fetch time behind computation
B
Technique that slows down memory access
C
Technique that prevents caching
D
Technique that ignores memory access

Question 21

What is the purpose of std::allocator_traits?

cpp
template <typename Alloc>
void use_allocator(Alloc& alloc) { // allocate() requires a non-const allocator
    using Traits = std::allocator_traits<Alloc>;
    
    // Get pointer and value types
    using pointer = typename Traits::pointer;
    using value_type = typename Traits::value_type;
    
    // Allocate space for 10 elements
    pointer ptr = Traits::allocate(alloc, 10);
    
    // Construct an object in the first slot
    Traits::construct(alloc, ptr, value_type{});
    
    // Destroy and deallocate
    Traits::destroy(alloc, ptr);
    Traits::deallocate(alloc, ptr, 10);
}
A
Template providing uniform interface to different allocator types, defining pointer types and standardized allocate/construct/destroy/deallocate operations
B
Template that creates allocators
C
Template that destroys allocators
D
Template that copies allocators

Question 22

What is memory overcommitment and its implications?

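A small demonstration of the idea; whether the reservation below succeeds depends on the OS and its overcommit policy (the 64 GiB figure is an arbitrary example):

cpp
#include <cstddef>
#include <cstring>
#include <iostream>
#include <memory>
#include <new>

int main() {
    // Ask for a large virtual reservation. Under overcommit this can succeed
    // even when it exceeds physical RAM, because no physical pages are
    // assigned until the memory is actually touched.
    constexpr std::size_t kSize = 64ull << 30; // 64 GiB of address space
    std::unique_ptr<char[]> big(new (std::nothrow) char[kSize]);
    if (!big) { std::cout << "reservation refused by the OS\n"; return 0; }

    // Touching pages is what consumes physical memory; overcommitting too far
    // can later trigger the OOM killer instead of a failed allocation.
    std::memset(big.get(), 0, 4096); // commit only the first page
    std::cout << "reserved " << (kSize >> 30) << " GiB, committed one page\n";
}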
A
OS technique allowing programs to allocate more virtual memory than physical RAM, relying on sparse allocation and paging to handle actual memory usage
B
Technique that prevents memory allocation
C
Technique that allocates physical memory immediately
D
Technique that destroys memory

Question 23

What is the difference between std::unique_ptr and raw pointers?

cpp
void func() {
    int* raw = new int(42);
    // Manual cleanup required
    delete raw;
}

void func2() {
    std::unique_ptr<int> smart(new int(42));
    // Automatic cleanup when goes out of scope
} // Automatically deleted
A
unique_ptr provides automatic resource management with RAII semantics, raw pointers require manual memory management and are prone to leaks
B
They are identical pointer types
C
raw pointers provide automatic management
D
unique_ptr requires manual management

Question 24

What is NUMA (Non-Uniform Memory Access) awareness?

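A Linux-specific sketch using libnuma (assumes the library is installed and linked with -lnuma; error handling omitted):

cpp
#include <cstddef>
#include <numa.h>

// Place a worker's buffer on the NUMA node it runs on, so its accesses stay
// local instead of crossing the interconnect to a remote node's memory.
void* allocate_local_buffer(std::size_t bytes, int node) {
    if (numa_available() < 0) return nullptr;  // no NUMA support on this system
    return numa_alloc_onnode(bytes, node);     // pages are placed on 'node' when touched
}

void free_local_buffer(void* p, std::size_t bytes) {
    if (p) numa_free(p, bytes);
}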
A
Memory architecture where memory access time depends on physical location relative to processing core, requiring thread-to-memory affinity for optimal performance
B
Memory architecture with uniform access
C
Memory architecture without cores
D
Memory architecture without memory

Question 25

What is the purpose of std::pmr (Polymorphic Memory Resources)?

cpp
#include <memory_resource>

std::pmr::monotonic_buffer_resource buffer(1024);
std::pmr::vector<int> vec(&buffer); // Uses arena allocation

// Different containers can share same memory resource
std::pmr::string str(&buffer);
std::pmr::unordered_map<int, int> map(&buffer);
A
Framework allowing runtime selection of memory allocation strategy through polymorphic allocators, enabling containers to use different memory resources
B
Framework that prevents memory allocation
C
Framework that creates memory
D
Framework that destroys memory

Question 26

What is memory leak detection and prevention?

cpp
// RAII prevents leaks
auto resource = std::make_unique<Resource>();

// Smart pointers prevent leaks
std::shared_ptr<Data> shared = std::make_shared<Data>();

// Tools like Valgrind/ASan detect leaks
// Prevention: RAII, smart pointers, careful ownership
A
Using RAII patterns and smart pointers to ensure automatic cleanup, with tools like Valgrind detecting unfreed memory in long-running applications
B
Creating memory leaks intentionally
C
Preventing memory allocation
D
Ignoring memory management

Question 27

What is the difference between internal and external padding in data structures?

cpp
struct InternalPadding {
    char c;     // 1 byte
    // 3 bytes padding for alignment
    int i;      // 4 bytes
    // Total: 8 bytes
};

struct ExternalPadding {
    InternalPadding data;
    // Add external padding to avoid false sharing
    char padding[56]; // Fill cache line
};
A
Internal padding fills gaps within structures for alignment, external padding adds space between structures to prevent cache line sharing
B
They are identical padding types
C
Internal padding prevents alignment
D
External padding fills internal gaps

Question 28

What is garbage collection vs manual memory management?

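C++ has no built-in garbage collector, but the key contrast, deterministic scope-driven reclamation versus collector-driven reclamation at some later point, shows up in destruction timing:

cpp
#include <iostream>
#include <memory>

struct Connection {
    ~Connection() { std::cout << "closed\n"; } // runs at a known point
};

void handle_request() {
    auto conn = std::make_unique<Connection>();
    // ... use conn ...
}   // "closed" prints here, immediately and deterministically.
    // Under a tracing GC, the object would be reclaimed whenever the
    // collector next runs, with no guaranteed destruction point.

int main() { handle_request(); }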
A
Garbage collection automatically reclaims unreachable memory with runtime overhead, manual management requires explicit deallocation but provides precise control and no runtime pauses
B
They are identical approaches
C
Garbage collection requires manual deallocation
D
Manual management has runtime overhead

Question 29

What is memory pool fragmentation and how to avoid it?

cpp
class FragmentedPool {
    std::vector<void*> free_blocks;
    // allocate() removes from free_blocks
    // deallocate() adds to free_blocks
    // Problem: scattered free blocks create fragmentation
};

class DefragmentedPool {
    std::deque<bool> used; // Track contiguous usage
    char* pool;
    // allocate() finds contiguous free region
    // Avoids fragmentation by maintaining contiguous free space
};
A
Fragmentation where free blocks become scattered preventing large allocations, avoided by maintaining contiguous free memory regions and compaction
B
Fragmentation that improves allocation
C
Fragmentation that prevents small allocations
D
Fragmentation that creates contiguous memory

Question 30

What is the purpose of alignas specifier?

cpp
struct alignas(64) CacheLineAligned {
    int data;
    // Ensures structure starts at 64-byte boundary
};

// Runtime alignment (size must be a multiple of the alignment)
void* aligned_ptr = std::aligned_alloc(64, size);

// Check alignment
bool is_aligned = (reinterpret_cast<std::uintptr_t>(aligned_ptr) % 64) == 0;
A
Specifier controlling minimum alignment of types or variables to optimize memory access patterns and prevent false sharing
B
Specifier that reduces alignment
C
Specifier that ignores alignment
D
Specifier that creates misalignment

Question 31

What is the difference between malloc/free and new/delete?

cpp
// C-style: no construction/destruction
int* arr = (int*)malloc(10 * sizeof(int));
free(arr); // No destructor calls

// C++-style: calls constructors/destructors
std::string* arr2 = new std::string[10];  // Default-constructs 10 strings
double* arr3 = new double[10]{1.0, 2.0};  // First two initialized, rest zero-initialized
delete[] arr2; // Calls destructors, then frees the memory
delete[] arr3;
A
new/delete call constructors/destructors and handle types properly, malloc/free only allocate/deallocate raw memory without object lifecycle management
B
They are identical allocation functions
C
malloc/free call constructors
D
new/delete only allocate memory

Question 32

What are the performance characteristics of memory-mapped file I/O?

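A POSIX-specific sketch of tuning those characteristics with access-pattern hints on a region obtained from mmap (behavior varies by OS):

cpp
#include <cstddef>
#include <sys/mman.h>

// Access-pattern hints let the kernel tune page-cache readahead for a mapping.
void hint_sequential(void* addr, std::size_t length) {
    madvise(addr, length, MADV_SEQUENTIAL); // read ahead aggressively, drop pages behind
}

void hint_random(void* addr, std::size_t length) {
    madvise(addr, length, MADV_RANDOM);     // disable readahead for random access
}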
A
Provides virtual memory interface to files with lazy loading, page-level access granularity, and OS-managed caching for efficient large file processing
B
Provides slow file access
C
Provides immediate file loading
D
Provides no caching

Question 33

What is the purpose of std::weak_ptr::expired()?

cpp
std::weak_ptr<int> weak;
{
    auto shared = std::make_shared<int>(42);
    weak = shared;
    
    std::cout << weak.expired(); // false - object exists
}

std::cout << weak.expired(); // true - object destroyed
A
Method checking if observed shared_ptr object has been destroyed without attempting to access it, enabling safe existence checking
B
Method that destroys objects
C
Method that creates objects
D
Method that locks objects

Question 34

What is slab allocation?

cpp
class SlabAllocator {
    struct Slab {
        char* memory;
        std::vector<bool> used;
        size_t object_size;
    };
    
    std::vector<Slab> slabs;
    
    void* allocate(size_t size) {
        // Find slab with free slot or create new slab
        for (auto& slab : slabs) {
            if (slab.object_size == size) {
                auto free_slot = std::find(slab.used.begin(), slab.used.end(), false);
                if (free_slot != slab.used.end()) {
                    *free_slot = true;
                    return &slab.memory[std::distance(slab.used.begin(), free_slot) * size];
                }
            }
        }
        // No slab had room: create a new slab for this size (omitted here)
        return nullptr;
    }
};
A
Memory allocation strategy organizing memory into slabs of uniform object sizes, providing efficient allocation/deallocation for objects of same size
B
Allocation strategy for variable sizes
C
Allocation strategy that wastes memory
D
Allocation strategy without organization

Question 35

What are the thread-safety guarantees of std::shared_ptr?

cpp
std::shared_ptr<int> ptr = std::make_shared<int>(42);

// Thread-safe: reference counting
std::thread t1([ptr]() { auto p = ptr; }); // OK
std::thread t2([ptr]() { auto p = ptr; }); // OK

// Not thread-safe: pointed-to object
std::thread t3([ptr]() { *ptr = 1; }); // Race condition!
std::thread t4([ptr]() { *ptr = 2; }); // Race condition!
A
Reference counting operations are thread-safe but access to managed object is not, requiring external synchronization for object modification
B
All operations are thread-safe
C
No operations are thread-safe
D
Only object access is thread-safe

Question 36

What is virtual memory and its relation to physical memory?

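A small POSIX sketch; the addresses a program sees are virtual, and translation to physical frames happens at page granularity:

cpp
#include <iostream>
#include <unistd.h>

int main() {
    // The page is the unit the OS maps between virtual and physical memory.
    long page = sysconf(_SC_PAGESIZE);
    std::cout << "page size: " << page << " bytes\n";

    // This pointer value is a virtual address: another process could hold the
    // same value while the MMU maps it to a different physical frame.
    int x = 42;
    std::cout << "virtual address of x: " << static_cast<void*>(&x) << "\n";
}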
A
Abstraction providing each process with illusion of contiguous address space larger than physical memory, managed through paging and virtual-to-physical address translation
B
Direct access to physical memory
C
Memory without addressing
D
Memory without processes

Question 37

What is the purpose of std::unique_ptr custom deleters?

cpp
auto file_deleter = [](FILE* f) {
    if (f) {
        std::cout << "Closing file\n";
        fclose(f);
    }
};

std::unique_ptr<FILE, decltype(file_deleter)> file_ptr(
    fopen("data.txt", "r"), file_deleter);

// File automatically closed when file_ptr goes out of scope
A
Enable unique_ptr to manage resources requiring custom cleanup logic beyond simple delete, such as file handles or system resources
B
Prevent resource cleanup
C
Create resources
D
Copy resources

Question 38

What is memory access pattern optimization?

cpp
// Bad: scattered access
for (size_t i = 0; i < N; ++i) {
    for (size_t j = 0; j < N; ++j) {
        sum += matrix[j][i]; // Column-major access on row-major array
    }
}

// Good: linear access
for (size_t i = 0; i < N; ++i) {
    for (size_t j = 0; j < N; ++j) {
        sum += matrix[i][j]; // Row-major access on row-major array
    }
}
A
Arranging memory accesses to maximize cache locality and prefetching efficiency, minimizing cache misses through linear and predictable access patterns
B
Creating random memory access
C
Preventing cache usage
D
Ignoring memory layout

Question 39

What is the difference between std::make_shared and direct shared_ptr construction?

cpp
// Efficient: single allocation
auto ptr1 = std::make_shared<int>(42);

// Inefficient: two allocations
std::shared_ptr<int> ptr2(new int(42));

// Why: make_shared allocates object + control block together
// Direct construction allocates separately, then constructs control block
A
make_shared performs single allocation for object and control block improving performance and exception safety, direct construction may allocate separately
B
They are identical construction methods
C
Direct construction is more efficient
D
make_shared allocates separately

Question 40

What are the fundamental principles for effective memory management in C++?

A
Use RAII and smart pointers for automatic resource management, choose appropriate allocators for specific patterns, optimize for cache locality, avoid fragmentation, and understand ownership semantics
B
Never use smart pointers
C
Always use raw pointers
D
Ignore memory management
