Golang Memory Model Quiz

Golang

45 comprehensive questions on Golang's memory model, covering stack vs heap allocation, escape analysis, allocation patterns, garbage collection basics, and avoiding unnecessary allocations — with 25 code examples demonstrating memory management techniques.

45 Questions
~90 minutes
1

Question 1

What is the stack in Go's memory model?

go
func f() {
    x := 42        // Allocated on stack
    y := "hello"  // Allocated on stack
    f2()
    // x, y deallocated when f returns
}

func f2() {
    z := make([]int, 10)  // May escape to heap
    _ = z
}
A
Fast, per-goroutine memory for local variables and function calls
B
Shared memory between goroutines
C
Persistent memory that never gets freed
D
Memory managed by garbage collector only
2

Question 2

What is the heap in Go's memory model?

go
func createSlice() []int {
    return make([]int, 100)  // Allocated on heap
}

func main() {
    s := createSlice()
    // s points to heap memory
    _ = s
}
A
Shared memory pool managed by garbage collector
B
Per-goroutine memory
C
Memory that never gets garbage collected
D
Faster than stack allocation
3

Question 3

What is escape analysis in Go?

go
func escapes() *int {
    x := 42
    return &x  // x escapes to heap
}

func staysOnStack() int {
    x := 42
    return x  // x stays on stack
}
A
Compiler analysis that determines if variables should be allocated on heap or stack
B
Runtime memory tracking
C
Manual memory management
D
Garbage collection algorithm
4

Question 4

When does a variable escape to the heap?

go
func escapes() *int {
    x := 42
    return &x  // Address returned, escapes
}

func doesntEscape(x *int) {
    *x = 42  // Writes through an existing pointer; causes no new escape
}

func conditionalEscape(b bool) *int {
    if b {
        x := 42
        return &x  // Escapes due to conditional return
    }
    return nil
}
A
When its address is taken and used after function returns
B
Always for pointers
C
Never in Go
D
Only for global variables
5

Question 5

How can you check if variables escape?

bash
go build -gcflags='-m' main.go
A
Use go build -gcflags='-m' to see escape analysis decisions
B
Use runtime profiling
C
Use pprof
D
Cannot check escape analysis
6

Question 6

What is garbage collection in Go?

go
func main() {
    for {
        s := make([]int, 1000)  // Heap allocation
        _ = s
        // s becomes garbage when loop iterates
    }
    // GC will eventually free unreachable memory
}
A
Automatic process that reclaims unreachable heap memory
B
Manual memory deallocation
C
Stack memory management
D
Memory allocation
7

Question 7

What causes GC pressure?

go
func highPressure() {
    for {
        s := make([]byte, 1024*1024)  // 1MB allocations
        _ = s
        // Creates lots of garbage quickly
    }
}
A
Frequent heap allocations that become garbage quickly
B
Stack allocations
C
Long-lived objects
D
No allocations
8

Question 8

How does Go's GC work?

A
Concurrent tri-color mark-and-sweep with write barriers
B
Stop-the-world mark-and-sweep
C
Reference counting
D
Manual memory management
9

Question 9

What is a GC pause?

A
Brief stop-the-world period during GC cycle
B
Time when GC runs concurrently
C
Time between GC cycles
D
No pauses in Go GC
10

Question 10

How can you reduce heap allocations?

go
func bad() []int {
    return make([]int, 100)  // Allocates new slice each call
}

func good(dst []int) []int {
    dst = dst[:0]  // Reuse the caller's backing array instead of allocating
    for i := 0; i < 100; i++ {
        dst = append(dst, i)
    }
    return dst
}
A
Reuse objects, pre-allocate slices/maps, avoid unnecessary conversions
B
Use more goroutines
C
Use global variables
D
Cannot reduce allocations
11

Question 11

What is object pooling in Go?

go
var pool = sync.Pool{
    New: func() interface{} {
        return make([]byte, 1024)
    },
}

func process() {
    buf := pool.Get().([]byte)
    defer pool.Put(buf)  // Return to pool
    // Use buf
}
A
Reusing expensive objects to reduce allocations
B
Creating new objects each time
C
Manual memory management
D
No object pooling in Go
12

Question 12

What is the difference between stack and heap allocation performance?

go
func stackAlloc() {
    x := 42  // ~1ns allocation
    _ = x
}

func heapAlloc() {
    x := new(int)  // ~10-50ns allocation + GC overhead
    _ = x
}
A
Stack allocation is much faster (no GC overhead)
B
Heap allocation is faster
C
No performance difference
D
Stack is slower
13

Question 13

What is memory fragmentation?

A
Wasted memory due to allocation patterns leaving small unusable gaps
B
Memory leaks
C
Stack overflow
D
No fragmentation in Go
14

Question 14

How does Go's allocator work?

A
Uses size classes and per-P (processor) caches for fast allocation
B
Simple malloc/free
C
Manual allocation
D
No allocator in Go
15

Question 15

What is a memory leak in Go?

go
func leak() {
    ch := make(chan int, 1)
    go func() {
        for {
            <-ch  // Goroutine never exits
        }
    }()
    // Forgot to close ch or signal goroutine to exit
}
A
Unreachable memory that can't be garbage collected
B
Stack overflow
C
Heap allocation
D
No memory leaks in Go
16

Question 16

How can you profile memory usage in Go?

go
import _ "net/http/pprof"

go func() {
    log.Println(http.ListenAndServe("localhost:6060", nil))
}()

// Then: go tool pprof http://localhost:6060/debug/pprof/heap
A
Use pprof to collect heap profiles and analyze allocations
B
Use runtime.GC()
C
Use fmt.Printf
D
Cannot profile memory
17

Question 17

What is the GOGC environment variable?

bash
export GOGC=200  # GC at 200% heap growth (default 100)
./myprogram
A
Controls GC frequency - percentage heap growth before next GC
B
Controls stack size
C
Controls heap size
D
No such variable
18

Question 18

What is a finalizer in Go?

go
type Resource struct {
    handle unsafe.Pointer
}

func (r *Resource) Close() {
    // Cleanup code
}

func NewResource() *Resource {
    r := &Resource{}
    runtime.SetFinalizer(r, (*Resource).Close)
    return r
}
A
Function called when object is garbage collected
B
Function called when object is allocated
C
Manual memory management
D
No finalizers in Go
19

Question 19

How can you avoid string concatenation allocations?

go
func bad() string {
    s := ""
    for i := 0; i < 100; i++ {
        s += strconv.Itoa(i)  // Allocates new string each time
    }
    return s
}

func good() string {
    var b strings.Builder
    for i := 0; i < 100; i++ {
        b.WriteString(strconv.Itoa(i))
    }
    return b.String()  // Single allocation
}
A
Use strings.Builder or bytes.Buffer for efficient string building
B
Use + operator
C
Use fmt.Sprintf
D
Cannot avoid allocations
20

Question 20

What is interface{} allocation?

go
func box(x int) interface{} {
    return x  // Boxing: heap-allocates for most values (tiny ints are cached)
}

func unbox(i interface{}) int {
    return i.(int)  // No allocation
}
A
Storing concrete types in interface{} requires heap allocation
B
No allocation for interfaces
C
Stack allocation only
D
No interface allocation
21

Question 21

How does slice growth affect memory usage?

go
s := make([]int, 0, 10)
for i := 0; i < 15; i++ {
    s = append(s, i)  // Grows capacity: 10 -> 20 -> 40...
}
// Wastes memory if final size is much smaller than capacity
A
Slices grow exponentially, potentially wasting memory if over-allocated
B
Slices never waste memory
C
Linear growth
D
No growth
22

Question 22

What is the memory model for channels?

go
ch := make(chan int, 10)
go func() {
    ch <- 42  // Value copied to heap if buffered
}()

x := <-ch  // Value copied from heap
A
Channel sends/receives copy values, potentially allocating for large values
B
No copying in channels
C
Pointers only
D
No memory usage
23

Question 23

How can you optimize map allocations?

go
func bad() {
    for i := 0; i < 1000; i++ {
        m := make(map[string]int)  // Allocates new map each iteration
        m["key"] = i
        process(m)
    }
}

func good() {
    m := make(map[string]int)  // Allocate once
    for i := 0; i < 1000; i++ {
        m["key"] = i
        process(m)
        clear(m)  // Go 1.21+ clears map efficiently
    }
}
A
Reuse maps when possible, use clear() instead of reallocation
B
Create new maps each time
C
Use global maps
D
Cannot optimize maps
24

Question 24

What is the cost of reflection?

go
func direct(x interface{}) {
    s := x.(string)  // Fast type assertion
    _ = s
}

func viaReflect(x interface{}) {
    v := reflect.ValueOf(x)  // Slower reflection path
    s := v.String()
    _ = s
}
A
Reflection is much slower than direct operations and causes allocations
B
Same performance as direct code
C
Faster than direct code
D
No reflection in Go
25

Question 25

How does GC handle circular references?

go
type A struct {
    b *B
}

type B struct {
    a *A
}

func main() {
    a := &A{}
    b := &B{}
    a.b = b
    b.a = a
    // GC will collect both when they become unreachable
}
A
GC uses reachability analysis, not reference counting, so circular references are fine
B
Circular references cause memory leaks
C
Manual breaking required
D
No circular references in Go
26

Question 26

What is memory profiling in Go?

go
f, _ := os.Create("mem.prof")
defer f.Close()

runtime.GC()  // Force GC
runtime.WriteHeapProfile(f)

// Analyze: go tool pprof mem.prof
A
Capturing heap snapshots to analyze memory usage and leaks
B
CPU profiling
C
Network profiling
D
No profiling in Go
27

Question 27

How can you reduce interface allocations?

go
func bad() {
    var i interface{} = 42  // Boxed allocation
    _ = i
}

func good() {
    var x int = 42  // No boxing
    _ = x
}

// Or use generics (Go 1.18+)
func generic[T any](x T) T {
    return x  // No boxing for concrete types
}
A
Avoid interface{} when possible, use generics for type safety without boxing
B
Always use interface{}
C
Use reflection
D
Cannot reduce interface allocations
28

Question 28

What is the impact of GC on latency?

A
GC pauses can cause latency spikes, especially in low-latency applications
B
No impact on latency
C
Improves latency
D
No GC in Go
29

Question 29

How can you implement memory-efficient linked lists?

go
type Node struct {
    value int
    next  *Node
}

func bad() *Node {
    return &Node{value: 1, next: &Node{value: 2}}  // Multiple allocations
}

func good() *Node {
    // Pre-allocate slice of nodes
    nodes := make([]Node, 2)
    nodes[0] = Node{value: 1, next: &nodes[1]}
    nodes[1] = Node{value: 2}
    return &nodes[0]  // Single allocation
}
A
Use contiguous memory allocation instead of pointer chasing
B
Use many small allocations
C
Use global variables
D
Cannot optimize linked lists
30

Question 30

What is the memory cost of goroutines?

go
go func() {
    // Each goroutine has its own stack
    x := 42  // Stack allocation
    _ = x
}()
A
Goroutines have initial stack (~2KB) that grows as needed
B
No memory cost
C
Fixed large stack
D
No goroutines in Go
31

Question 31

How can you optimize struct allocations?

go
type BigStruct struct {
    a, b, c, d int64
    s string
}

func bad() BigStruct {
    return BigStruct{}  // Returns by value, copies
}

func good() *BigStruct {
    return &BigStruct{}  // Returns pointer, no copy
}

func better() BigStruct {
    var s BigStruct
    // Modify s in place
    return s  // Single copy
}
A
Return pointers for large structs, avoid unnecessary copying
B
Always return by value
C
Use global structs
D
Cannot optimize structs
32

Question 32

What is the memory model for closures?

go
func createClosure(x int) func() int {
    return func() int {
        return x  // x captured by closure
    }
}

func main() {
    f := createClosure(42)
    _ = f
    // x is heap-allocated because the closure escapes
}
A
Captured variables are allocated on heap if closure escapes
B
Always stack allocated
C
No memory usage
D
No closures in Go
33

Question 33

How can you monitor GC performance?

go
import "runtime/debug"

debug.SetGCPercent(100)  // Default

debug.SetGCPercent(-1)  // Disable GC

debug.SetGCPercent(200)  // Less frequent GC
A
Use debug.SetGCPercent() and runtime metrics to tune GC
B
Use fmt.Printf
C
Use pprof only
D
Cannot monitor GC
34

Question 34

What is the memory impact of defer?

go
func withDefer(f *os.File) {
    defer func() { f.Close() }()  // Deferred closure captures f; may heap-allocate
}

func withoutDefer(f *os.File) {
    f.Close()  // Direct call, no closure allocation
}
A
Defer can cause heap allocation if deferred function is a closure
B
No memory impact
C
Always allocates
D
No defer in Go
35

Question 35

How can you implement zero-allocation logging?

go
func bad(user string) {
    log.Printf("User %s logged in", user)  // Formats (and allocates) on every call
}

func good(logger *slog.Logger, user string) {
    if logger.Enabled(context.Background(), slog.LevelInfo) {
        logger.Info("user logged in", "user", user)  // Skipped entirely when the level is disabled
    }
}
A
Use structured logging with level checks to avoid formatting when disabled
B
Use fmt.Printf always
C
Use global variables
D
Cannot avoid logging allocations
36

Question 36

What is the memory model for maps?

go
m := make(map[string]int, 100)  // Pre-allocates space
m["key"] = 42

// Map grows dynamically, potentially reallocating
A
Maps allocate on heap and grow dynamically with reallocation
B
Maps are stack allocated
C
No dynamic growth
D
No maps in Go
37

Question 37

How can you reduce string allocations?

go
func bad(s string) string {
    return strings.ToUpper(s)  // Allocates new string
}

func good(s string) string {
    // If possible, work with []byte
    b := []byte(s)
    for i := range b {
        if 'a' <= b[i] && b[i] <= 'z' {
            b[i] -= 32
        }
    }
    return string(b)  // Single allocation
}
A
Work with []byte when possible to avoid intermediate string allocations
B
Always use string methods
C
Use reflection
D
Cannot reduce string allocations
38

Question 38

What is the memory cost of panic/recover?

go
func mayPanic() {
    defer func() {
        if r := recover(); r != nil {
            // Handle panic
        }
    }()
    // Code that may panic
}
A
Panic allocates stack trace and defers, recover has minimal cost when not panicking
B
No memory cost
C
Always expensive
D
No panic/recover in Go
39

Question 39

How can you optimize JSON marshaling memory usage?

go
func bad(v interface{}) {
    data, _ := json.Marshal(v)  // Allocates for marshaling
    send(data)
}

func good(v interface{}) {
    var buf bytes.Buffer
    json.NewEncoder(&buf).Encode(v)  // Streams to buffer
    send(buf.Bytes())
}
A
Use streaming encoders instead of Marshal to reduce intermediate allocations
B
Always use json.Marshal
C
Use reflection
D
Cannot optimize JSON
40

Question 40

What is the memory model for slices?

go
s := make([]int, 3, 10)  // len=3, cap=10
s = append(s, 1)  // Uses existing capacity
s = append(s, make([]int, 8)...)  // Reallocates: len 4+8 = 12 exceeds cap 10
A
Slices have length and capacity, reallocate when capacity exceeded
B
Fixed size like arrays
C
No capacity concept
D
No slices in Go
41

Question 41

How can you implement memory-efficient caching?

go
type Cache struct {
    mu    sync.RWMutex
    items map[string]*Item
}

type Item struct {
    value interface{}
    expiry time.Time
}

func (c *Cache) cleanup() {
    // Periodic cleanup of expired items
}
A
Use maps with periodic cleanup, avoid storing large values directly
B
Store everything in memory
C
Use global variables
D
Cannot implement caching
42

Question 42

What is the impact of large object allocations?

go
func allocateLarge() {
    data := make([]byte, 1024*1024)  // 1MB allocation
    // If this escapes, it's heap allocated
    process(data)
}
A
Large objects (>32KB) are allocated directly on heap, bypassing size classes
B
Always stack allocated
C
No impact
D
No large objects in Go
43

Question 43

How can you reduce function call overhead?

go
func inlineMe(x int) int {
    return x * 2
}

func caller() {
    result := inlineMe(5)  // May be inlined by compiler
    _ = result
}
A
Compiler inlines small functions, reducing call overhead and improving optimization
B
No inlining in Go
C
Always expensive calls
D
Use macros
44

Question 44

What is the memory model for interfaces?

go
var i interface{} = 42

// An interface value is two words:
// - type/itab word: type information and method table
// - data word: pointer to the concrete value (often heap-allocated)
A
Interface values contain type information and pointer to heap-allocated data
B
No memory usage
C
Stack only
D
No interfaces in Go
45

Question 45

What is the most important memory optimization principle?

A
Measure first, avoid premature optimization, focus on reducing heap allocations and GC pressure, use profiling to identify bottlenecks
B
Always optimize everything
C
Use more memory
D
Ignore memory usage
