Golang Synchronization Tools Quiz

Golang

35 comprehensive questions on Golang's synchronization tools, covering sync.Mutex, sync.RWMutex, sync.WaitGroup, sync.Once, and common race-condition mistakes, with code examples demonstrating thread-safe patterns.

35 Questions
~70 minutes

Question 1

What is a race condition in Go?

go
var counter int

func increment() {
    counter++
}

func main() {
    go increment()
    go increment()
    time.Sleep(time.Second)
    fmt.Println(counter)  // May not be 2
}
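
Note: the Go race detector (go run -race or go test -race) reports unsynchronized accesses like this at runtime.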
A
Multiple goroutines accessing shared data without synchronization
B
Goroutines running too fast
C
Memory leaks
D
Compilation errors

Question 2

How does sync.Mutex work?

go
var mu sync.Mutex
var counter int

func increment() {
    mu.Lock()
    counter++
    mu.Unlock()
}
A
Provides exclusive access to critical sections
B
Allows multiple readers
C
Waits for goroutines
D
Executes once

Question 3

What is the difference between sync.Mutex and sync.RWMutex?

go
var rwmu sync.RWMutex

func readData() {
    rwmu.RLock()
    // read operations
    rwmu.RUnlock()
}

func writeData() {
    rwmu.Lock()
    // write operations
    rwmu.Unlock()
}
A
RWMutex allows multiple concurrent readers but exclusive writers
B
Mutex allows concurrent access
C
No difference
D
RWMutex is slower

Question 4

How do you use sync.WaitGroup?

go
var wg sync.WaitGroup

func worker(id int) {
    defer wg.Done()
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go worker(i)
    }
    wg.Wait()
    fmt.Println("All workers done")
}
A
Wait for multiple goroutines to complete
B
Lock shared data
C
Execute once
D
Handle timeouts

Question 5

What does sync.Once guarantee?

go
var once sync.Once
var initialized bool

func initResource() {
    once.Do(func() {
        // initialization code
        initialized = true
    })
}
A
Function passed to Do() executes exactly once, even with multiple calls
B
Executes every time
C
Executes multiple times
D
Blocks forever

Question 6

What is a deadlock in Go?

go
var mu1, mu2 sync.Mutex

func goroutine1() {
    mu1.Lock()
    time.Sleep(time.Second)
    mu2.Lock()  // deadlock if goroutine2 holds mu2 and waits for mu1
    mu2.Unlock()
    mu1.Unlock()
}
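
For completeness, a sketch of the matching goroutine2 (not shown above) that acquires the locks in the opposite order and completes the circular wait:

go
func goroutine2() {
    mu2.Lock()
    time.Sleep(time.Second)
    mu1.Lock()  // blocks: goroutine1 holds mu1 while waiting for mu2
    mu1.Unlock()
    mu2.Unlock()
}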
A
Goroutines waiting indefinitely for locks held by each other
B
Race condition
C
Memory leak
D
Compilation error

Question 7

How do you fix a race condition with a counter?

go
var counter int
var mu sync.Mutex

func increment() {
    mu.Lock()
    counter++
    mu.Unlock()
}
A
Use mutex to protect the counter variable
B
Use channels
C
Use atomic operations
D
Cannot fix

Question 8

When should you use sync.RWMutex over sync.Mutex?

go
var data = make(map[string]int)
var rwmu sync.RWMutex

func read(key string) int {
    rwmu.RLock()
    defer rwmu.RUnlock()
    return data[key]
}

func write(key string, value int) {
    rwmu.Lock()
    defer rwmu.Unlock()
    data[key] = value
}
A
When you have many reads and few writes
B
For all cases
C
Never
D
For single goroutine

Question 9

What is a common mistake with sync.WaitGroup?

go
var wg sync.WaitGroup

func main() {
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // work
        }()
    }
    wg.Wait()  // Correct here: Add() ran before each goroutine started; moving Add() inside the goroutines could let Wait() return too early
}
A
Calling Add() after starting goroutines
B
Forgetting Done()
C
Using defer
D
No mistake

Question 10

How do you implement a thread-safe singleton with sync.Once?

go
type singleton struct{}

var instance *singleton
var once sync.Once

func GetInstance() *singleton {
    once.Do(func() {
        instance = &singleton{}
    })
    return instance
}
A
Use sync.Once to initialize the instance exactly once
B
Use mutex
C
Use channels
D
Cannot implement singleton

Question 11

What is a livelock?
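
A minimal sketch of a livelock, assuming two workers that each grab one flag and politely back off when they see the other's flag (flagA, flagB, and worker are illustrative names):

go
import "sync/atomic"

var flagA, flagB int32

// Each worker sets its own flag, then backs off if it sees the other flag set.
// Under contention both keep retrying: each goroutine stays busy, neither finishes.
func worker(mine, other *int32) {
    for {
        atomic.StoreInt32(mine, 1)
        if atomic.LoadInt32(other) == 1 {
            atomic.StoreInt32(mine, 0)  // back off
            continue                    // and retry immediately: livelock
        }
        return  // acquired both without conflict
    }
}

// Started as: go worker(&flagA, &flagB) and go worker(&flagB, &flagA)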

A
Goroutines are active but making no progress due to repeated state changes
B
Deadlock
C
Race condition
D
Memory leak

Question 12

How do you avoid deadlock with multiple mutexes?

go
var mu1, mu2 sync.Mutex

func safe() {
    mu1.Lock()
    defer mu1.Unlock()
    mu2.Lock()
    defer mu2.Unlock()
    // Always acquire in same order
}
A
Acquire locks in a consistent global order
B
Use random order
C
Use channels instead
D
Cannot avoid

Question 13

What is the copy lock problem?

go
type Counter struct {
    mu sync.Mutex
    value int
}

func (c Counter) Increment() {  // Wrong: receiver by value
    c.mu.Lock()  // Locks copy, not original
    c.value++
    c.mu.Unlock()
}
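
For contrast, a sketch of the corrected version: a pointer receiver locks the original mutex rather than a copy (go vet's copylocks check also flags the value-receiver form):

go
func (c *Counter) Increment() {  // Correct: pointer receiver
    c.mu.Lock()
    defer c.mu.Unlock()
    c.value++
}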
A
Locking a copied mutex instead of the original
B
Race condition
C
Deadlock
D
No problem

Question 14

How do you implement a thread-safe map?

go
type SafeMap struct {
    mu sync.RWMutex
    data map[string]int
}

func (sm *SafeMap) Get(key string) int {
    sm.mu.RLock()
    defer sm.mu.RUnlock()
    return sm.data[key]
}

func (sm *SafeMap) Set(key string, value int) {
    sm.mu.Lock()
    defer sm.mu.Unlock()
    sm.data[key] = value
}
A
Embed RWMutex and protect map operations
B
Use regular map
C
Use channels
D
Cannot make map thread-safe

Question 15

What is the double-checked locking problem?

go
var instance *singleton
var mu sync.Mutex

func GetInstance() *singleton {
    if instance == nil {  // Check without lock
        mu.Lock()
        if instance == nil {
            instance = &singleton{}
        }
        mu.Unlock()
    }
    return instance
}
A
Race condition in the first nil check without lock
B
Deadlock
C
Memory leak
D
No problem in Go

Question 16

How do you coordinate multiple goroutines with WaitGroup?

go
func processBatch(items []int) {
    var wg sync.WaitGroup
    results := make(chan int, len(items))
    
    for _, item := range items {
        wg.Add(1)
        go func(item int) {
            defer wg.Done()
            results <- process(item)
        }(item)
    }
    
    wg.Wait()
    close(results)
    
    for result := range results {
        fmt.Println(result)
    }
}
A
Use WaitGroup to wait for all goroutines, then process results
B
Use mutex
C
Use once
D
Cannot coordinate

Question 17

What is a starvation issue with Mutex?
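
As an illustration (names and durations are assumed): a goroutine that immediately re-acquires the lock in a tight loop can keep other goroutines waiting; sync.Mutex's fairness mode eventually hands the lock to long waiters, but latency still suffers.

go
var mu sync.Mutex

func greedy() {
    for {
        mu.Lock()
        time.Sleep(10 * time.Millisecond)  // long critical section
        mu.Unlock()                        // then immediately re-locks on the next iteration
    }
}

func polite() {
    mu.Lock()  // may wait a long time while greedy keeps re-acquiring the lock
    defer mu.Unlock()
    // short critical section
}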

A
Some goroutines never get the lock due to constant contention
B
Deadlock
C
Race condition
D
Memory leak

Question 18

How do you implement a read-write lock correctly?

go
var rwmu sync.RWMutex
var data int

func reader() {
    rwmu.RLock()
    // multiple readers can be here
    _ = data
    rwmu.RUnlock()
}

func writer() {
    rwmu.Lock()
    // exclusive access
    data = 42
    rwmu.Unlock()
}
A
Use RLock() for reads, Lock() for writes, matching Unlock() calls
B
Use Lock() for everything
C
Mix RLock and Lock randomly
D
No read-write locks

Question 19

What is the problem with this WaitGroup usage?

go
func main() {
    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        go func(i int) {
            wg.Add(1)  // Wrong: Add called in goroutine
            defer wg.Done()
            fmt.Println(i)
        }(i)
    }
    wg.Wait()
}
A
Add() called after goroutine starts, causing race condition
B
Done() not called
C
Defer used
D
No problem

Question 20

How do you implement lazy initialization with sync.Once?

go
type Config struct {
    data map[string]string
}

var config *Config
var once sync.Once

func GetConfig() *Config {
    once.Do(func() {
        config = loadConfig()  // expensive operation
    })
    return config
}
A
Use Once.Do() to perform initialization exactly once
B
Use mutex
C
Use channels
D
Cannot lazy initialize

Question 21

What is a goroutine leak caused by synchronization?

go
func worker(ch chan int) {
    for v := range ch {  // blocks until a value arrives or ch is closed
        process(v)
    }
}

func main() {
    ch := make(chan int)
    go worker(ch)
    // Nothing is ever sent and ch is never closed:
    // worker blocks on the receive forever and the goroutine leaks.
    time.Sleep(time.Minute)
}
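
One way to avoid the leak (a sketch): close the channel once no more values will be sent, so the range loop ends and the worker goroutine exits.

go
func main() {
    ch := make(chan int)
    go worker(ch)
    for i := 0; i < 3; i++ {
        ch <- i
    }
    close(ch)  // worker's range loop finishes and the goroutine exits
    time.Sleep(time.Second)
}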
A
Goroutines blocked forever waiting for channels that never send
B
Race condition
C
Deadlock
D
Memory leak

Question 22

How do you fix a race condition in a slice?

go
type SafeSlice struct {
    mu sync.Mutex
    data []int
}

func (ss *SafeSlice) Append(value int) {
    ss.mu.Lock()
    defer ss.mu.Unlock()
    ss.data = append(ss.data, value)
}
A
Protect slice operations with mutex
B
Use channels
C
Use atomic operations
D
Cannot fix

Question 23

What is the difference between Lock() and RLock()?

go
var rwmu sync.RWMutex

func read() {
    rwmu.RLock()  // Allows other RLock calls
    defer rwmu.RUnlock()
}

func write() {
    rwmu.Lock()  // Blocks all other locks
    defer rwmu.Unlock()
}
A
RLock allows concurrent readers, Lock is exclusive
B
No difference
C
Lock is faster
D
RLock doesn't exist

Question 24

How do you implement a barrier with WaitGroup?

go
func barrierExample() {
    var phase1 sync.WaitGroup  // barrier after phase 1
    var done sync.WaitGroup    // completion of all goroutines
    phase1.Add(3)
    done.Add(3)
    
    for i := 0; i < 3; i++ {
        go func(id int) {
            defer done.Done()
            // phase 1
            fmt.Printf("Goroutine %d in phase 1\n", id)
            
            phase1.Done()
            phase1.Wait()  // barrier: proceed only after all finish phase 1
            
            // phase 2
            fmt.Printf("Goroutine %d in phase 2\n", id)
        }(i)
    }
    done.Wait()  // wait for all goroutines to complete
}
A
Use WaitGroup twice: once as barrier, once to wait for completion
B
Use mutex only
C
Use once
D
Cannot implement barrier

Question 25

What is a common race condition in map access?

go
var m = make(map[string]int)

func add(key string, value int) {
    m[key] = value  // fatal error: concurrent map writes (when called from multiple goroutines)
}
A
Concurrent writes to map cause panic
B
Reads are safe
C
No problem
D
Only append is unsafe

Question 26

How do you use sync.Once for error handling?

go
var once sync.Once
var err error

func initResource() error {
    once.Do(func() {
        err = loadResource()  // capture error
    })
    return err
}
A
Capture error in Once.Do() and return it on subsequent calls
B
Ignore errors
C
Use panic
D
Cannot handle errors

Question 27

What is the lock hierarchy anti-pattern?

go
var muA, muB sync.Mutex

func A() {
    muA.Lock()
    B()  // calls B, which locks muB while muA is still held
    muA.Unlock()
}

func B() {
    muB.Lock()
    A()  // calls A, which tries to lock muA while muB is held: circular wait
    muB.Unlock()
}
A
Calling functions that acquire locks while holding other locks
B
Using defer
C
Using mutex
D
No problem

Question 28

How do you implement a semaphore with channels?

go
func semaphore(n int) chan struct{} {
    return make(chan struct{}, n)
}

func worker(sem chan struct{}, id int) {
    sem <- struct{}{}  // acquire
    defer func() { <-sem }()  // release
    work(id)
}
A
Use buffered channel as counting semaphore
B
Use mutex
C
Use WaitGroup
D
Cannot implement semaphore

Question 29

What is the problem with recursive locking?

go
var mu sync.Mutex

func recursive() {
    mu.Lock()
    recursive()  // deadlock: mutex not reentrant
    mu.Unlock()
}
A
Mutex is not reentrant; recursive calls deadlock
B
Race condition
C
Memory leak
D
No problem

Question 30

How do you coordinate producer-consumer with WaitGroup?

go
func producerConsumer() {
    var wg sync.WaitGroup
    data := make(chan int, 100)
    
    // Producer
    wg.Add(1)
    go func() {
        defer wg.Done()
        defer close(data)
        for i := 0; i < 10; i++ {
            data <- i
        }
    }()
    
    // Consumer
    wg.Add(1)
    go func() {
        defer wg.Done()
        for v := range data {
            process(v)
        }
    }()
    
    wg.Wait()
}
A
Use WaitGroup to wait for both producer and consumer completion
B
Use mutex
C
Use once
D
Cannot coordinate

Question 31

What is a priority inversion?
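
Go goroutines have no explicit priorities, so this is a rough illustration only (names and durations are assumed): a long-running background task holding the lock delays latency-sensitive work.

go
var mu sync.Mutex

func backgroundJob() {  // low-priority work with a long critical section
    mu.Lock()
    time.Sleep(500 * time.Millisecond)  // holds the lock for a long time
    mu.Unlock()
}

func latencySensitive() {  // "high-priority" work
    mu.Lock()  // must wait behind backgroundJob even though its own work is tiny
    defer mu.Unlock()
    // quick update
}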

A
Low priority goroutine holds lock needed by high priority goroutine
B
Deadlock
C
Race condition
D
Memory leak

Question 32

How do you implement a thread-safe counter with atomic operations?

go
import "sync/atomic"

var counter int64

func increment() {
    atomic.AddInt64(&counter, 1)
}

func get() int64 {
    return atomic.LoadInt64(&counter)
}
A
Use atomic operations for simple types instead of mutex
B
Use mutex
C
Use channels
D
Cannot make counter thread-safe

Question 33

What is the convoy effect?
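
A rough sketch (names and durations are assumed): many fast workers serialize behind one slow lock holder, so overall throughput drops to the slow holder's pace.

go
var mu sync.Mutex

func slowHolder() {
    mu.Lock()
    time.Sleep(100 * time.Millisecond)  // slow work inside the critical section
    mu.Unlock()
}

func fastWorker() {
    mu.Lock()  // queues up behind slowHolder along with every other fast worker
    defer mu.Unlock()
    // fast work
}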

A
Goroutines queue up behind a slow lock holder, reducing throughput
B
Deadlock
C
Race condition
D
Memory leak

Question 34

How do you handle cleanup with sync.Once?

go
var once sync.Once
var cleanup func()

func Init() {  // exported initializer; safe to call from multiple goroutines
    once.Do(func() {
        resource := acquire()
        cleanup = func() { release(resource) }
    })
}

func Close() {
    if cleanup != nil {
        cleanup()
    }
}
A
Store cleanup function in Once.Do() for later execution
B
Use defer
C
Use panic
D
Cannot handle cleanup

Question 35

What is the most important synchronization best practice?
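
A small sketch pulling those practices together (the Cache type and computeValue are assumed for illustration): do expensive work outside the lock, keep the critical section to the shared-state access, and let defer release the lock.

go
type Cache struct {
    mu   sync.Mutex
    data map[string]string
}

func (c *Cache) Update(key string) {
    value := computeValue(key)  // expensive work done outside the lock

    c.mu.Lock()          // critical section covers only the map write
    defer c.mu.Unlock()  // defer guarantees the unlock on every return path
    c.data[key] = value
}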

A
Keep critical sections small, acquire locks in consistent order, use defer for unlocks, prefer channels over shared memory
B
Use lots of mutexes
C
Lock everything
D
Ignore synchronization
