Go Programming: Concurrency - Goroutines
Go's concurrency model is a powerful feature that allows you to write efficient and scalable applications. At the heart of this model are goroutines. This document will cover what goroutines are, how they work, and how to use them effectively.
What are Goroutines?
- Lightweight Threads: Goroutines are functions that can run concurrently with other functions. They are much lighter weight than traditional operating system threads. This means you can create thousands, even millions, of goroutines without significant performance overhead.
- Managed by the Go Runtime: Goroutines are not directly managed by the operating system. Instead, they are multiplexed onto a smaller number of OS threads by the Go runtime. This runtime handles scheduling, context switching, and other low-level details.
- Independent Execution: Each goroutine has its own stack, but they all share the same memory space. This allows for easy communication and data sharing between goroutines, but also requires careful synchronization to avoid race conditions.
- Cheap to Create: Creating a goroutine is very inexpensive. A goroutine starts with a small stack (a few kilobytes) that grows and shrinks as needed, so launching one takes on the order of microseconds and far less memory than creating an OS thread, which requires a kernel call and a much larger stack (see the sketch after this list).
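To make the "thousands, even millions" claim concrete, here is a minimal sketch that launches one goroutine per element and waits for them with sync.WaitGroup (covered in detail later in this document). The count of 100,000 and the squares computation are illustrative assumptions, not part of the original text.

package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 100000 // far more goroutines than you could reasonably create OS threads

	var wg sync.WaitGroup
	results := make([]int, n)

	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = i * i // each goroutine writes only its own slot, so there is no race
		}(i)
	}

	wg.Wait() // block until every goroutine has called Done
	fmt.Println("last result:", results[n-1])
}

On a typical machine this starts and finishes in well under a second, which is the point: goroutines are cheap enough to create in bulk.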
How to Start a Goroutine
You launch a goroutine by prefixing a function call with the `go` keyword.
package main

import (
	"fmt"
	"time"
)

func say(s string) {
	for i := 0; i < 5; i++ {
		time.Sleep(100 * time.Millisecond) // Simulate some work
		fmt.Println(s)
	}
}

func main() {
	go say("world") // Launch 'say' in a new goroutine
	say("hello")    // 'say' runs in the main goroutine

	// Wait for a moment to allow goroutines to complete.
	// Without this, the main goroutine might exit before the
	// other goroutine has a chance to run. This is a common
	// issue when starting out with goroutines.
	time.Sleep(time.Second)
	fmt.Println("Done!")
}
Explanation:
go say("world"): This line launches thesayfunction as a goroutine. Themainfunction continues execution immediately without waiting forsayto finish.say("hello"): This line calls thesayfunction directly in the main goroutine.time.Sleep(time.Second): This is crucial. Without it, themainfunction might exit before thego say("world")goroutine has a chance to execute. This is because themainfunction doesn't inherently wait for goroutines it launches. In real-world applications, you'll use more sophisticated synchronization mechanisms (see below) instead oftime.Sleep.
Output (may vary due to concurrency):
hello
world
hello
world
hello
world
hello
world
hello
world
Done!
Notice that the "hello" and "world" lines are interleaved. This demonstrates that the two `say` calls are running concurrently.
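As noted above, `time.Sleep` is only a stopgap. Below is a minimal sketch of how the same program might look using `sync.WaitGroup` (introduced in detail later in this document), so that `main` waits exactly until the other goroutine finishes. The anonymous wrapper around `say` is an illustrative choice, not part of the original example.

package main

import (
	"fmt"
	"sync"
	"time"
)

func say(s string) {
	for i := 0; i < 5; i++ {
		time.Sleep(100 * time.Millisecond)
		fmt.Println(s)
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done() // signal completion when say("world") returns
		say("world")
	}()

	say("hello")
	wg.Wait() // block until the goroutine finishes; no guessing with time.Sleep
	fmt.Println("Done!")
}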
Goroutines vs. Threads
| Feature | Goroutines | Threads |
|---|---|---|
| Weight | Lightweight | Heavyweight |
| Creation Cost | Very low (small initial stack, no kernel call) | Much higher (kernel call, large stack) |
| Management | Go runtime | Operating System |
| Stack Size | Dynamically sized | Fixed size |
| Context Switching | Fast, managed by runtime | Slower, managed by OS |
| Number | Can create thousands/millions | Limited by system resources |
Communication and Synchronization
Because goroutines share memory, you need mechanisms to coordinate access to shared resources and prevent race conditions. Go provides these mechanisms:
- Channels: The preferred way to communicate and synchronize between goroutines. Channels allow you to send and receive values of a specific type.
- Mutexes (Mutual Exclusion Locks): Used to protect shared data from concurrent access. A mutex allows only one goroutine to access a critical section of code at a time.
- WaitGroups: Used to wait for a collection of goroutines to finish.
Channels
package main

import "fmt"

func worker(id int, jobs <-chan int, results chan<- int) {
	for j := range jobs {
		fmt.Println("worker", id, "started job", j)
		// Simulate some work
		results <- j * 2
		fmt.Println("worker", id, "finished job", j)
	}
}

func main() {
	jobs := make(chan int, 100)
	results := make(chan int, 100)

	// Start 3 worker goroutines
	for w := 1; w <= 3; w++ {
		go worker(w, jobs, results)
	}

	// Send 5 jobs to the jobs channel
	for j := 1; j <= 5; j++ {
		jobs <- j
	}
	close(jobs) // Signal that no more jobs will be sent

	// Collect the results from the results channel
	for a := 1; a <= 5; a++ {
		fmt.Println("Result:", <-results)
	}
}
Explanation:
- `jobs := make(chan int, 100)`: creates a buffered channel that can hold up to 100 integers. Buffered channels can improve throughput by letting senders continue without blocking as long as there is room in the buffer.
- `results := make(chan int, 100)`: creates another buffered channel for receiving results.
- `worker(id int, jobs <-chan int, results chan<- int)`: this function represents a worker goroutine. The parameter type `<-chan int` makes `jobs` receive-only inside the worker, and `chan<- int` makes `results` send-only.
- `for j := range jobs`: this loop receives values from the `jobs` channel until the channel is closed.
- `close(jobs)`: this is important! Closing the `jobs` channel signals to the worker goroutines that no more jobs will be sent. Without it, the `range` loops would block indefinitely waiting for more jobs.
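For contrast with the buffered channels above, here is a minimal sketch using an unbuffered channel, where a send blocks until a receiver is ready. The `sum` helper and its input values are purely illustrative.

package main

import "fmt"

// sum sends its result on a send-only channel.
func sum(nums []int, out chan<- int) {
	total := 0
	for _, n := range nums {
		total += n
	}
	out <- total // blocks until main receives
}

func main() {
	out := make(chan int) // unbuffered: send and receive must meet

	go sum([]int{1, 2, 3, 4}, out)

	result := <-out // blocks until the goroutine sends
	fmt.Println("sum:", result) // prints: sum: 10
}

Because the channel is unbuffered, the send in `sum` and the receive in `main` act as a synchronization point between the two goroutines.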
Mutexes
package main

import (
	"fmt"
	"sync"
)

var (
	counter int
	mutex   sync.Mutex
)

func incrementCounter(wg *sync.WaitGroup) {
	defer wg.Done() // Decrement the WaitGroup counter when the goroutine exits
	for i := 0; i < 1000; i++ {
		mutex.Lock() // Acquire the lock before accessing the shared resource
		counter++
		mutex.Unlock() // Release the lock after accessing the shared resource
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2) // Add 2 goroutines to the WaitGroup

	go incrementCounter(&wg)
	go incrementCounter(&wg)

	wg.Wait() // Wait for all goroutines to finish
	fmt.Println("Counter:", counter)
}
Explanation:
- `sync.Mutex`: a mutex type that provides locking and unlocking mechanisms.
- `mutex.Lock()`: acquires the lock. If another goroutine already holds the lock, this call blocks until the lock is released.
- `mutex.Unlock()`: releases the lock.
- `sync.WaitGroup`: used to wait for a collection of goroutines to finish.
- `wg.Add(2)`: increments the WaitGroup counter by 2, indicating that we're launching 2 goroutines.
- `defer wg.Done()`: this line is crucial. `defer` ensures that `wg.Done()` is called when `incrementCounter` exits, even if there's a panic. `wg.Done()` decrements the WaitGroup counter.
- `wg.Wait()`: blocks until the WaitGroup counter reaches zero, meaning all goroutines have finished.
WaitGroups
The `sync.WaitGroup` is demonstrated in the Mutex example above. It's a fundamental tool for coordinating goroutines.
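For completeness, here is a small standalone sketch of the WaitGroup pattern on its own. The `fetch` function is a hypothetical stand-in for whatever work you run concurrently.

package main

import (
	"fmt"
	"sync"
)

func fetch(id int, wg *sync.WaitGroup) {
	defer wg.Done() // signal completion even if the function returns early
	fmt.Println("fetching item", id)
}

func main() {
	var wg sync.WaitGroup

	for i := 1; i <= 3; i++ {
		wg.Add(1) // register the goroutine before launching it
		go fetch(i, &wg)
	}

	wg.Wait() // block until every Done has been called
	fmt.Println("all fetches complete")
}

Always call `Add` before starting the goroutine it accounts for; calling it from inside the goroutine can race with `Wait`.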
Best Practices
- Use Channels for Communication: Channels are generally preferred over shared memory and mutexes for communication between goroutines. They promote a more structured and less error-prone approach.
- Share Memory by Communicating: "Don't communicate by sharing memory; share memory by communicating" is a core principle of concurrent programming in Go.
- Handle Errors: Goroutines can panic. Use `recover` (inside a deferred function in the same goroutine) to handle panics gracefully and prevent your application from crashing; see the sketch below.
- Use `defer` for Cleanup: `defer` ensures that resources are released even if a function panics.
- Be Mindful of Race Conditions: Carefully analyze your code for potential race conditions and use appropriate synchronization mechanisms to prevent them. Go's built-in race detector (the `-race` flag on `go run`, `go build`, and `go test`) helps find them.
- Profile Your Code: Use Go's profiling tools to identify performance bottlenecks and optimize your concurrent code.
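Here is a minimal sketch of the `recover` pattern from the "Handle Errors" item above. Note that a panic can only be recovered by a deferred function running in the same goroutine; the `safeWorker` function and the simulated failure are illustrative assumptions.

package main

import (
	"fmt"
	"sync"
)

func safeWorker(id int, wg *sync.WaitGroup) {
	defer wg.Done()
	defer func() {
		// recover only works inside a deferred function in the panicking goroutine
		if r := recover(); r != nil {
			fmt.Println("worker", id, "recovered from panic:", r)
		}
	}()

	if id == 2 {
		panic("something went wrong") // simulate a failure
	}
	fmt.Println("worker", id, "finished normally")
}

func main() {
	var wg sync.WaitGroup
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go safeWorker(i, &wg)
	}
	wg.Wait()
}

Without the deferred `recover`, a panic in any one of these goroutines would crash the entire program.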