# Go's Concurrency Model
Go's concurrency is built around two key concepts: goroutines and channels. This model makes it easy to write concurrent programs without the complexity of traditional threading.
## What are Goroutines?

A goroutine is a lightweight thread managed by the Go runtime. Goroutines are extremely cheap to create (each starts with only a few kilobytes of stack), so a single program can have millions of them running simultaneously.
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Start a goroutine
	go sayHello("World")

	// This runs in the main goroutine
	sayHello("Go")

	// Wait a bit to see the goroutine's output.
	// (In real code, prefer sync.WaitGroup over sleeping.)
	time.Sleep(100 * time.Millisecond)
}

func sayHello(name string) {
	for i := 0; i < 3; i++ {
		fmt.Printf("Hello %s! (%d)\n", name, i)
		time.Sleep(100 * time.Millisecond)
	}
}
```
## Basic Goroutine Patterns

### Anonymous Goroutines

```go
go func() {
	fmt.Println("Running in anonymous goroutine")
}()
```

### Goroutines with Parameters

```go
func worker(id int, jobs <-chan int, results chan<- int) {
	for job := range jobs {
		fmt.Printf("Worker %d processing job %d\n", id, job)
		time.Sleep(time.Second) // Simulate work
		results <- job * 2
	}
}
```
## Channels: Communication Between Goroutines

Channels are Go's mechanism for communication between goroutines: they let one goroutine send typed values to another safely, synchronizing the two in the process.
### Basic Channel Operations

```go
// Create a channel
ch := make(chan int)

// Send a value (on an unbuffered channel, this blocks
// until another goroutine is ready to receive)
ch <- 42

// Receive a value
value := <-ch

// Close a channel (only the sender should close)
close(ch)
```
### Channel Types

```go
// Unbuffered channel (synchronous)
ch1 := make(chan int)

// Buffered channel (asynchronous up to its capacity)
ch2 := make(chan int, 10)

// Send-only channel
var sendCh chan<- int = ch1

// Receive-only channel
var recvCh <-chan int = ch1
```
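To see the difference concretely, here's a small runnable sketch: sends on a buffered channel complete immediately until the buffer fills, while a send on an unbuffered channel needs a receiver running concurrently.

```go
package main

import "fmt"

func main() {
	// Buffered: up to 2 sends complete without any receiver.
	buf := make(chan int, 2)
	buf <- 1
	buf <- 2
	fmt.Println(<-buf, <-buf) // 1 2

	// Unbuffered: the send blocks until someone receives,
	// so the receiver must run in another goroutine.
	unbuf := make(chan int)
	go func() { unbuf <- 42 }()
	fmt.Println(<-unbuf) // 42
}
```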
## Practical Examples

### 1. Worker Pool Pattern
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const numWorkers = 3
	const numJobs = 10

	jobs := make(chan int, numJobs)
	results := make(chan int, numJobs)

	// Start workers
	for w := 1; w <= numWorkers; w++ {
		go worker(w, jobs, results)
	}

	// Send jobs
	for j := 1; j <= numJobs; j++ {
		jobs <- j
	}
	close(jobs)

	// Collect results
	for a := 1; a <= numJobs; a++ {
		result := <-results
		fmt.Printf("Result: %d\n", result)
	}
}

func worker(id int, jobs <-chan int, results chan<- int) {
	for job := range jobs {
		fmt.Printf("Worker %d processing job %d\n", id, job)
		time.Sleep(time.Second) // Simulate work
		results <- job * 2
	}
}
```
### 2. Fan-Out, Fan-In Pattern
```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
	"time"
)

func main() {
	// Generate numbers
	numbers := generateNumbers(10)

	// Fan-out: both stages read from the same channel,
	// distributing the work between them
	c1 := square(numbers)
	c2 := square(numbers)

	// Fan-in: collect results from both stages
	for result := range merge(c1, c2) {
		fmt.Printf("Result: %d\n", result)
	}
}

func generateNumbers(count int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for i := 0; i < count; i++ {
			out <- rand.Intn(100)
		}
	}()
	return out
}

func square(in <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for n := range in {
			time.Sleep(time.Millisecond * 100) // Simulate work
			out <- n * n
		}
	}()
	return out
}

func merge(inputs ...<-chan int) <-chan int {
	var wg sync.WaitGroup
	out := make(chan int)

	// Start an output goroutine for each input channel
	for _, input := range inputs {
		wg.Add(1)
		go func(ch <-chan int) {
			defer wg.Done()
			for n := range ch {
				out <- n
			}
		}(input)
	}

	// Close out once all inputs are drained
	go func() {
		wg.Wait()
		close(out)
	}()
	return out
}
```
## Select Statement

The `select` statement lets you wait on multiple channel operations at once:
```go
package main

import (
	"fmt"
	"time"
)

func main() {
	ch1 := make(chan string)
	ch2 := make(chan string)

	go func() {
		time.Sleep(1 * time.Second)
		ch1 <- "from ch1"
	}()

	go func() {
		time.Sleep(2 * time.Second)
		ch2 <- "from ch2"
	}()

	for i := 0; i < 2; i++ {
		select {
		case msg1 := <-ch1:
			fmt.Println(msg1)
		case msg2 := <-ch2:
			fmt.Println(msg2)
		case <-time.After(3 * time.Second):
			fmt.Println("timeout")
		}
	}
}
```
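Adding a `default` case makes a `select` non-blocking: if no channel operation is ready, the `default` branch runs immediately. A minimal sketch:

```go
package main

import "fmt"

func main() {
	ch := make(chan string, 1)

	// Nothing ready yet: the default case runs immediately.
	select {
	case msg := <-ch:
		fmt.Println("received", msg)
	default:
		fmt.Println("no message ready")
	}

	ch <- "hello"

	// Now the receive case can proceed.
	select {
	case msg := <-ch:
		fmt.Println("received", msg)
	default:
		fmt.Println("no message ready")
	}
}
```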
## Common Concurrency Patterns

### 1. Rate Limiting
```go
func rateLimiter(requests <-chan int) {
	// Allow one request every 200ms. time.Tick is convenient for
	// tickers that live as long as the program; use time.NewTicker
	// when you need to stop the ticker later.
	limiter := time.Tick(200 * time.Millisecond)

	for req := range requests {
		<-limiter // Wait for the next tick before processing
		fmt.Printf("Processing request %d\n", req)
	}
}
```
### 2. Context for Cancellation
```go
package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	go doWork(ctx)

	// Sleep past the 2s timeout so we see the cancellation
	time.Sleep(3 * time.Second)
}

func doWork(ctx context.Context) {
	for {
		select {
		case <-ctx.Done():
			fmt.Println("Work cancelled")
			return
		default:
			fmt.Println("Working...")
			time.Sleep(500 * time.Millisecond)
		}
	}
}
```
### 3. Pipeline Pattern
```go
func pipeline() {
	// Stage 1: Generate numbers
	numbers := make(chan int)
	go func() {
		defer close(numbers)
		for i := 1; i <= 10; i++ {
			numbers <- i
		}
	}()

	// Stage 2: Square numbers
	squares := make(chan int)
	go func() {
		defer close(squares)
		for n := range numbers {
			squares <- n * n
		}
	}()

	// Stage 3: Print results
	for result := range squares {
		fmt.Printf("Square: %d\n", result)
	}
}
```
## Best Practices

### 1. Close Channels from the Sender
```go
// Good: the sending goroutine closes the channel when done,
// so receivers ranging over it terminate cleanly
func process() {
	ch := make(chan int)
	go func() {
		defer close(ch) // Close from the sender side
		// ... send values
	}()
	// ... use channel
}
```
### 2. Use Buffered Channels When Appropriate

```go
// For a known capacity, a buffer decouples sender and receiver
ch := make(chan int, 100)

// When the capacity is unknown, start with an unbuffered channel
ch = make(chan int)
```
### 3. Avoid Goroutine Leaks

```go
// Bad: this goroutine can never be stopped
go func() {
	for {
		// infinite loop without exit condition
	}
}()

// Good: provide an exit mechanism
done := make(chan bool)
go func() {
	for {
		select {
		case <-done:
			return
		default:
			// do work
		}
	}
}()
```
### 4. Use sync.WaitGroup for Synchronization

```go
var wg sync.WaitGroup

for i := 0; i < 5; i++ {
	wg.Add(1)
	go func(id int) {
		defer wg.Done()
		fmt.Printf("Worker %d\n", id)
	}(i)
}

wg.Wait() // Wait for all goroutines to complete
```
## Performance Considerations

- Goroutines are cheap: you can create millions of them
- Channels have overhead: for simple shared state, a mutex or atomic may be faster
- Avoid unnecessary synchronization: don't over-engineer
- Profile your code: use `go tool pprof` to find bottlenecks
## Common Pitfalls

- Deadlocks: a send with no ready receiver (and no buffer space) blocks forever; if every goroutine blocks, the runtime reports "all goroutines are asleep - deadlock!"
- Race conditions: guard shared state with channels or `sync` primitives, and run tests with the `-race` flag
- Goroutine leaks: always provide exit conditions
- Blocking operations: remember that channel sends and receives can block
Go's concurrency model is one of its strongest features. With goroutines and channels, you can build highly concurrent applications that are both efficient and easy to understand.
Master these concepts, and you'll be able to build scalable, concurrent applications in Go! 🚀