How Many Goroutines Can Go Run?

To understand how many Goroutines can be created at most, we need to clarify the following questions first:
- What is a Goroutine?
- What resources does it consume?
What is a Goroutine?
A Goroutine is a lightweight thread abstracted by Go. It performs scheduling at the application level, allowing us to easily perform concurrent programming.
A Goroutine is started with the `go` keyword. The compiler translates this keyword into a `runtime.newproc` function call via the methods `cmd/compile/internal/gc.state.stmt` and `cmd/compile/internal/gc.state.call`. When a new Goroutine is started to perform a task, `runtime.newproc` initializes a `g` structure to run it.
How Much Resource Does a Goroutine Consume?
Memory Consumption
By launching Goroutines and blocking them, we can observe memory changes before and after to evaluate consumption:
```go
import (
	"fmt"
	"runtime"
	"sync"
)

func getGoroutineMemConsume() {
	var c chan int
	var wg sync.WaitGroup
	const goroutineNum = 1000000

	memConsumed := func() uint64 {
		runtime.GC() // trigger GC to exclude garbage objects from the measurement
		var memStat runtime.MemStats
		runtime.ReadMemStats(&memStat)
		return memStat.Sys
	}

	noop := func() {
		wg.Done()
		<-c // block on a nil channel forever so the Goroutine never exits and releases its stack
	}

	wg.Add(goroutineNum)
	before := memConsumed() // memory before creating the Goroutines
	for i := 0; i < goroutineNum; i++ {
		go noop()
	}
	wg.Wait()
	after := memConsumed() // memory after creating the Goroutines

	fmt.Println(runtime.NumGoroutine())
	fmt.Printf("%.3f KB\n", float64(after-before)/goroutineNum/1024)
}
```
Result analysis:
Each Goroutine consumes at least 2 KB of stack space. Assuming a computer has 2 GB of memory available for Goroutine stacks, at most about 2 GB / 2 KB = 1 million Goroutines can exist simultaneously.
CPU Consumption
The amount of CPU a Goroutine uses greatly depends on the logic of the function it executes. If the function involves CPU-intensive calculations and runs for a long duration, the CPU will quickly become the bottleneck.
The number of Goroutines that can run concurrently therefore depends on what the program is doing. If each task is a memory-heavy network operation, even a relatively small number of Goroutines can exhaust memory and crash the program.
Conclusion
The number of Goroutines that can be run depends on the CPU and memory consumption of the operations executed within them. If the operations are minimal (i.e., do almost nothing), memory becomes the bottleneck first: once the available memory is exhausted, the program fails with an out-of-memory error. If the operations are CPU-intensive, even a few Goroutines can saturate the CPU and make the program unresponsive.
Common Issues Triggered by Excessive Goroutines
- `too many open files` – too many file or socket descriptors are held open at once.
- `out of memory` – Goroutine stacks and the objects they reference exhaust available memory.
Application in Business Scenarios
How to Control the Number of Concurrent Goroutines?
`runtime.NumGoroutine()` can be used to monitor the number of active Goroutines.
1. Ensuring Only One Goroutine is Running a Task
When concurrency is needed inside an API endpoint, the number of Goroutines should be managed at the application level. For example, if a Goroutine is used to initialize a resource that only needs to be initialized once, there is no reason to let multiple Goroutines do this simultaneously. A `running` flag can be used to determine whether initialization is already in progress.
```go
// SingleConcurrencyRunner ensures that at most one instance of a task runs at a time
type SingleConcurrencyRunner struct {
	isRunning bool
	sync.Mutex
}

func NewSingleConcurrencyRunner() *SingleConcurrencyRunner {
	return &SingleConcurrencyRunner{}
}

func (c *SingleConcurrencyRunner) markRunning() (ok bool) {
	c.Lock()
	defer c.Unlock()
	// Check again under the lock to avoid a race with other callers
	if c.isRunning {
		return false
	}
	c.isRunning = true
	return true
}

func (c *SingleConcurrencyRunner) unmarkRunning() (ok bool) {
	c.Lock()
	defer c.Unlock()
	if !c.isRunning {
		return false
	}
	c.isRunning = false
	return true
}

func (c *SingleConcurrencyRunner) Run(f func()) {
	// Fast path: return immediately if a task is already running
	if c.isRunning {
		return
	}
	if !c.markRunning() {
		// Another caller won the race for the run flag
		return
	}
	// Execute the actual logic
	go func() {
		defer func() {
			// Clear the flag even if f panics, so future runs are not blocked
			c.unmarkRunning()
			if err := recover(); err != nil {
				// log the error
			}
		}()
		f()
	}()
}
```
Reliability test: check that the number of running Goroutines never exceeds the expected bound:
```go
func TestConcurrency(t *testing.T) {
	runner := NewSingleConcurrencyRunner()
	for i := 0; i < 100000; i++ {
		runner.Run(f)
	}
}

func f() {
	// The count should never exceed the allowed number of Goroutines
	if runtime.NumGoroutine() > 3 {
		fmt.Println(">3", runtime.NumGoroutine())
	}
}
```
2. Specifying the Number of Concurrent Goroutines
Callers that cannot get a slot can wait with a timeout, or fall back to serving old data instead of waiting.
Using Tunny allows control over the number of Goroutines. If all `Worker`s are occupied, a `workRequest` is not processed immediately but is queued in `reqChan` until a worker becomes available.
```go
func (w *workerWrapper) run() {
	//...
	for {
		// NOTE: Blocking here will prevent the worker from closing down.
		w.worker.BlockUntilReady()
		select {
		case w.reqChan <- workRequest{
			jobChan:       jobChan,
			retChan:       retChan,
			interruptFunc: w.interrupt,
		}:
			select {
			case payload := <-jobChan:
				result := w.worker.Process(payload)
				select {
				case retChan <- result:
				case <-w.interruptChan:
					w.interruptChan = make(chan struct{})
				}
				//...
			}
			//...
		}
	}
	//...
}
```
This implementation uses resident Goroutines: when the pool `Size` is changed, new `Worker`s are created to handle the tasks. Another approach is to use a buffered `chan` as a semaphore that controls whether a new Goroutine may be started: when the buffer is full, `Run` blocks until a slot is released.
```go
type ProcessFunc func(ctx context.Context, param interface{})

type MultiConcurrency struct {
	ch chan struct{}
	f  ProcessFunc
}

func NewMultiConcurrency(size int, f ProcessFunc) *MultiConcurrency {
	return &MultiConcurrency{
		ch: make(chan struct{}, size),
		f:  f,
	}
}

func (m *MultiConcurrency) Run(ctx context.Context, param interface{}) {
	// Blocks here when the buffer is full, until a slot is released
	m.ch <- struct{}{}
	go func() {
		defer func() {
			// Release a slot in the buffer
			<-m.ch
			if err := recover(); err != nil {
				fmt.Println(err)
			}
		}()
		m.f(ctx, param)
	}()
}
```
Test to ensure the number of Goroutines does not exceed 13 (10 worker slots plus a few Goroutines belonging to the runtime and test framework):
```go
func mockFunc(ctx context.Context, param interface{}) {
	fmt.Println(param)
}

func TestNewMultiConcurrency_Run(t *testing.T) {
	concurrency := NewMultiConcurrency(10, mockFunc)
	for i := 0; i < 1000; i++ {
		concurrency.Run(context.Background(), i)
		if runtime.NumGoroutine() > 13 {
			fmt.Println("goroutine", runtime.NumGoroutine())
		}
	}
}
```
With this approach, the system does not need to keep many Goroutines resident in memory. And even if 100 Goroutines were kept resident, the memory overhead would be only about 2 KB × 100 = 200 KB, which is negligible.