WARNING! This is all speculation, estimates, assumptions, the user's 5 cents and maybe even marketing by AI companies:
An AI researcher says that, given enough computing power, self-improving AI might be possible by 2027 (pretty scary, because this would be a fast track to AGI (more intelligence than the most intelligent human))
and he says it won't end well for mankind… Daniel Kokotajlo (ex-OpenAI) on the advent of AGI: "the army of superintelligences" that develop a parallel economy (for themselves) and maybe even go colonial "Frankenstein" on mankind
"it's not going to be gradual, more like being hit by a truck"

Why the AI Race Ends in Disaster (with Daniel Kokotajlo) https://www.youtube.com/watch?app=desktop&v=V7Q3DJ9V5CQ
IF AI goes Terminator 2 on everyone, it's because a human instructed it to. Scary enough.
As Tesla and other companies with definitely smart engineers try "the impossible" and simulate parts of the human brain ("drive this car without accidents from A to B using only 2 cameras (2 eyes)"): this could not be reliably accomplished in the last 10 years, will not be accomplished in the next 10 years, maybe in the next 30 years, and even that is still only a small part of what the human brain can do, using FAR less energy than current GPUs, CPUs and datacenters.
Q: is it possible to simulate a full human brain on computer chips and create a fully functional human being digitally? How many GPUs or CPUs would be used? What would be the power requirements?
ChatGPT 5.0 mini: (https://duck.ai/) Short answer: not yet — we cannot create a fully functional human being purely by simulating a human brain on chips today. We can estimate the computational scale and power needs for a full-brain simulation, but there are big scientific, technical, and philosophical gaps (understanding of brain microcircuits, learning/development, embodiment, consciousness) before a digital human can be claimed.
Key points and rough estimates
1) Why it’s not currently possible
– We lack a complete, validated model of how neurons, synapses, glia, and neuromodulatory systems produce human cognition and consciousness.
– Even with the right biophysical model, the brain’s function depends on development, body feedback (senses, motor control), hormones, and long-lived learning/experience — not just raw computation.
– Practical limits: full biophysical simulation at molecular/ion-channel detail is enormously costly and currently only done for tiny circuits.
2) Two common modelling targets (very different costs)
– Biophysically detailed simulation (Hodgkin–Huxley style neurons, detailed morphology, synapse kinetics): extremely expensive.
– High-level/abstract neural-network-style simulation (spiking network or rate-based units, simplified synapses): much cheaper but may miss essential biology.
3) Scale numbers to simulate a human brain
– Neurons: ~86 billion neurons.
– Synapses: ~100–500 trillion synapses (estimates often ~10^14–5×10^14).
– Spiking events: baseline firing maybe ~1–10 Hz average, bursts/higher rates in parts.
4) Approximate compute required (order-of-magnitude)
These are rough, from published estimates (scaling from Blue Brain / large-scale simulations) and back-of-envelope conversions.
– Detailed biophysical simulation (compartmental neuron models)
– Estimates: ~10^18–10^20 floating-point operations per second (FLOPS) to simulate a human brain in real time.
– That corresponds roughly to 1 exaFLOP (10^18) to 100 exaFLOP (10^20) class sustained compute.
– Simplified spiking-network simulation
– Estimates: ~10^15–10^17 FLOPS for real-time full-brain spiking models (pessimistic to optimistic).
– Some optimistic neuromorphic / event-driven implementations claim much lower energy and compute needs but are not yet demonstrated at whole‑brain scale.
5) How many GPUs / CPUs is that?
– A modern high-end GPU (as of 2025) can deliver roughly 10^13–10^14 FLOPS peak for mixed/FP16/TF32 workloads; sustained usable FLOPS for whole-system scientific workloads will be lower.
– So:
– For 10^18 FLOPS need → ~10^4–10^5 such GPUs (10,000–100,000 GPUs).
– For 10^20 FLOPS need → ~10^6–10^7 GPUs (1 million+ GPUs).
– On CPUs (assuming ~10^11 FLOPS per high-end CPU socket sustained), you’d need ~10^7–10^9 CPU sockets for the same workloads. GPUs are far more practical.
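To make the conversion explicit, here is a minimal back-of-envelope sketch in Python, using only the rough assumptions from this section (the target FLOPS ranges and an assumed 10^13–10^14 sustained FLOPS per GPU), not measured values:

```python
# Back-of-envelope: GPUs needed for a target sustained FLOPS budget.
# All numbers are rough assumptions from the estimates above.

def gpus_needed(target_flops: float, flops_per_gpu: float) -> float:
    """How many GPUs to reach a sustained FLOPS target."""
    return target_flops / flops_per_gpu

# Assumed sustained throughput per high-end GPU: 1e13–1e14 FLOPS.
for target in (1e18, 1e20):                  # detailed biophysical range
    low  = gpus_needed(target, 1e14)         # optimistic per-GPU figure
    high = gpus_needed(target, 1e13)         # pessimistic per-GPU figure
    print(f"{target:.0e} FLOPS -> {low:,.0f} to {high:,.0f} GPUs")
# 1e+18 FLOPS -> 10,000 to 100,000 GPUs
# 1e+20 FLOPS -> 1,000,000 to 10,000,000 GPUs
```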
6) Power requirements
– Modern high-end GPU card draws ~300–600 W under load; whole-node power includes CPUs, memory, interconnect, cooling overhead (PUE). Use approximate figures:
– If 10,000 GPUs × 400 W = 4 MW for GPUs alone. Add host CPUs, memory, interconnect, storage, and cooling (PUE ~1.5–2), total facility power ≈ 6–8 MW.
– If 100,000 GPUs × 400 W = 40 MW raw → facility ≈ 60–80 MW.
– For million‑GPU scale, power would be in the hundreds of MW to low GW.
– Biophysical simulations optimized on supercomputers have consumed megawatts; BlueGene-class demonstrations scaled this way.
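The same facility-power arithmetic as a short sketch; the GPU wattage and PUE range are the rough assumptions quoted above, nothing more:

```python
# Facility power estimate: GPU draw scaled by a PUE-style overhead factor.
# GPU wattage and PUE are the rough assumptions quoted above.

def facility_power_mw(n_gpus: int, watts_per_gpu: float, pue: float) -> float:
    """Total facility power in MW, including cooling/overhead via PUE."""
    return n_gpus * watts_per_gpu * pue / 1e6

for n in (10_000, 100_000):
    lo = facility_power_mw(n, 400, 1.5)   # efficient facility
    hi = facility_power_mw(n, 400, 2.0)   # inefficient facility
    print(f"{n:,} GPUs: {lo:.0f}-{hi:.0f} MW facility power")
# 10,000 GPUs: 6-8 MW facility power
# 100,000 GPUs: 60-80 MW facility power
```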
7) Energy-efficient alternatives
– Neuromorphic hardware (SpiNNaker, Loihi, analog/photonic neuromorphic) aims to simulate spiking networks with much lower energy (orders of magnitude less), but scaling to whole-brain with functionally equivalent cognition remains unproven. If successful, power could be in kilowatts–megawatts rather than tens of MW — but that’s speculative.
8) Other constraints
– Memory: storing neuron states and synaptic weights (10^14 synapses) requires hundreds of terabytes to petabytes depending on precision and per-synapse state. Example: 10^14 synapses × 4 bytes = 4×10^14 bytes = 400 TB for bare weights alone; with richer per-synapse state (tens of bytes plus connectivity info) this grows into the petabyte range (sanity-checked in the sketch after this section).
– Interconnect latency/bandwidth: simulating large networks in real-time requires extremely high-bandwidth, low-latency fabrics; these are nontrivial at exascale.
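A quick sanity check of the memory figure above; the bytes-per-synapse values are illustrative guesses only:

```python
# Synapse-state memory for a whole-brain model, for a few assumed
# bytes-per-synapse figures (illustrative guesses only).
N_SYNAPSES = 1e14

for bytes_per_syn in (4, 16, 64):
    total_bytes = N_SYNAPSES * bytes_per_syn
    print(f"{bytes_per_syn:3d} B/synapse -> {total_bytes/1e12:,.0f} TB "
          f"({total_bytes/1e15:.1f} PB)")
#   4 B/synapse -> 400 TB (0.4 PB)
#  16 B/synapse -> 1,600 TB (1.6 PB)
#  64 B/synapse -> 6,400 TB (6.4 PB)
```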
9) Conclusion (practical takeaway)
– Simulating a brain at full biophysical fidelity in real time likely requires exascale to beyond-exascale compute (10^18–10^20 FLOPS), hundreds of MW of power, and massive memory/storage/interconnect — currently beyond practical single‑site budgets and limited by science (model validity) as much as engineering.
– Simpler models or neuromorphic approaches reduce compute and power by orders of magnitude but don’t yet produce human-equivalent cognition or a “fully functional human being.”
Q: how many watts does a human, dog, lizard, or fly brain use?
ChatGPT 5 mini:
Approximate resting power consumption of brains (typical values, averaged):
- Human brain (adult): ~20 W (range ~15–25 W).
- Dog brain: ~1–5 W (varies by size; small dogs toward 1 W, large dogs a few watts).
- Lizard brain (small reptile): ~0.01–0.1 W (tens to hundreds of milliwatts; very dependent on species/size/temperature).
- Fly brain (fruit fly): ~1–10 mW (milliwatts; commonly quoted ~1–5 mW).
Notes: these are order-of-magnitude estimates. Brain power scales roughly with brain mass and metabolic rate; ectotherms (lizards) can be much lower and temperature-dependent; small animals’ brains often use proportionally more energy per mass but absolute power is tiny.
ChatGPT 4 mini:
Human Brain
The human brain consumes about 20 watts of power, which is roughly 20% of the body’s total energy expenditure, despite accounting for only about 2% of body weight.
Dog Brain
A dog’s brain uses approximately 5 to 10 watts. This varies based on the dog’s size and activity level.
Lizard Brain
Lizards have a lower metabolic rate, and their brains consume about 1 to 2 watts.
Fly Brain
Flies have very small brains, and their power consumption is minimal, around 0.01 to 0.1 watts.
These values can vary based on factors such as activity level, age, and specific species within each category.
Qf: how many GFLOPS would be required to simulate a complete human brain? and how much energy would that use with state-of-the-art GPUs?
ChatGPT 5.0 mini:
Estimates vary widely because “simulate a complete human brain” can mean different things (biophysically detailed neuron+synapse models vs. abstract spiking-network models vs. functional emulation). I’ll give three representative approaches, show rough GFLOPS/TFlops numbers, and convert to energy using current top GPUs (NVIDIA H100-ish power efficiency). These are order-of-magnitude estimates.
Assumptions used
– Human brain ≈ 86 billion neurons, ≈ 100 trillion synapses (common ballpark).
– Biophysical simulation (Hodgkin–Huxley style) ≈ 10^4–10^6 FLOPS per neuron per ms (literature varies).
– Spiking-network simplified simulation ≈ 10^2–10^4 FLOPS per neuron per ms.
– High-level AI-scale emulation (transformer-like, trained networks that reproduce behavior) — effectively measured by model size; comparable functional models often in the 10^14–10^16 parameter range (very rough).
– GPU baseline: NVIDIA H100 peak FP32 ~60 TFLOPS (sustained lower); FP16/TF32/FP8 higher — use an effective ~100 TFLOPS per GPU for efficient mixed-precision workloads as a round figure. Power draw ~350 W (typical TDP); efficiency ≈ 100 TFLOPS / 350 W ≈ 0.29 TFLOPS per watt (285 GFLOPS/W).
1) Biophysically detailed simulation (Hodgkin–Huxley level)
– FLOPS estimate: 10^4–10^6 FLOPS/neuron/ms → per second multiply by 1000:
– 10^7–10^9 FLOPS per neuron per second.
– For 86e9 neurons:
– 8.6e17 – 8.6e19 FLOPS (i.e., 8.6×10^8 – 8.6×10^10 GFLOPS) → 8.6e5 – 8.6e7 TFLOPS → 0.86–86 million TFLOPS.
– GPUs required (100 TFLOPS effective each):
– 8.6e3 – 8.6e5 GPUs.
– Power (350 W each):
– 3.0 MW – 301 MW (megawatts). Calculation: GPUs × 350 W.
2) Spiking-network simplified simulation
– FLOPS estimate: 10^2–10^4 FLOPS/neuron/ms → per second 10^5–10^7 FLOPS/neuron/s.
– For 86e9 neurons:
– 8.6e15 – 8.6e17 FLOPS → 8.6e6 – 8.6e8 GFLOPS → 8.6e3 – 8.6e5 TFLOPS.
– GPUs required (100 TFLOPS each):
– 86 – 8.6e3 GPUs.
– Power (350 W each):
– ~30 kW – 3.0 MW.
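Both cases follow the same per-neuron FLOPS scaling, so here is a minimal sketch reproducing cases 1 and 2 under the stated assumptions (~100 TFLOPS effective and 350 W per GPU):

```python
# Per-neuron FLOPS scaling for whole-brain, real-time simulation.
# FLOPS/neuron/ms ranges and GPU figures are the assumptions stated above.
N_NEURONS  = 86e9
GPU_TFLOPS = 100      # assumed effective mixed-precision throughput
GPU_WATTS  = 350      # assumed board power

def scenario(name: str, flops_per_neuron_per_ms: float) -> None:
    total_flops = N_NEURONS * flops_per_neuron_per_ms * 1000  # per ms -> per s
    gpus  = total_flops / (GPU_TFLOPS * 1e12)
    power = gpus * GPU_WATTS
    print(f"{name}: {total_flops:.1e} FLOPS, {gpus:,.0f} GPUs, "
          f"{power/1e6:.2f} MW")

scenario("HH detailed, low ", 1e4)   # 8.6e+17 FLOPS, 8,600 GPUs, 3.01 MW
scenario("HH detailed, high", 1e6)   # 8.6e+19 FLOPS, 860,000 GPUs, 301.00 MW
scenario("spiking, low     ", 1e2)   # 8.6e+15 FLOPS, 86 GPUs, 0.03 MW
scenario("spiking, high    ", 1e4)   # 8.6e+17 FLOPS, 8,600 GPUs, 3.01 MW
```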
3) Functional emulation via large AI models
– If a model with 10^15 parameters were needed and each parameter update/inference step costs ~2 FLOPs (very simplified) per token, inference FLOPS depend on tokens/sec target. Large models get to peta- or exa-FLOP requirements depending on temporal resolution. Hard to map directly to neuron count.
– Ballpark: to run a 10^15-10^16 parameter model at interactive latencies might need 10^3–10^5 TFLOPS → thousands of GPUs → tens of kW to MWs.
Energy per second compared to the human brain
– Human brain uses ~20 W. Even the most optimistic simplified GPU simulation scenarios consume kilowatts to megawatts — orders of magnitude more energy than the biological brain.
Key takeaways (concise)
– Biophysically detailed simulation: ~10^18–10^20 FLOPS → thousands–hundreds of thousands of top GPUs → megawatts to hundreds of megawatts.
– Simplified spiking model: ~10^16–10^18 FLOPS → tens–thousands of GPUs → tens of kilowatts to a few megawatts.
– Functional AI-style emulation: highly variable; likely thousands of GPUs and kilowatts–megawatts depending on fidelity and speed.
– In all realistic estimates, current GPU-based simulations would use many orders of magnitude more power than 20 W (the biological brain).
If you want, I can: (A) compute exact numerical examples for chosen parameter values (e.g., 10^3 FLOPS/neuron/ms) and show GPUs and power, or (B) translate estimates to cost/hour at current electricity prices. Which would you like?
ChatGPT 4.0 mini:
NVIDIA’s A100 GPU has 20000 GFLOPS
GFLOPS Required: 1,000,000 GFLOPS (1 exaflop)
Total Power Consumption: 15,000 Watts (15 kW)
Energy Consumption for 24 hours: 360 kWh
To simulate a human brain: 1 exaFLOPS = 10^18 FLOPS. But note the unit slip: 1,000,000 GFLOPS = 10^15 FLOPS, i.e. 1 petaFLOP, not 1 exaFLOP; 50x NVIDIA's A100 (~20 TFLOPS each) only reaches 10^15 FLOPS at ~15,000 W, while a real exaFLOP would need ~50,000 A100s.
this is way less power usage than expected, but of course completely unfeasible as an eCar driving assistant (power requirements far too high)
“GPT-3 requires 10^15 – 10^30 FLOPS to operate at the brain’s speed, which is roughly the same amount of compute necessary to run a decent emulation of the human brain” (lesswrong.com)
so the 4.0 answer seems to be way too optimistic X-D
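A two-line sanity check makes the unit slip obvious (A100 throughput as quoted in the 4.0 answer above):

```python
# Unit check on the "50x A100 = 1 exaFLOP" claim quoted above.
A100_FLOPS = 20_000 * 1e9          # 20,000 GFLOPS = 2e13 FLOPS (as quoted)
print(f"50 x A100 = {50 * A100_FLOPS:.0e} FLOPS")        # 1e+15 = 1 petaFLOP
print(f"GPUs for 1 exaFLOP: {1e18 / A100_FLOPS:,.0f}")   # 50,000
```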
Qff: How many NVIDIA H20 (96 GBytes VRAM) GPUs would be required to simulate a complete human brain on a neuron level (Hodgkin–Huxley style)?
ChatGPT 5.0 mini:
Give a target FLOPS per neuron first — literature ranges widely (10^4–10^6 FLOPS per neuron per ms for Hodgkin–Huxley–style models). I’ll compute for three representative values: 1×10^4, 1×10^5, and 1×10^6 FLOPS/neuron/ms.
Assumptions
– Neuron count = 86×10^9.
– Convert per ms → per second by ×1000.
– NVIDIA H20 (96 GB) peak effective mixed-precision compute — use a conservative usable inference/training figure of 160 TFLOPS per GPU (mixed precision; pick this as a practical sustained-equivalent number). If you want a different GPU throughput, tell me and I’ll recalc.
– 1 TFLOP = 10^12 FLOPS.
Calculations
1) 1×10^4 FLOPS / neuron / ms
– Per neuron per second: 1×10^7 FLOPS.
– Total FLOPS: 86e9 × 1×10^7 = 8.6×10^17 FLOPS = 860,000 TFLOPS.
– H20 GPUs required: 860,000 TFLOPS / 160 TFLOPS per GPU = 5,375 GPUs.
2) 1×10^5 FLOPS / neuron / ms
– Per neuron per second: 1×10^8 FLOPS.
– Total FLOPS: 8.6×10^18 FLOPS = 8,600,000 TFLOPS.
– GPUs required: 8,600,000 / 160 = 53,750 GPUs.
3) 1×10^6 FLOPS / neuron / ms
– Per neuron per second: 1×10^9 FLOPS.
– Total FLOPS: 8.6×10^19 FLOPS = 86,000,000 TFLOPS.
– GPUs required: 86,000,000 / 160 = 537,500 GPUs.
Summary
– At 1×10^4 FLOPS/neuron/ms: ~5.4k H20 GPUs.
– At 1×10^5 FLOPS/neuron/ms: ~53.8k H20 GPUs.
– At 1×10^6 FLOPS/neuron/ms: ~537.5k H20 GPUs.
If you want per-GPU memory constraints checked (model state, synapse counts, connectivity, and per-neuron variables), I can estimate VRAM needs and whether 96 GB per H20 is sufficient — tell me assumed bytes per neuron and bytes per synapse (or I can use typical guesses).
Qfff: use typical guesses
ChatGPT 5.0 mini:
I’ll use typical guesses for state per neuron and per synapse, then compute total memory and how many NVIDIA H20 (96 GiB) GPUs are needed assuming the brain is partitioned across GPUs with no memory compression and each GPU holds an equal share of state.
Assumptions (typical guesses)
– Neurons: 86e9.
– Synapses: 1e14 (100 trillion).
– Per-neuron state: 1,024 bytes (1 KiB) — membrane potential, gating variables, buffers, metadata.
– Per-synapse state: 16 bytes — weight (4 B), delay/flags (4 B), pointers/indexes or compressed connectivity info (8 B).
– Additional overhead (connectivity indices, buffers, book-keeping): 20% of total state.
– GPU VRAM usable per H20: 96 GiB = 96 × 2^30 bytes ≈ 103,079,215,104 bytes.
Calculations
1) Neuron state total
– 86e9 neurons × 1,024 B = 8.8064e13 B ≈ 88.06 TB.
2) Synapse state total
– 1e14 synapses × 16 B = 1.6e15 B ≈ 1,600 TB.
3) Raw total state
– 88.06 TB + 1,600 TB = 1,688.06 TB.
4) Add 20% overhead
– 1,688.06 TB × 1.2 = 2,025.672 TB ≈ 2,025.7 TB.
Convert to bytes: 2,025.7 TB = 2,025.7 × 10^12 bytes ≈ 2.0257e15 bytes.
Per-GPU capacity: 103,079,215,104 bytes ≈ 1.031e11 bytes.
GPUs required (memory-limited)
– 2.0257e15 bytes / 1.031e11 bytes per GPU ≈ 19,645 GPUs.
Combining with the compute estimates from before (the mid case, 1e5 FLOPS/neuron/ms, gave ~53,750 GPUs): the final requirement is the max of the compute and memory constraints.
Results (typical guesses)
– Memory-limited GPUs needed: ~19,600 H20 GPUs
– Compute-limited GPUs (from prior Hodgkin–Huxley FLOPS assumptions):
– 1e4 FLOPS/neuron/ms → ~5,375 GPUs (compute-bound)
– 1e5 FLOPS/neuron/ms → ~53,750 GPUs (compute-bound)
– 1e6 FLOPS/neuron/ms → ~537,500 GPUs (compute-bound)
– Using the typical guesses above, for the 1e5 FLOPS/neuron/ms Hodgkin–Huxley level you’d be limited by compute (≈53.8k GPUs). For the 1e4 case memory would be the limiter (19.6k GPUs vs 5.4k compute), so you’d need ~19.6k GPUs. For 1e6 you’d need ~537.5k GPUs (compute >> memory).
If you want different per-neuron/synapse sizes or to account for distributed-memory inefficiencies, communication buffers, or sparse connectivity formats, tell me and I’ll recompute.
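Putting the Qff compute scenarios and these memory guesses together, a minimal sketch under exactly the assumptions above (160 TFLOPS sustained per H20, 96 GiB VRAM, 1 KiB per neuron, 16 B per synapse, 20% overhead):

```python
# Compute-limited vs memory-limited H20 count for a whole-brain
# Hodgkin–Huxley model; all constants are the guesses from above.
N_NEURONS  = 86e9
N_SYNAPSES = 1e14
H20_TFLOPS = 160            # assumed sustained mixed-precision throughput
H20_VRAM   = 96 * 2**30     # 96 GiB in bytes

# Memory limit: 1 KiB per neuron + 16 B per synapse, +20% overhead.
state_bytes = (N_NEURONS * 1024 + N_SYNAPSES * 16) * 1.2
mem_gpus = state_bytes / H20_VRAM
print(f"memory-limited: {mem_gpus:,.0f} GPUs")  # ~19,650 (chat rounds to 19,645)

# Compute limit for the three FLOPS/neuron/ms assumptions; the
# binding constraint is the larger of the two GPU counts.
for f in (1e4, 1e5, 1e6):
    compute_gpus = N_NEURONS * f * 1000 / (H20_TFLOPS * 1e12)
    print(f"{f:.0e} FLOPS/neuron/ms: compute {compute_gpus:,.0f}, "
          f"need {max(compute_gpus, mem_gpus):,.0f} GPUs")
# 1e+04: compute 5,375, need ~19,650 (memory-bound)
# 1e+05: compute 53,750, need 53,750 (compute-bound)
# 1e+06: compute 537,500, need 537,500 (compute-bound)
```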
Qffff: is it correct that at 100% utilization 19,600 H20 GPUs would use 350 W each, or 6,860,000 W total?
ChatGPT 5.0 mini:
Close, but adjust a couple of small points.
– NVIDIA H20 typical board power ≈ 350 W (depends on SKU/clocking).
– 19,600 GPUs × 350 W = 6,860,000 W = 6.86 MW — that arithmetic is correct.
But real-world total power will be higher because you must include:
– Host CPUs, memory, NVLink/IB switches, storage, and networking — add ~20–50% overhead (commonly).
– Cooling and facility PUE (power usage effectiveness) — data centers add ~1.2×–1.6× depending on efficiency.
Examples:
– With 30% system overhead: 6.86 MW × 1.30 ≈ 8.92 MW.
– With PUE 1.3 applied to the raw GPU draw instead: 6.86 MW × 1.3 ≈ 8.92 MW (same figure, different cause).
– With PUE 1.5 alone: 6.86 MW × 1.5 ≈ 10.29 MW; with 30% system overhead on top, around 6.86 MW × 1.3 × 1.5 ≈ 13.4 MW.
So yes, 19,600 H20s at 350 W each = 6.86 MW of GPU draw; expect total facility power ≈ 9 MW to 13 MW depending on overhead and PUE.
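The same arithmetic as a tiny script, keeping system overhead and PUE as separate factors (both are the rough guesses above):

```python
# Facility power for the 19,600-GPU H20 estimate: GPU draw, then
# system overhead (hosts, network, storage), then cooling via PUE.
gpu_mw = 19_600 * 350 / 1e6
print(f"GPU draw: {gpu_mw:.2f} MW")                     # 6.86 MW

for overhead, pue in ((1.30, 1.0), (1.0, 1.3), (1.30, 1.5)):
    total = gpu_mw * overhead * pue
    print(f"overhead x{overhead}, PUE {pue}: {total:.2f} MW")
# overhead x1.3, PUE 1.0: 8.92 MW
# overhead x1.0, PUE 1.3: 8.92 MW
# overhead x1.3, PUE 1.5: 13.38 MW
```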
Links:
- currently 1x NVIDIA H20 96 GBytes VRAM sells at ~13,000 bucks, so 19,600 × 13,000 = 254,800,000 bucks for the GPUs alone
- so it's doable to simulate a full human brain (at this level of abstraction), but the effort is massive: at least 1 billion of investment once, plus the power costs of ~10 MW operation; that is 10 MWh per hour, and at 0.1 bucks per kWh: echo "scale=5; 1000*10*0.1" | bc = 1000 bucks per hour
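For completeness, the cost arithmetic in Python; the 13,000-bucks GPU price and 0.1 bucks/kWh electricity rate are the assumptions just stated:

```python
# Rough capex + opex for the 19,600-GPU estimate.
# $13,000 per H20 and $0.10 per kWh are the assumptions stated above.
n_gpus      = 19_600
gpu_price   = 13_000          # USD per H20 (assumed street price)
facility_mw = 10              # ~10 MW total facility draw (from above)
usd_per_kwh = 0.10

capex = n_gpus * gpu_price
opex_per_hour = facility_mw * 1000 * usd_per_kwh      # MW -> kWh per hour
print(f"GPUs alone: ${capex:,}")                      # $254,800,000
print(f"power cost: ${opex_per_hour:,.0f}/hour "
      f"(~${opex_per_hour * 24 * 365:,.0f}/year)")    # $1,000/h, ~$8,760,000/y
```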
