Prompt to Chip

Type a question. Follow it through every physical layer that has to work before AI can answer.

2026
[ The Application Layer ]
Your prompt leaves the device as a data packet
[Diagram: your device tokenizes, encrypts, and sends your question as a data packet heading to the network. OpenAI · Anthropic · Google · Meta]

Your question is broken into tokens, encrypted, and sent as a tiny data packet toward a data center that might be thousands of miles away.
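That first step can be sketched in miniature. A toy whitespace tokenizer and a one-time-pad XOR stand in for the real subword tokenizer and TLS encryption; every name and size here is illustrative:

```python
import os

def toy_tokenize(text: str) -> list[str]:
    # Toy whitespace tokenizer; real models use subword schemes like BPE.
    return text.split()

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Stand-in for TLS (AES-GCM in practice): XOR against a random keystream.
    return bytes(b ^ k for b, k in zip(data, key))

prompt = "What is the meaning of life?"
tokens = toy_tokenize(prompt)            # 6 tokens
payload = prompt.encode("utf-8")
key = os.urandom(len(payload))
packet = xor_encrypt(payload, key)       # 28 bytes: a genuinely tiny packet

assert xor_encrypt(packet, key) == payload  # the far end can decrypt it
```

Real packets also carry TCP/IP and TLS framing, so even a one-line question travels as a few hundred bytes.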

ENCRYPTED PACKET LEAVES DEVICE
WiFi: the wireless link between your device and a nearby router, typically the first hop your data packet takes.
ENTERS THE NETWORK
[ Data Transmission ]
Your prompt travels through fiber, ground stations, and satellites
[Diagram: a fiber optic cable in cross-section: a glass core carries your data, cladding keeps the light in, and a protective jacket wraps both. Submarine cables run 800,000+ miles along the ocean floor; LEO satellites offer an alternate route. Transmission speed in fiber: ~200,000 km/s; latency: 10-50 ms. Cloudflare · Equinix · SpaceX · AT&T]

Your prompt races through glass fiber at two-thirds the speed of light, bouncing off satellites or snaking along the ocean floor through 800,000+ miles of submarine cable.
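A quick back-of-envelope check on that speed. Propagation delay is just distance divided by signal speed; the ~5,600 km New York to London cable run below is an assumed figure:

```python
FIBER_SPEED_KM_S = 200_000  # light in glass: roughly two-thirds of c

def one_way_latency_ms(distance_km: float) -> float:
    # Pure propagation delay; routing and queuing add more on top.
    return distance_km / FIBER_SPEED_KM_S * 1000

print(one_way_latency_ms(5_600))   # 28.0 ms, inside the 10-50 ms range above
```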

FIBER TRUNK
100G (100 Gigabit): a fiber network link moving 100 billion bits per second, fast enough to transfer a full movie in under a second.
ARRIVES AT DATA CENTER
[ The Data Center ]
A warehouse burning megawatts of power — your prompt arrives here
[Diagram: your prompt arrives at rows of server racks (~10,000 GPUs per data center). Evaporative cooling towers reject facility heat; a 20-100 MW power substation, enough to power a small city, feeds the grid-to-rack path; closed-loop liquid cooling carries coolant direct to chip, because air can't keep up. Scale (2026): $300B+ Big Tech capex, 20 GW of new capacity. AWS · Google Cloud · Microsoft Azure · CoreWeave]

Your prompt arrives at a building the size of several football fields. Big Tech is spending over $300 billion this year building more of them.
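A back-of-envelope check that 10,000 GPUs really does need a substation. Per-GPU draw and the PUE (cooling and conversion overhead) below are assumed round numbers, not figures from this page:

```python
GPUS = 10_000
WATTS_PER_GPU = 1_000   # assumed ~1 kW per modern AI accelerator
PUE = 1.3               # assumed power usage effectiveness (cooling overhead)

it_power_mw = GPUS * WATTS_PER_GPU / 1e6   # 10 MW of compute
facility_mw = it_power_mw * PUE            # 13 MW at the meter
print(facility_mw)
```

That lands at 13 MW for the GPUs alone, so a 20-100 MW substation leaves headroom for CPUs, storage, networking, and growth.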

ROUTED TO GPU CLUSTER
IB 400G (InfiniBand 400G): an ultra-fast interconnect used inside data centers so GPUs can exchange data with very low latency.
INFINIBAND
[ The GPU Rack — NVL72 ]
72 GPUs working as one brain, connected at terabytes per second
[Diagram: each GPU module carries 8 GPU dies plus HBM; an NVLink backbone moves 1.8 TB/s between GPUs; an NVL72 rack holds 72 GPUs, draws ~120 kW, and costs ~$3M. NVIDIA · Dell · Supermicro · Vertiv]

Your prompt gets split across 72 GPUs that talk to each other at terabytes per second.
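To see what 1.8 TB/s buys, here is the time to push a tensor between two GPUs over that link; the 1 GB activation size is an assumed example, not a figure from this page:

```python
NVLINK_TB_S = 1.8   # per-GPU NVLink bandwidth quoted above

def transfer_time_us(gigabytes: float) -> float:
    # Microseconds to move a buffer of this size across the NVLink fabric.
    return gigabytes / (NVLINK_TB_S * 1000) * 1e6

print(round(transfer_time_us(1.0), 1))   # ~555.6 microseconds for 1 GB
```

Roughly half a millisecond per gigabyte is what makes it practical to split one model across 72 chips and still generate words in real time.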

NVLink / PCIe
PCIe: the high-speed bus inside a server that connects the CPU, GPUs, and other accelerators.
INTO THE SILICON
[ The GPU Die + High Bandwidth Memory ]
Billions of transistors and stacked memory — where inference happens
[Diagram: a CoWoS package: on one substrate, the GPU compute die (208B transistors, TSMC 4NP) sits flanked by HBM4 stacks, each with 12+ DRAM layers and 2+ TB/s of bandwidth. NVIDIA · SK Hynix · Samsung · Micron · AMD]

Each GPU has 208 billion transistors flanked by towers of stacked memory feeding data at terabytes per second.
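Why that bandwidth matters: during generation, each new token reads essentially all of the model's weights once, so memory bandwidth caps single-GPU speed. Model size and total bandwidth below are assumed round numbers for illustration:

```python
HBM_TB_S = 2.0        # assumed aggregate HBM bandwidth for one GPU
MODEL_BYTES = 140e9   # assumed 70B-parameter model at 2 bytes per weight

# Roofline-style bound: one full weight read per generated token.
tokens_per_sec = HBM_TB_S * 1e12 / MODEL_BYTES
print(round(tokens_per_sec, 1))   # ~14.3 tokens/sec from bandwidth alone
```

This is why HBM sits millimeters from the die: the model is bottlenecked on moving weights, not on arithmetic.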

WHERE THESE CHIPS COME FROM
Si (silicon): the base semiconductor material that nearly all modern chips are built on.
THE SEMICONDUCTOR FAB
[ The Semiconductor Fab ]
The most complex buildings humans construct
[Diagram: the fab cleanroom, where fan filter units replace the air every 3 seconds. Process tools (ion implant / CVD) modify the silicon surface; etch tools carve circuits with plasma to nanometer accuracy; deposition (CVD / ALD) builds thin-film layers atom by atom; metrology inspects every layer, since a defect means a scrapped wafer; FOUPs carry wafers between tools. A fab takes 3-5 years to build at $20-50B capex. TSMC · Samsung Foundry · Intel Foundry]

TSMC makes virtually all advanced AI chips. A single dust particle can ruin a chip whose billions of transistors are each smaller than a virus.
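The dust-particle point has a classic formula behind it. Under a Poisson defect model, die yield is Y = exp(-D * A) for defect density D and die area A; both numbers below are assumed for illustration:

```python
import math

def die_yield(defects_per_cm2: float, die_area_cm2: float) -> float:
    # Poisson yield model: probability a die of this area has zero defects.
    return math.exp(-defects_per_cm2 * die_area_cm2)

# A large ~8 cm^2 AI die at an assumed 0.1 defects/cm^2:
print(round(die_yield(0.1, 8.0), 3))   # 0.449: over half the dies scrapped
```

Big dies suffer exponentially: at the same defect density, a small 1 cm^2 die would yield about 90%.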

THE MACHINE INSIDE THE FAB
EUV (extreme ultraviolet): light with a 13.5nm wavelength used to print the tiniest transistor features on advanced chips. Only ASML makes these machines.
13.5nm LIGHT
[ ASML EUV Lithography ]
The deepest layer. $200–400M per machine. ~70 built per year.
[Diagram: inside the vacuum chamber, a CO2 laser pulse hits falling tin droplets 50,000 times per second, creating a plasma that emits EUV light. A mirror train of Zeiss optics, polished to <0.05nm smoothness, shapes the beam (M1, M2, M3) onto the mask and wafer stage, positioned with nanometer precision. The bottleneck: ~70 tools per year at $200-400M each, sole maker ASML; EUV limits chip supply. ASML · Zeiss · Cymer. The machine that sets the ceiling for AI.]

This is the foundation of everything. A laser hits tin, mirrors shape the light, and the pattern is printed onto a wafer. Only ~70 of these machines are built per year.
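The 13.5nm figure is what makes EUV so hard. Each photon carries energy E = hc / lambda, which is high enough that air and glass absorb it, so the whole optical path runs in vacuum with mirrors instead of lenses:

```python
H_EV_S = 4.135667e-15    # Planck constant in eV*s
C_NM_S = 2.99792458e17   # speed of light in nm/s

# Photon energy E = h * c / wavelength, for EUV's 13.5 nm light.
energy_ev = H_EV_S * C_NM_S / 13.5
print(round(energy_ev, 1))   # ~91.8 eV, versus ~2-3 eV for visible light
```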

Now the response races back up through every layer
07

Wafer Patterned

EUV light prints 70+ circuit layers, each aligned to sub-nanometer precision — a few atoms wide.

06

Chip Packaged

The wafer is cut, tested, and paired with HBM memory stacks on a single substrate.

05

Memory Feeds the Model

Terabytes per second of bandwidth stream your context through stacked DRAM towers into the GPU die.

04

GPUs Generate Tokens

72 coordinated GPUs turn your question into the math that produces each next word.

03

Data Center Sends

The generated tokens leave the data center as packets, headed back across the network.

02

Fiber Returns

At two-thirds the speed of light, back across the ocean and into your local network.

01

Rendered on Screen

Your device decrypts the tokens and the answer appears, character by character.
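That final step can be sketched as a toy streaming renderer; the function name and chunking are illustrative, not from any real client:

```python
import time

def render_stream(chunks: list[str], delay_s: float = 0.0) -> str:
    # Append each decrypted chunk character by character, like a chat UI.
    shown = ""
    for chunk in chunks:
        for ch in chunk:
            shown += ch
            time.sleep(delay_s)   # pacing; real UIs draw as bytes arrive
    return shown

print(render_stream(["The answer ", "appears ", "piece by piece."]))
```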

What is the meaning of life?
AI

That answer depended on fiber networks, data centers, GPUs, advanced memory, and chip fabs running at extreme precision.