What Is a Neuron?
Your brain has roughly 86 billion neurons. Each one does something embarrassingly simple. Artificial neurons do essentially the same thing, and that simplicity is why they're so powerful.
After this lesson you'll know:
- What a neuron computes: weighted sum + bias + activation
- What weights, biases, and activation functions do
- Why stacking simple neurons creates intelligence
- The difference between Step, ReLU, and Sigmoid activations
The Concept
A voting booth in your brain.
Think of it like a voting booth. Three friends each send you a signal — maybe weak, maybe strong. You multiply each signal by how much you trust that friend (that's the weight). You add up all the votes, plus a little nudge called the bias (your default mood). Then you decide: do I fire, or stay quiet? That decision is the activation function.
That's it. That's the entire computation a neuron does. And AI is made of millions of these.
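In code, that whole voting booth fits in a few lines. Here's a minimal sketch (the signal values and trust levels are made up for illustration), using a sigmoid as the activation:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum + bias, then activation."""
    # Each friend's signal, scaled by how much we trust them, plus our default mood
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the total into (0, 1): "how strongly do I fire?"
    return 1 / (1 + math.exp(-total))

# Three friends send signals; we trust them by different amounts
print(neuron([1.0, 0.5, -0.5], [0.8, 0.3, 0.5], bias=0.1))  # ≈ 0.69
```

The weighted sum here is 0.8 + 0.15 − 0.25 + 0.1 = 0.8, and the sigmoid of 0.8 is about 0.69: a fairly confident "fire".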
Play With It
Live neuron — move the sliders and watch.
Try this: Set w1 to a large positive number and w2 to a large negative number. Watch how they compete. What happens to the output?
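The same experiment works in plain code. This sketch leaves the activation out so you can watch the raw weighted sum as the two weights compete (the specific numbers are just for illustration):

```python
def weighted_sum(inputs, weights, bias):
    # Each input votes, scaled by its weight; the bias nudges the total
    return sum(x * w for x, w in zip(inputs, weights)) + bias

x = [1.0, 1.0]                             # both inputs send a strong signal
print(weighted_sum(x, [5.0, -5.0], 0.0))   # evenly matched: 0.0
print(weighted_sum(x, [5.0, -2.0], 0.0))   # positive weight wins: 3.0
print(weighted_sum(x, [2.0, -5.0], 0.0))   # negative weight wins: -3.0
```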
Activation Function:
The bias shifts the activation threshold.
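A quick sketch of that threshold shift: with a step activation and a fixed input, the bias alone decides whether the neuron fires (values chosen for illustration).

```python
def step(z):
    return 1 if z > 0 else 0

x, w = 1.0, 0.4                          # fixed input and weight
for bias in (-1.0, 0.0, 1.0):
    print(bias, step(x * w + bias))      # -1.0 -> 0, 0.0 -> 1, 1.0 -> 1
```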
Live Computation
Key Concepts
The building blocks of every neuron.
Match Each Concept to Its Role
Tap a concept on the left, then tap what it does on the right.
Deep Dive
Three activation functions you need to know.
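Here are the three side by side, as a minimal sketch:

```python
import math

def step(z):
    return 1.0 if z > 0 else 0.0     # hard on/off switch

def relu(z):
    return max(0.0, z)               # passes positives through, zeroes negatives

def sigmoid(z):
    return 1 / (1 + math.exp(-z))    # squashes any z into (0, 1)

for z in (-2.0, 0.5):
    print(f"z={z}: step={step(z)}, relu={relu(z)}, sigmoid={sigmoid(z):.3f}")
```

Notice the tradeoff: step gives a clean yes/no, ReLU keeps the magnitude of positive signals, and sigmoid gives a smooth, graded answer.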
Knowledge Check
Test your understanding.
This is the real building block of AI. Every neural network, from image classifiers to large language models, is made of neurons that compute exactly this: a weighted sum plus a bias, passed through an activation function. Stack millions of these in layers, train the weights, and you get the complex behavior we call intelligence.
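A classic illustration of why stacking matters: no single neuron can compute XOR ("exactly one input is on"), but two layers of step neurons can. The weights below are hand-picked for this sketch, not trained:

```python
def step(z):
    return 1 if z > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(x * w for x, w in zip(inputs, weights)) + bias)

def xor(x1, x2):
    # Layer 1: two neurons detect different patterns in the same inputs
    h1 = neuron([x1, x2], [1, 1], -0.5)   # fires if at least one input is 1 (OR)
    h2 = neuron([x1, x2], [1, 1], -1.5)   # fires only if both inputs are 1 (AND)
    # Layer 2: one neuron combines them: "OR, but not AND"
    return neuron([h1, h2], [1, -1], -0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))   # prints the XOR truth table
```

Each neuron is still just weighted sum + bias + activation; the new ability comes entirely from composing them.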