Entropy Calculator
Calculate Shannon entropy, joint entropy, and information content for probability distributions
H(X) = -∑ p(x) × log₂(p(x))
Shannon entropy measures uncertainty in a probability distribution
Probabilities must sum to 1.0
Binary entropy: H(p) = -p×log₂(p) - (1-p)×log₂(1-p)
Frequencies will be normalized to probabilities
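A minimal sketch of the computation behind these inputs, assuming base-2 logarithms (bits) and the normalization rule stated above; the helper names are illustrative, not the calculator's internals:

```python
import math

def shannon_entropy(frequencies):
    """Shannon entropy in bits from raw counts or probabilities.

    Inputs are normalized so they sum to 1, then H = -sum(p * log2(p));
    zero-probability events contribute nothing to the sum.
    """
    total = sum(frequencies)
    probs = [f / total for f in frequencies]
    return -sum(p * math.log2(p) for p in probs if p > 0)

def binary_entropy(p):
    """Binary entropy H(p) = -p*log2(p) - (1-p)*log2(1-p)."""
    if p in (0.0, 1.0):
        return 0.0  # a certain outcome carries no uncertainty
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(shannon_entropy([50, 50]))  # fair coin -> 1.0 bit
print(binary_entropy(0.9))        # biased 90/10 coin -> ~0.469 bits
```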
Calculator Output
Alongside the Shannon entropy itself, the calculator reports the maximum entropy log₂(n), the normalized entropy as a percentage of that maximum, the perplexity (2^H, the effective number of equally likely states), and the redundancy (the unused capacity). Results include a probability breakdown table listing each event, its probability, its contribution -p×log₂(p), and its share of H, plus a step-by-step solution: identify the probabilities, apply H = -∑ p × log₂(p), evaluate each term, and sum the contributions. Quick example presets are also provided.
Common Entropy Values (bits)
| Distribution | Entropy | Interpretation |
|---|---|---|
| Fair coin (50/50) | 1.000 | 1 bit per flip |
| Biased coin (90/10) | 0.469 | More predictable |
| Fair 6-sided die | 2.585 | log₂(6) bits |
| DNA bases (A,T,C,G) | 2.000 | 2 bits per base |
| English letters | ~4.11 | Non-uniform freq. |
| Byte (256 values) | 8.000 | 8 bits if uniform |
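As a quick check, the rows that follow from the formula alone can be reproduced directly (the English-letter figure depends on measured letter frequencies and is not recomputed here):

```python
import math

def H(probs):
    # Shannon entropy in bits of an explicit probability list.
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(round(H([0.5, 0.5]), 3))     # fair coin        -> 1.0
print(round(H([0.9, 0.1]), 3))     # biased 90/10     -> 0.469
print(round(H([1/6] * 6), 3))      # fair 6-sided die -> 2.585
print(round(H([0.25] * 4), 3))     # uniform DNA base -> 2.0
print(round(H([1/256] * 256), 3))  # uniform byte     -> 8.0
```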
About Entropy Calculator
What is Entropy?
Entropy in information theory is a measure of the uncertainty or randomness in a probability distribution. It was introduced by Claude Shannon in 1948 and is fundamental to data compression, cryptography, and machine learning.
Shannon Entropy Formula
For a discrete random variable X with probability distribution P:
H(X) = -∑ p(x) × log₂(p(x))
Where:
- H(X) is the entropy in bits
- p(x) is the probability of each outcome
- The sum is over all possible outcomes
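For a concrete evaluation, SciPy's scipy.stats.entropy computes the same quantity; passing base=2 gives the result in bits. This is one illustrative way to evaluate the formula, not necessarily how this calculator does it:

```python
from scipy.stats import entropy

# Example distribution; scipy.stats.entropy normalizes its input and
# defaults to the natural log, so base=2 is passed to get bits.
p = [0.5, 0.25, 0.125, 0.125]
print(entropy(p, base=2))  # 1.75 bits
```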
Key Properties
Entropy Bounds
- Minimum entropy: 0 (when one outcome has probability 1)
- Maximum entropy: log₂(n) (when all n outcomes are equally likely)
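Both bounds are easy to confirm numerically; a small sketch for n = 8 outcomes:

```python
import math

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

n = 8
print(H([1.0] + [0.0] * (n - 1)))    # one certain outcome -> 0.0 (minimum)
print(H([1 / n] * n), math.log2(n))  # uniform over 8      -> 3.0 = log2(8) (maximum)
```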
Additivity Property
For independent variables X and Y:
H(X,Y) = H(X) + H(Y)
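A numeric check of additivity, assuming independence so the joint probabilities are the products p(x)·p(y):

```python
import math
from itertools import product

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

px = [0.7, 0.3]   # X: biased coin
py = [1/6] * 6    # Y: fair die

# Joint distribution under independence: p(x, y) = p(x) * p(y)
pxy = [a * b for a, b in product(px, py)]

print(H(pxy))          # H(X,Y) ≈ 3.466
print(H(px) + H(py))   # H(X) + H(Y) ≈ 0.881 + 2.585 = 3.466
```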
Conditioning Reduces Entropy
H(X|Y) ≤ H(X)
Types of Entropy
1. Shannon Entropy
Measures the average information content per symbol.
2. Joint Entropy
H(X,Y) measures the total uncertainty of two variables together.
3. Conditional Entropy
H(X|Y) measures the remaining uncertainty in X given Y.
4. Mutual Information
I(X;Y) = H(X) + H(Y) - H(X,Y) measures the shared information.
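A small sketch tying these together for a hypothetical joint distribution of two correlated binary variables; the chain rule H(X|Y) = H(X,Y) - H(Y) gives the conditional entropy:

```python
import math
from collections import defaultdict

def H(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution p(x, y); summing rows/columns gives the marginals.
pxy = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

px, py = defaultdict(float), defaultdict(float)
for (x, y), p in pxy.items():
    px[x] += p
    py[y] += p

H_xy = H(pxy.values())                     # joint entropy H(X,Y) ≈ 1.722
H_x, H_y = H(px.values()), H(py.values())  # marginal entropies, 1.0 each
H_x_given_y = H_xy - H_y                   # conditional entropy H(X|Y) ≈ 0.722
I_xy = H_x + H_y - H_xy                    # mutual information I(X;Y) ≈ 0.278

# Conditioning reduces entropy: H(X|Y) <= H(X).
print(H_xy, H_x_given_y, I_xy)
```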
Common Entropy Values
| Distribution | Entropy (bits) |
|---|---|
| Fair coin (50/50) | 1.0 |
| Fair die (1/6 each) | 2.585 |
| Biased coin (90/10) | 0.469 |
| Certain outcome | 0 |
Applications
- Data Compression - Optimal encoding uses ≈H bits per symbol
- Cryptography - High entropy = hard to predict
- Machine Learning - Decision trees, information gain
- Communications - Channel capacity limits
- Statistical Mechanics - Thermodynamic entropy
Example Calculation
For a coin with P(heads) = 0.7, P(tails) = 0.3:
H = -[0.7 × log₂(0.7) + 0.3 × log₂(0.3)]
H = -[0.7 × (-0.515) + 0.3 × (-1.737)]
H = -[-0.360 + (-0.521)]
H = 0.881 bits
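The same result in code (a short sketch using base-2 logarithms):

```python
import math

p_heads, p_tails = 0.7, 0.3
H = -(p_heads * math.log2(p_heads) + p_tails * math.log2(p_tails))
print(round(H, 3))  # 0.881 bits
```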
Entropy vs Randomness
- High entropy = More uncertainty = More random
- Low entropy = Less uncertainty = More predictable
- Zero entropy = Completely deterministic