But then came Galileo Galilei. In addition to discovering the phases of Venus, Jupiter’s large moons, and more, he noted that his telescope did not simply magnify — it revealed the invisible. In 1610, Galileo coined a new term when he called the brightest stars below naked-eye visibility “7th magnitude.”
The telescope, therefore, demanded an expansion of Hipparchus’ magnitude system, but not only on the faint end. Observers noted that 1st-magnitude stars varied greatly in brightness. Also, to assign a magnitude to the brightest planets, the Moon, and especially the Sun, scientists would have to work with negative numbers.
In 1856, English astronomer Norman R. Pogson suggested astronomers calibrate all magnitudes so that a difference of 5 magnitudes corresponds to a brightness ratio of exactly 100. (For example, a 1st-magnitude star is 100 times brighter than a 6th-magnitude one.) We still use Pogson’s formula today.
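Written out, Pogson’s formula ties a magnitude difference to a ratio of measured brightnesses (fluxes); the symbols m and F below are the conventional labels, not the article’s:

\[
m_1 - m_2 = -2.5\,\log_{10}\!\left(\frac{F_1}{F_2}\right)
\qquad\text{so}\qquad
\frac{F_1}{F_2} = 100^{(m_2 - m_1)/5}.
\]

Each single magnitude step therefore corresponds to a brightness factor of \(100^{1/5}\), about 2.512.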
Astronomers routinely use two main types of magnitude to describe the same object. “Apparent magnitude” describes how bright an object looks. Back in the day, observers measured apparent magnitudes by eye. Now ultrasensitive CCD cameras provide measurements with accuracies of 0.01 magnitude.
With “absolute magnitude,” astronomers indicate how bright an object really is, a measure of its luminosity. Two things determine this number: apparent magnitude and distance. Absolute magnitude is the brightness an object would have if it were exactly 10 parsecs (32.6 light-years) from Earth. So any object closer than 32.6 light-years has an apparent magnitude brighter than its absolute magnitude. For any object farther away, the absolute magnitude is the brighter of the two.
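The rule behind those last two sentences is the distance modulus. Here is a minimal Python sketch; the function name and the sample values are illustrative, not from the article:

```python
import math

LIGHT_YEARS_PER_PARSEC = 3.2616  # 1 parsec is about 3.26 light-years

def absolute_magnitude(apparent_mag, distance_ly):
    """Absolute magnitude via the distance modulus:
    M = m - 5 * log10(d / 10), with d in parsecs."""
    distance_pc = distance_ly / LIGHT_YEARS_PER_PARSEC
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# At exactly 32.6 light-years (10 parsecs), apparent and absolute magnitude match.
print(absolute_magnitude(5.0, 32.6))   # ~5.0
# A closer object ends up with an absolute magnitude fainter (numerically larger)
# than its apparent magnitude, just as described above.
print(absolute_magnitude(5.0, 10.0))   # ~7.6
```

— Michael E. Bakich, Senior Editor, Astronomy magazine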