Prior to the invention of fire alarms, the threat of a building burning down was not only very real but very common. Massive blazes that engulfed major cities like New York, Boston, Philadelphia, and Chicago were the norm rather than the exception. Within the past two centuries, however, technology has developed to the point where even a slight increase in smoke particles within a room can be detected.
In the Beginning
The earliest fire alarm systems did not use much technology at all; runners and patrols would monitor the streets at night for the tell-tale red glow that signaled a fire. New York City famously used a patrol of eight men each night, a group often described as the oldest continuously operated fire-fighting unit in the world. Rattle alarms that could quickly wake citizens were employed, and later telegraphs allowed firefighters to communicate the location and size of a conflagration back to headquarters.
During the 1970s, battery-powered smoke alarms became available to the public, and they proved not only cheap but resoundingly effective. A light-based alarm of this kind works by passing a beam of light through a sensing chamber; when smoke particles enter the chamber, they scatter or block the beam, and the resulting change at a light sensor indicates the presence of smoke. The alarm then sounds a loud wail to wake or notify anyone within earshot.
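The detection principle described above amounts to a simple threshold check on a light sensor. The sketch below illustrates that idea; the sensor reading, threshold value, and function names are illustrative assumptions, not taken from any real device, which would implement this in analog circuitry or firmware:

```python
# Illustrative sketch of light-scatter smoke detection (hypothetical values).
SCATTER_THRESHOLD = 0.15  # assumed fraction of emitted light reaching the sensor


def smoke_detected(scatter_level: float) -> bool:
    """Return True when scattered light at the sensor exceeds the threshold.

    In clear air almost none of the beam reaches the sensor; smoke particles
    scatter light toward it, pushing the reading above the threshold.
    """
    return scatter_level >= SCATTER_THRESHOLD


if __name__ == "__main__":
    for level in (0.02, 0.30):  # clear air, then smoky air
        print(level, "->", "ALARM" if smoke_detected(level) else "clear")
```

A real detector also has to debounce the signal and compensate for dust and aging, but the core decision is this single comparison.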
While smoke detectors remain the industry standard, advanced systems protect high-profile buildings and accommodate deaf individuals. Alarms for the deaf work in conjunction with sound-producing smoke alarms to send a vibration through the bed of a deaf individual, waking them to the danger. A state-of-the-art fire alarm system today uses air-sampling and chemical sensors to measure everything from smoke levels to carbon monoxide concentration to unusual heat in one particular area of a building, then communicates the threat digitally to monitoring systems and mobile devices.