“Captain, we can see on our sensors that the alien ship is adrift.”
How often do you hear something like this in a sci-fi book, movie, or TV show?
There’s nothing wrong with that line of dialog; a softer, less technical story could use it without any concern. But it could be better – and with a rudimentary understanding of remote sensing technologies and how they’re used in present-day applications, you can bring that same grounding to your sci-fi writing.
Electromagnetic Spectrum
Remote sensing, by definition, is detecting and monitoring the radiation of an object or area at a distance. Radiation isn’t just the scary alpha/beta/gamma kind; here it means electromagnetic energy of any sort, characterized by wavelength and frequency. Keep the below diagram in mind – it’ll be helpful as we go through the different forms of remote sensing.

Optical Sensing
This is one of the oldest forms of remote sensing, with aerial photography from balloons dating back to the mid-nineteenth century. Aerial reconnaissance was a huge source of intelligence in both World War I and World War II, and famously Gary Powers was shot down over the Soviet Union in his U-2 in 1960 while collecting images of Soviet missile sites and other military installations.

Modern-day image collection in the visible spectrum is done via electro-optical sensors rather than film, save for legacy systems such as the wet-film camera on the still-flying U-2. It’s collected from both space and the air, and is one of the most-used forms of civil, commercial, and national security intelligence gathering. A charge-coupled sensor – much like the one in your phone’s camera – collects photons, which are converted into an image through processing.
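As a rough sketch of what drives an EO sensor’s image quality, the ground sample distance – the patch of ground one detector pixel covers – follows from simple pinhole-camera geometry. The numbers below are illustrative, not tied to any real system:

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance: the ground footprint of one detector pixel.
    Simple pinhole-camera geometry: GSD = altitude * pixel_pitch / focal_length."""
    return altitude_m * pixel_pitch_m / focal_length_m

# A hypothetical imaging satellite: 500 km altitude, 5-micron pixels, 10 m focal length
gsd = ground_sample_distance(500_000, 5e-6, 10.0)
print(f"GSD: {gsd:.2f} m per pixel")  # 0.25 m per pixel
```

Halve the altitude or double the focal length and the resolution doubles – which is why reconnaissance aircraft fly low and imaging satellites carry long telescopes.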

Electro-optical remote sensing is commonplace because the resulting images are in the same bands that our eyes see – they “make sense” without any additional processing, making analysis easier. It is also passive: an EO sensor doesn’t emit any radiation, so its use can’t be detected. Every smartphone has an EO sensor onboard, and EO satellite imagery powers Google and Apple Maps.
However, EO has some drawbacks as well. It can’t see through clouds or smoke, and it provides only a surface-level understanding of what it’s imaging – there’s no hidden information to be gleaned, because it uses the same basic technology as your Mark-1 eyeball. It’s also only usable during the daytime or, in a sci-fi setting, where there’s enough light to capture photons and create an image – i.e. near a star. This is why, when Star Trek puts a full-color, high-resolution image of a ship in deep space up on the screen, I laugh – that’s physically impossible with purely electro-optical methods.
Infrared
This one is incredibly common in our day-to-day lives – my 2015 Ford Escape’s backup camera uses it when it’s dark, and the baby monitors my wife & I used when our daughters slept in a crib let us see their movement in a completely blacked-out room.
But how does it work?
Infrared sensors work much like the electro-optical sensors described in the section above, but collect in infrared bands – EM energy with wavelengths longer than those of the color red, which our eyes are unable to see. There are multiple IR bands – near, mid, and far infrared – but for the purpose of this post I’ll treat them all the same.

This works great for weather satellites like GOES shown above, allowing for temperature and cloud recognition to help with forecasting. It is also key in astronomy – the James Webb Space Telescope uses IR sensors to detect stars, planets, and other stellar phenomena against the cold backdrop of deep space. It works well for military applications too: a UAV with an IR sensor can tell whether a tank is cold with its engine off or hot with it running. Even better, IR imagery works at night as well as in the day – thermal night-vision devices are a prime application.
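The hot-tank-versus-cold-tank trick comes down to blackbody physics: the warmer an object, the shorter the wavelength where its emission peaks. A back-of-the-envelope sketch using Wien’s displacement law (the temperatures here are illustrative, not real vehicle data):

```python
def peak_wavelength_um(temperature_k):
    """Wien's displacement law: wavelength of peak blackbody emission,
    lambda_peak = b / T, with b ~ 2898 micron-kelvins."""
    WIEN_B_UM_K = 2897.77
    return WIEN_B_UM_K / temperature_k

# A cold hull near ambient (~290 K) vs. a running engine deck (~350 K, illustrative)
print(f"Cold hull peaks near {peak_wavelength_um(290):.1f} microns")   # ~10.0 um
print(f"Hot engine peaks near {peak_wavelength_um(350):.1f} microns")  # ~8.3 um
```

Both peaks land squarely in the thermal IR bands, which is exactly where those sensors look.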
There are some tradeoffs, though. IR sensors collect less information than EO sensors in the visible bands, which makes the resulting images lower in resolution and sometimes lacking in key details. It also requires a great deal of cooling; the instrument itself must be kept at a low temperature to function. Still, it’s a key type of sensor with a wide variety of uses.
Radar
Radar is one of the most common forms of remote sensing, in both military (GMTI, satellite imagery, aircraft fire control) and civil (weather, FAA) applications. Self-driving cars now use it as well, and commercial companies such as Capella Space and Umbra sell radar satellite imagery that you can buy with a credit card. But how does it work?

Radar uses radio waves (remember the EM spectrum image!) to determine distance, direction, and radial velocity of objects. This is done by sending RF energy through an antenna at a target or area, then looking at the return to see what that RF energy bounced off of.
This could be as simple as an air traffic controller’s screen, which provides range and velocity of the aircraft flying around an airport, or as detailed as a complete image of a landscape such as the one from the ICEYE-X2 satellite below. Different radar types (S-band, X-band, etc) are used for different results.
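That radial-velocity measurement comes from the Doppler shift of the returned energy. A minimal sketch, with an assumed (hypothetical) radar frequency and target speed:

```python
def doppler_shift_hz(radial_velocity_ms, carrier_freq_hz):
    """Two-way Doppler shift seen by a monostatic radar: f_d = 2 * v * f / c.
    The factor of 2 is because both the outgoing and returning waves are shifted."""
    C = 3.0e8  # speed of light in m/s, rounded
    return 2 * radial_velocity_ms * carrier_freq_hz / C

# An aircraft closing at 250 m/s, seen by a hypothetical 3 GHz (S-band) radar
fd = doppler_shift_hz(250, 3e9)
print(f"Doppler shift: {fd:.0f} Hz")  # 5000 Hz
```

A few kilohertz of shift on a multi-gigahertz carrier is tiny, which is why radar signal processing has to be so precise.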

Radar is incredibly powerful – it can penetrate through clouds and treetops, and it can be used at any time of day. It does have drawbacks, however. Radar is limited by the propagation of EM energy twice over, compared to EO or IR: not only is there an r-squared factor in the denominator for the outgoing pulse, there’s another for the return, so received power falls off as one over r to the fourth.
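That two-way falloff is worth seeing in numbers. A minimal sketch comparing the range scaling only (real radar range equations include antenna gain, cross-section, and other terms omitted here):

```python
def power_falloff_ratio(near_range_m, far_range_m):
    """How much weaker a signal gets moving from near_range to far_range.
    Radar return scales as 1/r^4 (two-way spreading); a passive EO/IR
    signal from the same target scales as 1/r^2 (one-way)."""
    radar = (near_range_m / far_range_m) ** 4
    passive = (near_range_m / far_range_m) ** 2
    return radar, passive

# Doubling the range: radar return drops 16x, a passive signal only 4x
radar, passive = power_falloff_ratio(1.0, 2.0)
print(f"Radar: 1/{1/radar:.0f} of the power; passive: 1/{1/passive:.0f}")
```

This is why doubling a radar’s detection range requires sixteen times the transmit power, all else equal.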

In addition, radar is an active sensing system, where EO and IR are passive. An adversary can detect that you are radiating energy and either move or attack based on that information. Radar waves can also be jammed or spoofed (which is all I can say without getting myself in trouble!), and radar can require a great deal of power to operate. RF waves also do not propagate underwater, and have limited ability to go through solid matter. Still, radar is a key modern sensing method and isn’t going anywhere – just be aware of its limitations when using it in a technothriller or sci-fi novel.
Sonar
This is the one I’m the least familiar with, although I have gotten more up-to-speed as I’ve been writing (and editing) Crush Depth.
The only kind of waves that propagate over distance underwater are sound waves. This is how dolphins and whales communicate with each other, via “songs” that can be heard from great distances. Submarines (and other passive sensing systems not discussed here!) use the same medium to detect and communicate deep under the ocean’s surface.

Sonar works much like radar – sound waves go out, and a receiver (a hydrophone rather than an antenna) picks up the returns to determine range, direction, and radial velocity. It can also be used for communication – Tom Clancy’s The Hunt For Red October shows two submarines talking with each other via sonar pings. Active sonar gives away your position, but it’s also the only practical way to “see” under the water. One of Crush Depth‘s main plot threads is an experimental quantum sensing system (which I won’t get into here) that offers a passive detection method that works underwater.
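Active sonar ranging itself is simple round-trip timing. A minimal sketch, using a typical (but variable) speed of sound in seawater:

```python
def sonar_range_m(round_trip_s, sound_speed_ms=1500.0):
    """Active sonar ranging: distance = sound speed * round-trip time / 2.
    ~1500 m/s is a typical speed of sound in seawater; the real value varies
    with depth, temperature, and salinity, which complicates actual sonar work."""
    return sound_speed_ms * round_trip_s / 2

# A ping whose echo returns after 4 seconds
print(f"Target at roughly {sonar_range_m(4.0):.0f} m")  # 3000 m
```

The seconds-long round trips (versus microseconds for radar) are part of why submarine engagements unfold so much more slowly than air battles.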
Lidar
Lidar is just like radar, but uses visible light rather than RF waves to generate an image. It offers higher spatial resolution than radar but shorter range, and it doesn’t work as well in clouds or other adverse weather. The most famous applications of lidar are self-driving cars such as Waymo’s, which lean heavily on lidar sensors, and the discovery of huge abandoned Maya cities in Guatemala using airborne lidar.
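The ranging math is the same round-trip timing as sonar, just at the speed of light, which shows why lidar hardware needs nanosecond-class timing. A minimal sketch:

```python
def lidar_range_m(round_trip_s):
    """Lidar time-of-flight ranging: distance = c * round-trip time / 2."""
    C = 3.0e8  # speed of light in m/s, rounded
    return C * round_trip_s / 2

# Light returning after 1 microsecond puts the target ~150 m away;
# a 1-nanosecond timing error shifts the answer by 0.15 m
print(f"{lidar_range_m(1e-6):.0f} m")  # 150 m
```

That sub-meter sensitivity to timing is what gives lidar its high spatial resolution – and what makes the electronics demanding.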

Hyperspectral
This is the newest form of remote sensing, and a lot of very smart people across the government, academia, and private industry are still figuring out the best ways to utilize it.
Hyperspectral imagery is passive, like EO or IR sensing, but takes in data across many narrow bands of the EM spectrum for each pixel of an image. This allows for the detection of specific elemental and molecular emissions, much like how astronomers determine the chemical composition of distant stars from their spectra. The image below shows that hyperspectral sensing can, with processing, distinguish the composition of two gases being released.

This is huge for remote sensing. For example, an oil company can detect the spectral signature of an oil field via a hyperspectral image, or a space mining company could send a satellite with a hyperspectral imager to the asteroid belt to look for rare metals. It’s still a very new technology, however, and there’s no legacy like with EO and radar – few analysts know how to use it, and the processing software isn’t quite as robust yet.
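One common way that detection works is comparing each pixel’s spectrum against a laboratory reference spectrum, for example with the spectral angle mapper technique. The spectra below are made-up five-band toy vectors purely for illustration, not real material signatures:

```python
import math

def spectral_angle(pixel, reference):
    """Spectral angle mapper: the angle between a pixel's spectrum and a
    reference spectrum, treated as vectors. A smaller angle means a closer
    match, largely independent of illumination brightness."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    return math.acos(dot / (norm_p * norm_r))

# Toy 5-band spectra, invented for illustration
target_reference = [0.12, 0.30, 0.45, 0.40, 0.22]
pixel_a = [0.13, 0.29, 0.46, 0.41, 0.21]  # close match to the reference
pixel_b = [0.40, 0.35, 0.20, 0.10, 0.05]  # a different material
print(spectral_angle(pixel_a, target_reference) <
      spectral_angle(pixel_b, target_reference))  # True
```

Real hyperspectral cubes have hundreds of bands per pixel, but the matching idea is the same – which is also why the processing load, and the analyst training, is so demanding.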
I hope you enjoyed this, and are able to incorporate the different methods of remote sensing into your writing. For example, the line “Captain, we can see on our sensors that the alien ship is adrift” could be an “electro-optical sensor” if the “alien ship” is near a light source, or “infrared sensor” if it is not. For hard sci-fi, accuracy matters!