Most of my discussion to this point could be characterized as “science” – What am I trying to measure? What are some of the issues? The astute reader will notice I’ve left out many things: clouds for one. Will I be affected by multi-layer reflections? Does it matter?
The term the atmospheric scientists use is “optical depth” – which is simply the unitless natural log of the ratio of incident to transmitted power. This parameter is related to absorbance. The literature uses τ as the symbol for optical depth; I’ll use δ to avoid confusion with the use of τ for time:

δ = ln(P_incident / P_transmitted)
Or, expressed another way:

δ = ∫₀^Z α(z) dz = ∫₀^Z σ n(z) dz

where Z is the total optical path length, α is the attenuation coefficient (in nepers – similar to bels), σ is the effective absorption cross-section area, n(z) is the number density of the material at z, and N is the mean number density over the total path length.
Assuming linear relationships (perhaps not a valid assumption), one obtains:

δ ≈ σ N Z
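As a numeric sanity check on the linear form, a minimal sketch – the cross-section and number-density values below are made-up illustrative numbers, not measurements:

```python
import math

def optical_depth(sigma_m2, n_per_m3, path_m):
    """Optical depth under the linear (well-mixed) assumption: delta = sigma * N * Z."""
    return sigma_m2 * n_per_m3 * path_m

def transmittance(delta):
    """Fraction of incident power that survives the path: P_t / P_0 = exp(-delta)."""
    return math.exp(-delta)

# Illustrative (made-up) values: cross-section, mean number density, 10 km path
delta = optical_depth(sigma_m2=1e-26, n_per_m3=1e22, path_m=10_000.0)
print(delta)                 # 1.0 (an optical depth of unity)
print(transmittance(delta))  # ~0.368 -> about 37% of the photons get through
```

An optical depth of 1 attenuates the beam to 1/e of its incident power; the two functions together give the conversion between what the instrument actually sees (photons out vs. photons back) and the δ the scientists want.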
But I’ve already gone over that.
From my point of view, optical depth is a measure of attenuation – loss of optical intensity regardless of the cause. Perhaps absorption – which is what I’m trying to measure; perhaps scattering or other factors. The point is that the only things I can actually measure are the number of photons transmitted and the number of photons received. All else is conjecture.
From this measurement, scientists will try to infer measurement-environment factors such as pressure, temperature, humidity, spectral properties of the atmosphere, geographic features, instantaneous laser wavelengths, and other such parameters.
Hm-m-m …
Let’s review some idealized basic measurement platform parameters: The instrument will be based on an airplane travelling 200 m/s (about 450 mph) in the x-direction at an altitude of 10 km (about 33,000 ft) in the z-direction. I’ll assume there is no y-axis variation. Now I know the aircraft altitude varies in all three directions – I’ve seen some specifications that altitude has an in-flight uncertainty of 5 m (I imagine it’s somewhat less than that while landing …)
The principal question: How much spatial resolution is necessary for aircraft-based measurements? The round-trip time of the optical path is about 67 µs; the beam diameter at the reflecting surface is about 10-15 ft (3-4 m). If I collect data every beam diameter, I have roughly 20 ms/sample. This is an effective sample rate of 50 Hz. If I collect 8 hours of data (not likely) at 50 samples/sec, I’ll have a total of 1.44 million data points.
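The arithmetic above can be checked in a few lines – the speed, altitude, and beam diameter are the values from the text:

```python
c = 3.0e8              # speed of light, m/s
altitude_m = 10_000.0  # 10 km flight altitude
speed_mps = 200.0      # aircraft ground speed in x
beam_diam_m = 4.0      # beam diameter at the reflecting surface

round_trip_s = 2 * altitude_m / c           # ~66.7 us optical round trip
sample_period_s = beam_diam_m / speed_mps   # one sample per beam diameter: 20 ms
sample_rate_hz = 1 / sample_period_s        # 50 Hz
samples_8h = sample_rate_hz * 8 * 3600      # 1.44e6 data points in 8 hours
print(round_trip_s * 1e6, sample_rate_hz, samples_8h)
```

Note the round trip (67 µs) is some 300 times shorter than the 20 ms sample period, so each sample sees an essentially stationary measurement geometry.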
The measurement is inherently an integral, which suggests that a well-controlled integration circuit be the initial signal processing network. A current-input integrator can be considered a “transimpedance” amplifier: over a fixed integration period, its transfer function has units of V/A, which is equivalent to impedance (V/A ≡ Ω).
This integration circuit is implemented in hardware with software control of the integration period.
The photodiode current dq/dt is (almost) a linear function of the rate of photons received; the output voltage is a linear function of the accumulated charge and inversely proportional to the capacitance C.
I can use this as:

V_out = (1/C) ∫ i dt ≈ i·T/C (for a constant current i over integration period T)
So if my signal intensity is such that the photodiode current is, say, 100 pA and my integration period is 20 ms, the output voltage is set by the proper selection of the integrating capacitor. Since this is a low-end signal, this should be a low-end voltage. Define this as 5% of the full-scale range of a 1 V ADC (50 mV) … a capacitor of 40 pF works. However, this only allows a limited dynamic range: 2 nA would saturate the ADC.
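A quick sketch of the capacitor selection and the resulting headroom – the values are those from the text, and `pick_capacitor` is just my own helper name:

```python
def pick_capacitor(i_a, t_s, v_target):
    """Integrating cap such that current i over period t yields v_target: C = i*t/V."""
    return i_a * t_s / v_target

def output_voltage(i_a, t_s, c_f):
    """Integrator output for a constant input current: V = i*t/C."""
    return i_a * t_s / c_f

T = 20e-3                              # 20 ms integration period
C = pick_capacitor(100e-12, T, 0.05)   # 100 pA -> 5% of a 1 V ADC
print(C)                               # 4e-11 F = 40 pF
print(output_voltage(2e-9, T, C))      # 1.0 V: 2 nA just saturates the 1 V ADC
```

With a fixed 40 pF capacitor the usable range is only 100 pA to 2 nA (a factor of 20); extending the dynamic range means switching capacitors, shortening the integration period, or both.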