Having the right information can save hours of work and frustration, but a lot of this valuable knowledge is not found in textbooks, taught in classes, or easily located by searching online sources. Much of this knowledge is gained through experience and trapped in the minds and lab notebooks of people working in the world of photonics.
Thorlabs is on a mission to collect these tips, tricks, guidelines, and practical techniques into a book of knowledge we call Insights. This collection is always growing, so check back soon to see what new Insights have been added.
Photonics is the study and use of light. The word photonics is based on the "photon," which is a single particle of light. This is similar to electronics, where electrons are the single particles of charge that make up electric current.
In photonics, photons are the single particles of energy that make up light. The amount of energy carried by a photon depends on its color (wavelength). For example, a laser pointer that outputs 1 mW of red (640 nm) light provides about 3 × 10¹⁵ photons/s. For comparison with electronics, a power supply that provides 1 A of current delivers about 6 × 10¹⁸ electrons/s.
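These rates are easy to verify. The short Python sketch below recomputes both figures from standard physical constants, using the 1 mW, 640 nm example above:

```python
# Photon rate of a 1 mW, 640 nm laser pointer vs. the electron rate of a
# 1 A power supply. A quick check of the figures quoted above.
h = 6.626e-34   # Planck constant (J*s)
c = 2.998e8     # speed of light (m/s)
e = 1.602e-19   # elementary charge (C)

wavelength = 640e-9   # red light (m)
power = 1e-3          # laser output power (W)

photon_energy = h * c / wavelength   # energy per photon (J), ~3.1e-19 J
photon_rate = power / photon_energy  # ~3 x 10^15 photons/s
electron_rate = 1.0 / e              # ~6 x 10^18 electrons/s at 1 A

print(f"{photon_rate:.1e} photons/s")
print(f"{electron_rate:.1e} electrons/s")
```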
Light is generated by a variety of sources. Some come from nature, like the sun, fire, or bioluminescence (lightning bugs). Manufactured sources include light bulbs, LEDs, and lasers.
Much like wires are used to transport electric current, photonics uses optical fiber to transport light from one location to another.
Similar to electronics using resistors and capacitors to modify the current flow through a circuit, photonics uses optics like lenses, mirrors, and prisms to direct and modify light paths.
Almost all analysis of light is done with the same measurement equipment used in electronics, but a device is required to first convert the photons into electrical current.
Common uses for photonics include measuring distance (laser radar), transmitting and receiving information (telecommunications), imaging objects that are difficult to see with the eye alone (microscopes, endoscopes, and borescopes), and sensing, such as measuring the amount of oxygen in the blood (pulse oximeters) and the quality of the air around us (particle size and trace gas detection).
Figure 2: More than half the total applied force (FTotal) holds the object, since L1 > L2. The height of the left leg of this CL2 clamp is variable to compensate for the object's height. This allows the clamp's top surface and the mounting surface to be made parallel.
Figure 1: Less than half the total applied force (FTotal) holds the object, since L1 < L2. The clamp illustrated above is the CL5A.
Clamped objects can be fairly easy to move when the torqued screw in the clamp's slot is positioned too far from the object. Correct positioning of the screw protects clamped objects from being knocked out of position.
To maximize the clamping force, position the screw as close as possible to the object.
This works since clamps like CL5A and CL2 (Figures 1 and 2, respectively) divide the torqued screw's applied force (FTotal) between two points.
Clamping force F2 is applied to the object. The value of F2 is a percentage of FTotal and depends on L1 and L2, as described below. The remainder (F1) of the total force is applied through the opposite end of the clamp.
The following equations can be used to calculate the two applied forces, where L1 is the distance between the screw and the clamp's other contact point, and L2 is the distance between the screw and the object:
Force Applied to Object: F2 = FTotal × L1 / (L1 + L2)
Force Applied to the Other Contact Point: F1 = FTotal × L2 / (L1 + L2)
These equations show that the clamping force on the object increases as the distance between the object and screw decreases. The force supplied by the torqued screw is evenly divided between F1 and F2 when L1 and L2 are equal.
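A minimal sketch of this lever relationship, assuming L1 is the distance from the screw to the clamp's far contact point and L2 the distance from the screw to the object (the function name is ours, chosen for illustration):

```python
def clamp_forces(f_total, l1, l2):
    """Divide the screw's total force between the far contact (F1) and
    the clamped object (F2) by balancing torques about the screw."""
    f2 = f_total * l1 / (l1 + l2)  # clamping force on the object
    f1 = f_total * l2 / (l1 + l2)  # force on the clamp's other contact point
    return f1, f2

# Screw centered between the contacts: force divides evenly.
print(clamp_forces(100.0, 25.0, 25.0))  # (50.0, 50.0)

# Screw close to the object (small L2): most of the force holds the object.
print(clamp_forces(100.0, 40.0, 10.0))  # (20.0, 80.0)
```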
Note that maximizing the clamping force also requires both the top surface of the clamp and the area it contacts on the object to be parallel to the mounting surface, as depicted in Figures 1 and 2.
If the tangent at the interface between the clamp and object is not parallel to the mounting surface, the force applied to the object will be divided between pressing it into the mounting surface and pushing it across that surface. The force directed along the mounting surface may or may not be sufficient to translate the object.
To accommodate different object heights, clamps like the CL2 have one threaded, variable-length leg, which is shown on the left in Figure 2. The number of threads between the clamp and mounting surface should be adjusted to compensate for the height of the object and to keep the clamp's top surface level with the table.
Figure 3: The construction of a Nexus table / breadboard includes a (1) top skin, (2) bottom skin, (3) side finishing trim, (4) side panels, and (5) honeycomb core. The stainless steel top and bottom skins are 5 mm thick.
Figure 5: Torqueing the screw creates a force that pulls up on the table's top skin. The lifted skin tilts the mounting surface and can induce angular deviation of the object. This effect is exaggerated in the above image for illustrative purposes.
Figure 4: A standard clamping fork, such as the CL5A, contacts the table along only one edge. The opposite edge is in contact with the object to be secured. A bridge forms between the two. The screw that applies the clamping force is not shown.
Figure 6: The POLARIS-CA1/M clamping arm has a slot that accepts a mounting screw, a separate screw that applies a clamping force to an installed post, and identical top and bottom surfaces. Since a nearly continuous track around the surface of the clamping arm is in contact with the mounting surface, clamping arms cause negligible bridging effects.
Clamping forks are more rigid than the mounting surface of composite optical tables. It might be expected that the spine of the clamping fork would bend as the torque on the screw is increased. Instead, the screw pulls the skin of the table up and out of flat before the clamping fork deforms. For this reason, clamping forks should be used with care when securing components to optical tables. Clamping arms, discussed in the following section, are alternatives to clamping forks that are less likely to deform the table's mounting surface.
Optical Table Construction
Optical tables and breadboards with composite construction (Figure 3) are designed to be rigid while providing vibration damping. The 5 mm thick, stainless steel top skin is manufactured to be flat, but a localized force can deform it. When the top skin is deformed, optical components will not sit flat, and optical system alignment and performance can be negatively affected.
Clamping Forks
Standard clamping forks are installed with one edge placed on the table's surface and the opposite edge on the object (Figure 4). Between these two edges, there is clearance between the bottom of the clamp and the surface of the table. This bridge makes it possible to use a single screw to both secure the clamp to the table and exert a holding force on the object.
When the clamp is secured by torqueing the screw, the screw pulls up on the top skin of the table (Figure 5).
As the torque on the screw increases, the top skin of the table rises. Pulling up on the table surface not only risks permanently damaging the table, it can also disturb the alignment of the optical component the clamp is being used to secure: by lifting the table's skin, the screw tilts the mounting surface under the clamped object.
Clamping Arms
Clamping arms, such as the POLARIS-CA1/M, shown in Figure 6, are designed to secure a post while minimally deforming the mounting surface.
The clamping arm in Figure 6 differs from clamping forks in two significant ways. One is the surface area that makes contact with the optical table, which is highlighted in red, and the other is the method used to secure the post.
The area in contact with the optical table makes a nearly continuous loop around the base of the clamp. The contact area is flat and flush with the table when the clamp is installed. The only break in the loop is a narrow slot in the vise used to grip the post.
This design uses two screws, instead of the clamping fork's single screw. One screw (not shown) secures the clamp to the table, and the other (indicated) is tightened to grip the post. Since one screw is not required to perform both tasks, it is not necessary for this clamping arm to form a bridge between the clamped object and the optical table.
Although the contact area is a loop, and not a solid surface, this clamp causes negligible distortion of the mounting surface. This is due to the open area inside the contact surface being narrow and surrounded by the sides of the clamp, which resist the force pulling up on the table.
Figure 8: Install washers before inserting bolts into slots to protect the slot from damage. The rounded, smooth side of the washer should be placed against the slot, and the rough, flat side should be in contact with the bolt head. The smooth surface is designed to translate easily across the anodized surface, without harming it. The BA2 base is illustrated.
Figure 7: The diameter of the washer is 35% larger than that of the bolt head. This results in more than a sixfold increase in overlap area with the slot of a BA2 base. By distributing the force of the bolt over a larger area, the washer helps prevent gouging of the slot.
The head of a standard cap screw is not much larger than the major diameter of the thread (Figure 7). For example, a ¼-20 screw has a head diameter between 0.365" and 0.375" and the clearance hole diameter for the threads is 0.264".
When the screw is tightened directly through the clearance hole to secure the device, the force is applied to the edge of the through hole, often cutting into the material (Figure 7).
Once the material is permanently deformed, the screw head will tend to fall back into the gouged groove, pulling the device back to that location when fine adjustments are attempted.
A device with a circular through hole is not meant to translate around the screw thread, so this deformation is not usually a problem.
However, a slot should allow the device to be secured anywhere along its length for the lifetime of the part. A washer distributes the force away from the slot edge, decreasing the chance of deforming the slot and extending the lifetime of the part. Figure 7 illustrates the difference a washer can make. The contact area between the slot of a BA2 base and a 0.27" diameter cap screw is 0.010 in². When a 0.5" diameter washer is used, the contact area is 0.064 in², which is over six times larger.
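Using the two contact areas quoted above, a short sketch shows how much the washer lowers the pressure on the slot edge (the 500 lbf preload is a hypothetical value, assumed only for illustration):

```python
# Pressure on the BA2 slot with and without a washer, at equal bolt tension.
preload = 500.0   # bolt preload in lbf (hypothetical value)
a_screw = 0.010   # in^2, bare 0.27" diameter cap screw head on the slot
a_washer = 0.064  # in^2, with a 0.5" diameter washer

print(f"bare screw head: {preload / a_screw:.0f} psi")
print(f"with washer:     {preload / a_washer:.0f} psi")
print(f"area ratio:      {a_washer / a_screw:.1f}x")
```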
When using a Thorlabs washer, there are two distinct sides (Figure 8). One side is flat and rough and the other is curved and polished. The curved and polished side should be placed against the device, which has an anodized surface.
As the screw tightens, the screw head can force the washer to spin against the anodized coating.
If the flat side is pressed down against the anodization, the friction created by the rough flat side can scratch the anodized aluminum. If the curved side faces down instead, its smoother surface produces less friction and fewer scratches, preserving the appearance of the device.
Figure 9: The DC offset of a signal is its average value. Since the blue curve (AC Only) has an average amplitude of zero, it has a zero DC offset. The red signal (AC and DC) is identical to the blue, except the red signal has a non-zero DC offset. A DC coupling would pass the red signal unchanged. An AC coupling would remove the DC offset and attenuate low-frequency components of the signal.
When an instrument offers a choice between AC and DC coupled electrical inputs, it is not unusual for the DC coupling to be the better option for a modulated input signal.
AC and DC Couplings
AC and DC couplings are interfaces between the input signal and the rest of the instrument's circuitry.
A DC coupling, also called a direct coupling, is essentially a wire connected to the signal input. This conductive coupling transmits all of the signal's frequency components, DC as well as AC. The red curve in Figure 9 has a non-zero DC component.
In an AC coupling, the key feature is a capacitor placed in series with the signal input. The capacitor functions as a high-pass filter and is sometimes called a blocking capacitor. AC couplings strongly attenuate the DC and low-frequency signal components. This capacitive coupling is used to remove the DC offset from the input signal, so that only AC components are passed. The blue curve in Figure 9 has only AC frequency components.
Use the DC Coupled Input When Possible
There are many reasons to prefer the DC coupled input. Its low-frequency response is very good, it allows the DC component of the signal to be monitored along with the AC, and it does not cause signal distortion since it does not affect the frequency content of the signal.
Use of the DC coupled input is recommended unless the DC offset is large or the filtering provided by the AC coupled input is required. One problem with a large DC offset is that it can reduce the resolution of the instrument to unacceptably low levels. In extreme cases, DC offsets can cause clipping and saturation effects.
Note that using the DC coupled input does not guarantee a signal free of distortion. Distortion can occur due to other reasons, such as insufficient device bandwidth or impedance mismatch at the termination.
Figure 11: Some modulated signals, including the blue curve plotted above, have no DC component, but they do have non-negligible low-frequency components. When this signal is high-pass filtered by an AC coupling, the resulting signal is distorted. The green curve is one example of this.
Figure 10: The frequency response magnitude plotted above models a capacitor-based high-pass filter. Its cutoff frequency (Fc) is 35 Hz, and it was used to filter the signal plotted in Figure 11. That signal has a repetition rate of 200 Hz.
Reasons to Use the AC Coupled Input
By rejecting the signal's DC component, AC coupling can reduce the total amplitude of the signal. This can increase the measurement resolution provided by the instrument, as well as overcome saturation and clipping problems. AC coupling provides good results when information is carried by high-frequency signal components and low-frequency components are not of interest. AC coupling can also be preferred when the application does not tolerate DC signal components, as is the case for some telecommunications applications.
When Using the AC Coupled Input
If AC coupling is used, it is important to keep in mind that this coupling acts as a high-pass filter and affects the frequency content of the signal.
As illustrated by Figure 10, this coupling does not just remove the DC offset, it can also attenuate low frequency components that may be of interest. Due to this, AC coupling can result in signal distortion. To illustrate the effects of high-pass filtering, Figure 11 plots a binary signal, with 200 Hz repetition rate, before and after it is filtered by the high-pass filter with 35 Hz cutoff frequency (Fc).
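The distortion described above can be reproduced with a simple discrete-time sketch of a first-order RC high-pass filter. The 35 Hz cutoff and 200 Hz repetition rate match the figures; the sample rate and the 0-to-1 signal levels are our assumptions for illustration:

```python
import math

# First-order RC high-pass filter (fc = 35 Hz) applied to a 200 Hz square
# wave with a DC offset, mimicking an AC-coupled input. Sketch only; a real
# AC coupling also depends on the input impedance it drives.
fc = 35.0           # cutoff frequency (Hz)
fs = 100_000.0      # sample rate (Hz), assumed
dt = 1.0 / fs
rc = 1.0 / (2.0 * math.pi * fc)
alpha = rc / (rc + dt)

t_end = 0.05        # 50 ms, i.e. ten periods of the 200 Hz signal
x = [1.0 if (n * dt * 200.0) % 1.0 < 0.5 else 0.0
     for n in range(int(t_end * fs))]        # 0/1 square wave, 0.5 DC offset

y = [x[0]]
for n in range(1, len(x)):
    y.append(alpha * (y[-1] + x[n] - x[n - 1]))  # standard discrete RC high-pass

dc_in = sum(x) / len(x)
dc_out = sum(y) / len(y)
print(f"input DC level:  {dc_in:.3f}")   # 0.5 offset present at the input
print(f"output DC level: {dc_out:.3f}")  # much smaller: the offset is blocked
```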
AC-coupled, digital telecommunications signals mitigate this problem by ensuring the signals are DC balanced, so that they have no DC offset. If the signals were not DC balanced, a series of ones could cause a sustained high signal level. This would introduce a non-zero DC level that would cause the signal to be affected by the capacitive filtering. The result could be bit errors due to high states being incorrectly read as low states.
Figure 12: The components shown above are joined using threaded interfaces. Since unscrewing the fiber connector could unintentionally loosen connections between the other components, Thorlabs suggests applying epoxy to the other two interfaces to immobilize them.
Fiber collimators are often used to introduce light into an optical setup from a fiber-coupled source. Thorlabs offers a variety of fiber collimator packages; some provide only a smooth barrel (like the triplet collimators), while others have a metric thread at the end of the barrel (like the asphere collimators).
For both packages, Thorlabs typically suggests using an adapter with a nylon-tipped setscrew that holds the barrel against a two-line contact.
Adapters for the external thread, such as the AD1109F, are available to allow the user to thread the fiber collimator into a mount.
However, the use of these adapters results in a stack up of threaded interfaces (threaded fiber connector, threaded collimator, and threaded adapter). As a result, it is possible that unscrewing the fiber connector could inadvertently loosen another thread interface and create an unknown source of instability in the setup.
For this reason, Thorlabs suggests epoxying the threaded fiber collimators into the threaded mounts if that mounting mechanism is preferred.
Figure 2: Top view. The three contact locations between the post and post holder, highlighted in red, prevent the post from translating or rotating around the X or Y axes. Friction resists the post's translation and rotation around the Z axis.
Figure 1: A channel with sharp edges is machined into the inner bore of Thorlabs' post holders.
Figure 3: A broach, such as the one illustrated above, has a row of teeth, each taller than the previous. With the teeth in contact with the material, a machine pulls the broach across the surface. Each tooth removes a small amount of material, and the depth of the channel created by the broach equals the overall difference in tooth height.
All of Thorlabs' post holders include a channel, with straight parallel edges, running the length of the inner bore (Figure 1). Tightening the setscrew pushes the post against the two edges of the channel (Figure 2). Since the edges of the channel are separated by a wide distance, approximately half the inner diameter of the post holder, the seating of the post against the channel's edges is stable and repeatable.
Contact with the two edges of the channel eliminates four of the post's six degrees of freedom, since the edges block the post from translating along or rotating around either the X or Y axis. In addition, the friction between the side of the post and the edges of the channel resists the post's movement along and around the Z axis, which are the post's two remaining degrees of freedom.
Without the channel in the inner bore, there would be a single line of contact between the post and post holder. The position of the post would not be stable, since the post would be free to rotate around the Z axis and shift along the Y axis.
Even if this instability resulted in submicron-scale unwanted shifts in each component's position in an optical setup, the cumulative effect could have a significant negative impact on system performance. In addition, more frequent realignment of the system could be required.
Broaching
The channel's edges must be straight and free of bumps and roughness to hold the post stable. These post holders have straight, sharp edges when examined on a micron scale. If the edges were not straight, the post might rock in the holder, and it might not be possible to position the post repeatably.
The smooth, straight edges of the channel are achieved using a machining process called broaching. A broach (Figure 3) resembles a saw whose teeth increase in height along its length.
As the broach is pulled along a surface, each tooth removes a small amount of material. The total depth of the channel cut by the broach equals the overall difference in tooth height (H2 - H1).
Compared with other approaches for creating channels, broaching is preferred due to its ability to provide straight profiles while being compatible with high-volume production.
Figure 7: Pads machined into Thorlabs' devices improve their stability when bolted in place. The pads are highly flat and project above the undercut region, which is highlighted red. The undercut limits the contact area with the table or breadboard.
Figure 6: The mounting platforms of stages and other devices do not feature pads.
An undercut is machined into the bottom surface of bases like the BA2 (Figures 4 and 5). The undercut creates feet, which are called pads. For maximum stability, the base should be oriented with its pads in contact with the table or breadboard.
The top surface of the base does not have an undercut and is the intended mounting surface for components.
Mounting the base upside down could cause it to rock on the table or breadboard or to exhibit other mechanical instability.
The Pads are Flatter than the Top Surface
The undercut is key to the flatness of the pads. The pads are machined flat after the undercut is made.
Friction heats the pads during the processing step that provides them with a maximally flat profile. By reducing the surface area of the pads, the undercut reduces the amount of heat generated during this step.
It is beneficial to minimize the heat generated during machining. Metal expands when heated, and the uneven heating that occurs during machining can distort the dimensions of the part. If the dimensions of the part are distorted during machining, the part can be left with high spots and other undesirable features after it cools. This can cause instability and misalignment when using the part.
Precision Instruments and Devices have Pads
Another example of a component with pads is the LX10 linear stage shown in Figures 6 and 7.
Figure 2: The behavior of the ray at the boundary between the core and cladding, which depends on their refractive indices, determines whether the ray incident on the end face is coupled into the core. The equation for NA can be found using geometry and the two equations noted at the top of this figure.
NA and Acceptance Angle
Incident light is modeled as rays to obtain the relationship between NA and the maximum acceptance angle (θmax), which describes the fiber's ability to gather light from off-axis sources. The equation at the top of Figure 1 can be used to determine whether rays traced from different light sources will be coupled into the fiber's core.
Rays with an angle of incidence ≤θmax are totally internally reflected (TIR) at the boundary between the fiber's core and cladding. As these rays propagate down the fiber, they remain trapped in the core.
Rays with angles of incidence larger than θmax refract at the interface between core and cladding, and this light is eventually lost from the fiber.
Geometry Defines the Relationship
The relationship among NA, θmax, and the refractive indices of the core and cladding, ncore and nclad, respectively, can be found using the geometry diagrammed in Figure 2. This geometry illustrates the most extreme conditions under which TIR will occur at the boundary between the core and cladding.
The equations at the top of Figure 2 are expressions of Snell's law and describe the rays' behavior at both interfaces. Note that the simplification sin(90°) = 1 has been used. Only the indices of the core and cladding limit the value of θmax .
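Carrying the algebra through gives the familiar result NA = √(ncore² − nclad²), with sin(θmax) = NA for a fiber in air. A quick evaluation, using index values that are illustrative of a typical silica multimode fiber (assumed, not taken from a specific product):

```python
import math

# Acceptance angle from the core and cladding indices:
# NA = sqrt(ncore^2 - nclad^2), and sin(theta_max) = NA for a fiber in air.
n_core = 1.4630   # illustrative core index
n_clad = 1.4558   # illustrative cladding index

na = math.sqrt(n_core**2 - n_clad**2)
theta_max = math.degrees(math.asin(na))   # fiber surrounded by air (n = 1)

print(f"NA        = {na:.3f}")
print(f"theta_max = {theta_max:.1f} deg")
```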
Angles of Incidence and Fiber Modes
When the angle of incidence is ≤ θmax, the incident light ray is coupled into one of the multimode fiber's guided modes. Generally speaking, the lower the angle of incidence, the lower the order of the excited fiber mode. Lower-order modes concentrate most of their intensity near the center of the core. The lowest-order mode is excited by rays incident normally on the end face.
Single Mode Fibers are Different
In the case of single mode fibers, the ray model in Figure 2 is not useful, and the calculated NA (acceptance angle) does not equal the maximum angle of incidence or describe the fiber's light-gathering ability.
Single mode fibers have only one guided mode, the lowest order mode, which is excited by rays with 0° angles of incidence. However, calculating the NA results in a nonzero value. The ray model also does not accurately predict the divergence angles of the light beams successfully coupled into and emitted from single mode fibers. The beam divergence occurs due to diffraction effects, which are not taken into account by the ray model but can be described using the wave optics model. The Gaussian beam propagation model can be used to calculate beam divergence with high accuracy.
Figure 3: For maximum coupling efficiency into single mode fibers, the light should be an on-axis Gaussian beam with its waist located at the fiber's end face, and the waist diameter should equal the MFD. The beam output by the fiber also resembles a Gaussian with these characteristics. In the case of single mode fibers, the ray optics model and NA are inadequate for determining coupling conditions. The mode intensity (I) profile across the radius (ρ) is illustrated.
As light propagates down a single mode fiber, the beam maintains a cross sectional profile that is nearly Gaussian in shape. The mode field diameter (MFD) describes the width of this intensity profile. The better an incident beam matches this intensity profile, the larger the fraction of light coupled into the fiber. An incident Gaussian beam with a beam waist equal to the MFD can achieve particularly high coupling efficiency.
Using the MFD as the beam waist in the Gaussian beam propagation model can provide highly accurate incident beam parameters, as well as the output beam's divergence.
Determining Coupling Requirements
A benefit of optical fibers is that light carried by the fibers' guided mode(s) does not spread out radially and is minimally attenuated as it propagates. Coupling light into one of a fiber's guided modes requires matching the characteristics of the incident light to those of the mode. Light that is not coupled into a guided mode radiates out of the fiber and is lost. This light is said to leak out of the fiber.
Single mode fibers have one guided mode, and wave optics analysis reveals that the mode is described by a Bessel function. The amplitude profiles of Gaussian and Bessel functions closely resemble one another, which is convenient: using a Gaussian function as a substitute simplifies modeling the fiber's mode while providing accurate results.
Figure 3 illustrates the single mode fiber's mode intensity cross section, which the incident light must match in order to couple into the guided mode. The intensity (I) profile is a near-Gaussian function of radial distance (ρ). The MFD, which is constant along the fiber's length, is the width measured where the intensity has fallen to e⁻² of its peak value. The MFD encloses ~86% of the beam's power.
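The ~86% figure follows from integrating the near-Gaussian intensity profile out to the 1/e² radius; a one-line check:

```python
import math

# Fraction of a Gaussian beam's power enclosed within the 1/e^2 diameter
# (the MFD): integrating the intensity profile gives 1 - e^-2, i.e. ~86%.
enclosed_fraction = 1.0 - math.exp(-2.0)
print(f"{enclosed_fraction:.3f}")  # 0.865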
Since lasers emitting only the lowest-order transverse mode provide Gaussian beams, this laser light can be efficiently coupled into single mode fibers.
Coupling Light into the Single Mode Fiber
To efficiently couple light into the core of a single mode fiber, the waist of the incident Gaussian beam should be located at the fiber's end face. The intensity profile of the beam's waist should overlap and match the characteristics of the mode intensity cross section. The required incident beam parameters can be calculated using the fiber's MFD with the Gaussian beam propagation model.
The coupling efficiency will be reduced if the beam waist is a different diameter than the MFD, the cross-sectional profile of the beam is distorted or shifted with respect to the modal spot at the end face, and / or if the light is not directed along the fiber's axis.
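For the waist-diameter mismatch alone, the standard Gaussian mode overlap integral gives a closed-form efficiency. The sketch below uses it with an MFD typical of SMF-28 at 1550 nm (an assumed example value; the function name is ours):

```python
def coupling_efficiency(w_in, mfd):
    """Overlap of a centered, on-axis Gaussian beam (waist radius w_in at
    the end face) with the fiber's near-Gaussian mode (waist = MFD/2).
    Standard overlap-integral result: eta = (2*w1*w2 / (w1^2 + w2^2))^2."""
    w_mode = mfd / 2.0
    return (2.0 * w_in * w_mode / (w_in**2 + w_mode**2)) ** 2

mfd = 10.4  # um, typical MFD of SMF-28 at 1550 nm (assumed for illustration)
for w in (4.0, 5.2, 6.5):  # incident beam waist radii in um
    print(f"waist {w:4.1f} um -> coupling efficiency "
          f"{coupling_efficiency(w, mfd):.3f}")
```

Note how forgiving the matched case is: even a ~25% waist mismatch still couples more than 90% of the power, while angular and lateral misalignment (not modeled here) typically degrade efficiency much faster.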
References: Andrew M. Kowalevicz Jr. and Frank Bucholtz, "Beam Divergence from an SMF-28 Optical Fiber," NRL/MR/5650--06-8996 (Naval Research Laboratory, Washington, DC, 2006).
Does NA provide a good estimate of beam divergence from a single mode fiber?
Significant error can result when the numerical aperture (NA) is used to estimate the cone of light emitted from, or that can be coupled into, a single mode fiber. A better estimate is obtained using the Gaussian beam propagation model to calculate the divergence angle. This model allows the divergence angle to be calculated for whatever beam spot size best suits the application.
Since the mode field diameter (MFD) specified for single mode optical fibers encloses ~86% of the beam power, this definition of spot size is often appropriate when collimating light from and focusing light into a single mode fiber. In this case, to a first approximation and when measured in the far field,

θSM ≈ 2λ / (π · MFD)     (1)

is the divergence or acceptance angle (θSM), in radians. This is half the full angular extent of the beam, it is wavelength (λ) dependent, and the beam's waist diameter has been set equal to the fiber's MFD.
Figure 4: These curves illustrate the consequence of using NA to calculate the divergence (θSM ) of light output from a single mode fiber. Significant error in beam spot diameter can be avoided by using the Gaussian beam propagation model.
This plot models a beam from SM980-5.8-125. The values used for NA and MFD were 0.13 and 6.4 µm, respectively. The operating wavelength was 980 nm, and the Rayleigh range was 32.8 µm.
Gaussian Beam Approach
Although a diverging cone of light is emitted from the end face of a single mode optical fiber, this light does not behave as multiple rays travelling at different angles to the fiber's axis.
The divergence angle of a Gaussian beam can differ substantially from the angle calculated by assuming the light behaves as rays. Using the ray model, the divergence angle would equal sin⁻¹(NA). However, this relationship between NA and divergence angle is valid only for highly multimode fibers.
Figure 4 illustrates that using the NA to estimate the divergence angle can result in significant error. In this case, the divergence angle was needed for a point on the circle enclosing 86% of the beam's optical power. The intensity at a point on this circle is a factor of 1/e² lower than the peak intensity.
The equations to the right of the plot in Figure 4 were used to accurately model the divergence of the beam emitted from the single mode fiber's end face. The values used to complete the calculations, including the fiber's MFD, NA, and operating wavelength, are given in the figure's caption. This rate of beam divergence assumes a beam size defined by the 1/e² radius, is nonlinear for distances z < zR, and is approximately linear in the far field (z >> zR).
The angles noted on the plot were calculated from each curve's respective slope. When the far field approximation given by Equation (1) is used, the calculated divergence angle is 0.098 radians (5.61°).
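The comparison can be reproduced directly from Equation (1) and the ray-model estimate sin⁻¹(NA), using the SM980-5.8-125 values given in the caption of Figure 4:

```python
import math

# Far-field divergence half-angle of light from SM980-5.8-125, using the
# values in the caption of Figure 4: MFD = 6.4 um, NA = 0.13, 980 nm.
wavelength = 0.980  # um
mfd = 6.4           # um
na = 0.13

theta_gauss = 2.0 * wavelength / (math.pi * mfd)  # Gaussian model, ~0.098 rad
theta_ray = math.asin(na)                         # ray-model estimate, ~0.130 rad

print(f"Gaussian model: {theta_gauss:.4f} rad ({math.degrees(theta_gauss):.2f} deg)")
print(f"Ray (NA) model: {theta_ray:.4f} rad ({math.degrees(theta_ray):.2f} deg)")
```

The ray-model estimate overstates the divergence by roughly a third for this fiber, which is the error the figure warns against.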
References: Andrew M. Kowalevicz Jr. and Frank Bucholtz, "Beam Divergence from an SMF-28 Optical Fiber," NRL/MR/5650--06-8996 (Naval Research Laboratory, Washington, DC, 2006).
Date of Last Edit: Feb. 28, 2020
Figure 5: For maximum coupling efficiency into single mode fibers, the light should be an on-axis Gaussian beam with its waist located at the fiber's end face, and the waist diameter should equal the MFD.
Adjusting the incident beam's angle, position, and intensity profile can improve the coupling efficiency of light into a single mode optical fiber. Assuming the fiber's end face is planar and perpendicular to the fiber's long axis, coupling efficiency is optimized for beams meeting the following criteria (Figure 5):
Gaussian intensity profile.
Normal incidence on the fiber's end face.
Beam waist in the plane of the end face.
Beam waist centered on the fiber's core.
Diameter of the beam waist equal to the mode field diameter (MFD) of the fiber.
Deviations from these ideal coupling conditions are illustrated in Figure 6.
The Light Source can Limit Coupling Efficiency
Lasers emitting only the lowest-order transverse mode provide beams with near-Gaussian profiles, which can be efficiently coupled into single mode fibers.
The coupling efficiency of light from multimode lasers or broadband light sources into the guided mode of a single mode fiber will be poor, even if the light is focused on the core region of the end face. Most of the light from these sources will leak out of the fiber.
The poor coupling efficiency is due to only a fraction of the light in these multimode sources matching the characteristics of the single mode fiber's guided mode. By spatially filtering the light from the source, the amount of light that may be coupled into the fiber's core can be estimated. At best, a single mode fiber will accept only the light in the Gaussian beam output by the filter.
The coupling efficiency of light from a multimode source into a fiber's core can be improved if a multimode fiber is used instead of a single mode fiber.
References  Andrew M. Kowalevicz Jr. and Frank Bucholtz, Beam Divergence from an SMF-28 Optical Fiber, NRL/MR/5650--06-8996 (Naval Research Laboratory, Washington, DC, 2006).
Is the max acceptance angle constant across the core of a multimode fiber?
It depends on the type of fiber. A step-index multimode fiber provides the same maximum acceptance angle at every position across the fiber's core. Graded-index multimode fibers, in contrast, accept rays with the largest range of incident angles only at the core's center. The maximum acceptance angle decreases with distance from the center and approaches 0° near the interface with the cladding.
Figure 7: Step-index multimode fibers have an index of refraction (n) that is constant across the core. Graded-index multimode fibers have an index that varies across the core. Typically the maximum index occurs at the center.
Figure 8: Step-index multimode fibers accept light incident in the core at angles ≤ |θmax| with good coupling efficiency. The maximum acceptance angle is constant across the core's radius (ρ). Air is assumed to surround the fiber.
Figure 9: Graded-index multimode fibers have acceptance angles that vary with radius (ρ), since the refractive index of the core varies with radius. The largest acceptance angles typically occur near the center, and the smallest, which approach 0°, occur near the boundary with the cladding (0 < ρ1 < ρ2). Air is assumed to surround the fiber.
Step-Index Multimode Fiber The core of a step-index multimode fiber has a flat-top index profile, which is illustrated on the left side of Figure 7. When light is coupled into the planar end face of the fiber, the maximum acceptance angle (θmax) is the same at every location across the core (Figure 8). This is because the refractive index is constant across the core; the acceptance angle depends on the refractive indices of both the core and the cladding.
Regardless of whether rays are incident near the center or edge of the core, step-index multimode fibers will accept cones of rays spanning angles ±θmax with respect to the fiber's axis.
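For a step-index fiber in air, the acceptance angle follows directly from the core and cladding indices via NA = sin θmax = √(n₁² − n₂²). The indices below are hypothetical silica-like values, not taken from this article.

```python
import math

def step_index_na(n_core, n_clad):
    """Numerical aperture of a step-index fiber in air: NA = sin(theta_max)
    = sqrt(n_core^2 - n_clad^2)."""
    return math.sqrt(n_core**2 - n_clad**2)

# Hypothetical example indices
na = step_index_na(1.458, 1.452)                 # ~0.132
theta_max_deg = math.degrees(math.asin(na))      # ~7.6 deg, same at every rho
```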
Graded-Index Multimode Fibers The core of a typical graded-index multimode fiber, shown on the right side of Figure 7, has a refractive index that is greatest at the center of the core and decreases with radial distance (ρ). The equation included below the diagram in Figure 9 shows that the radial dependence of the core's refractive index results in a radial dependence of the maximum acceptance angle and numerical aperture (NA). This equation also assumes a planar end face, normal to the fiber's axis and surrounded by air.
Cones of rays with angular ranges limited by the core's refractive index profile are illustrated in Figure 9. The cone of rays with the largest angular spread (±θmax) occurs on the fiber's axis (ρ = 0). The angular spread decreases as the radial distance from the axis increases.
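The radial dependence described above can be sketched with a power-law index profile, where α = 2 gives the common parabolic profile. The profile equation here is the textbook form and may differ in detail from the equation shown in Figure 9; the indices are hypothetical examples.

```python
import math

def graded_index_local_na(rho, a, n1, n2, alpha=2.0):
    """Local NA at radius rho for a graded-index core of radius a with a
    power-law profile n(rho) = n1*sqrt(1 - 2*delta*(rho/a)**alpha), where
    delta = (n1^2 - n2^2)/(2*n1^2). Equals the step-index NA at rho = 0
    and falls to 0 at the cladding boundary (rho = a)."""
    delta = (n1**2 - n2**2) / (2.0 * n1**2)
    n_sq = n1**2 * (1.0 - 2.0 * delta * (rho / a)**alpha)
    return math.sqrt(max(n_sq - n2**2, 0.0))
```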
Step-Index or Graded-Index? A step-index multimode fiber has the potential to collect more light than a graded-index multimode fiber. This is because the NA is constant across the step-index core, while the NA decreases with radial distance across the graded-index core.
However, the graded-index profile causes all of the guided modes to have similar propagation velocities, which reduces the modal dispersion of the light beam as it travels in the fiber.
For applications that rely on coupling as much light as possible into the multimode fiber and are less sensitive to modal dispersion, a step-index multimode fiber may be the better choice. If the reverse is true, a graded-index multimode fiber should be considered.
References  Gerd Keiser, Optical Fiber Communications (McGraw-Hill, New York, 1991), Section 2.6.
Figure 1: Typical fluorescence yields at each wavelength are around four orders of magnitude lower than the intensity of the excitation light.
The spectral fluorescence yield relates the intensity of the fluorescence emitted within the integrating sphere to the intensity of the excitation light. The yield is calculated by dividing the wavelength-dependent, total fluorescence excited over the entire interior surface of the sphere by the intensity of the excitation light.
Data were kindly provided by Dr. Ping-Shine Shaw, Physics Laboratory, National Institute of Standards and Technology, Gaithersburg, MD 20899, USA.
A material of choice for coating the light-diffusing cavities of integrating spheres is polytetrafluoroethylene (PTFE). This material, which is white in appearance, is favored for reasons including its high, flat reflectance over a wide range of wavelengths and chemical inertness.
However, it should be noted that integrating spheres coated with either PTFE or barium sulfate, an alternative coating with lower reflectance, emit low levels of ultraviolet (UV) and blue fluorescence when irradiated by UV light. [1-3]
Hydrocarbons in the PTFE Fluoresce It is not the PTFE that fluoresces. The sources of the UV and blue fluorescence are hydrocarbons in the PTFE. Low levels of hydrocarbon impurities are present in the raw coating material, and pollution sources deposit additional hydrocarbon contaminants in the PTFE material of the integrating sphere during its use and storage. 
Fluorescence Wavelength Bands and Strength Researchers at the National Institute of Standards and Technology (NIST) have investigated the fluorescence excited by illuminating PTFE-coated integrating spheres. The total fluorescence output by the integrating sphere was measured with respect to fluorescence wavelength and excitation wavelength. The maximum fluorescence was approximately four orders of magnitude lower than the intensity of the exciting radiation.
The UV and blue fluorescence from PTFE is primarily excited by incident wavelengths in a 200 nm to 300 nm absorption band. The fluorescence is emitted over the 250 nm to 400 nm wavelength range, as shown in Figure 1. These data indicate that increasing the excitation wavelength decreases the fluorescence emitted at shorter wavelengths and changes the shape of the fluorescence spectrum.
As the levels of hydrocarbon contaminants in the PTFE increase, fluorescence increases. A related effect is a decrease in the light output by the integrating sphere over the absorption band wavelengths, due to more light from this spectral region being absorbed. [1, 3]
Impact on Applications The UV and blue fluorescence from the PTFE has negligible effect on many applications, since the intensity of the fluorescence is low and primarily excited by incident wavelengths <300 nm. Applications sensitive to this fluorescence include long-term measurements of UV radiation throughput, UV source calibration, establishing UV reflectance standards, and performing some UV remote sensing tasks. 
Minimizing Fluorescence Effects Minimizing and stabilizing the fluorescence levels requires isolating the integrating sphere from all sources of hydrocarbons, including gasoline- and diesel-burning engine exhaust and organic solvents, such as naphthalene and toluene. It should be noted that, while hydrocarbon contamination can be minimized and reduced, it cannot be eliminated. 
Since the history of each integrating sphere's exposure to hydrocarbon contaminants is unique, it is not possible to predict the response of a particular sphere to incident radiation. When an application is negatively impacted by the fluorescence, calibration of the integrating sphere is recommended. A calibration procedure described in the references requires a light source with a well-known spectrum that extends across the wavelength region of interest, such as a deuterium lamp or synchrotron radiation, a monochromator, a detector, and the integrating sphere.
References
Ping-Shine Shaw, Zhigang Li, Uwe Arp, and Keith R. Lykke, "Ultraviolet characterization of integrating spheres," Appl. Opt. 46, 5119-5128 (2007).
Jan Valenta, "Photoluminescence of the integrating sphere walls, its influence on the absolute quantum yield measurements and correction methods," AIP Advances 8, 102123 (2018).
Robert D. Saunders and William R. Ott, "Spectral irradiance measurements: effect of UV-produced fluorescence in integrating spheres," Appl. Opt. 15, 827-828 (1976).
Ping-Shine Shaw, Uwe Arp, and Keith R. Lykke, "Measurement of the ultraviolet-induced fluorescence yield from integrating spheres," Metrologia 46, S191-S196 (2009).
Figure 2: Measuring diffuse sample transmittance and reflectance as shown above can result in a distorted sample spectrum due to sample substitution error. The problem is that the reflectivity over the sample area is different during the reference and sample measurements.
Figure 3: The above configuration is not susceptible to sample substitution error, since the interior of the sphere is the same for reference and sample measurements. During the reference measurement the light travels along (R), and no light is incident along (S). The opposite is true when a sample measurement is made.
Absolute transmittance and absolute diffuse reflectance spectra of optical samples can be found using integrating spheres. These spectra are found by performing spectral measurements of both the sample of interest and a reference.
Measurement of a reference is needed since this provides the spectrum of the illuminating light source. Obtaining the reference scan allows the spectrum of the light source to be subtracted from the sample measurement.
The light source reference measurement is made with no sample in place for transmittance data and with a highly reflective white standard reference sample in place for reflectance measurements.
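The reference-scan correction described above amounts to a wavelength-by-wavelength ratio. A minimal sketch, with an optional dark scan for detector background (the function name and arguments are illustrative, not from this article):

```python
def absolute_spectrum(sample_scan, reference_scan, dark_scan=None):
    """Divide a sample scan by a reference scan, wavelength by wavelength,
    to remove the light source's spectrum; an optional dark scan removes
    detector background. For reflectance, multiply the result by the known
    reflectance of the standard reference sample."""
    if dark_scan is None:
        dark_scan = [0.0] * len(sample_scan)
    return [(s - d) / (r - d)
            for s, r, d in zip(sample_scan, reference_scan, dark_scan)]
```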
Sample substitution errors incurred while acquiring the sample and reference measurement sets can negatively affect the accuracy of the corrected sample spectrum, unless the chosen experimental technique is immune to these errors.
Conditions Leading to Sample Substitution Errors An integrating sphere's optical performance depends on the reflectance at each point on its entire inner surface. Often, a section of the sphere's inner wall is replaced by the sample when its transmittance and diffuse reflectance spectra are measured (Figure 2). However, modifying a section of the inner wall alters the performance of the integrating sphere.
Sample substitution errors are a concern when the measurement procedure involves physically changing one sample installed within the sphere for another. For example, when measuring diffuse reflectance (Figure 2, bottom), a first measurement might be made with the standard reference sample mounted inside the sphere. Next, this sample would be removed and replaced by the sample of interest, and a second measurement would be acquired. Both data sets would then be used to calculate the corrected absolute diffuse reflectance spectrum of the sample.
This procedure would result in a distorted absolute sample spectrum. Since the sample of interest and the standard reference have different absorption and scattering properties, exchanging them alters the reflectivity of the integrating sphere over the samples' surface areas. Due to the average reflectivity of the integrating sphere being different for the two measurements, they are not perfectly compatible.
Solution Option: Install Sample and Reference Together One experimental technique that avoids sample substitution errors acquires measurement data while both sample and reference are installed inside the integrating sphere at the same time. This approach requires an integrating sphere large enough to accommodate the two at additional ports.
The light source is located external to the integrating sphere, and measurements of the sample and standard reference are acquired sequentially. The specular reflection from the sample, or the transmitted beam, is often routed out of the sphere, so that only the diffuse light is detected. Since the inner surface of the sphere is identical for both measurements, sample substitution errors are not a concern.
Alternate Solution Option: Make Measurements from Sample and Reference Ports If it is not possible to install both the sample and the standard reference in the integrating sphere at the same time, it is necessary to exchange the installed sample. If this must be done, sample substitution errors can be removed by following the procedure detailed in the reference.
This procedure requires a total of four measurements. When the standard sample is installed, measurements are made from two different ports: one has a field of view that includes the sample, and the other does not. The sample of interest is then substituted in, and the measurements are repeated. Performing the calculations described in the reference using these measurements removes the sample substitution errors.
References  Luka Vidovic and Boris Majaron, "Elimination of single-beam substitution error in diffuse reflectance measurements using an integrating sphere," J. Biomed. Opt. 19, 027006 (2014).
Figure 1: This example of an L-I curve for a QCL illustrates the typical non-linear slope and rollover region exhibited by QCLs and ICLs. Operating parameters determine the heat load carried by the lasing region, which influences the peak output power. This laser was installed in a temperature controlled mount set to 25 °C.
Figure 2: This set of L-I curves for a QCL illustrates that the mount temperature can affect the peak output power, but that using a temperature controlled mount does not remove the danger of applying a driving current large enough to exceed the rollover point and damage the laser.
The light vs. driving current (L-I) curves measured for quantum and interband cascade lasers (QCLs and ICLs) include a rollover region, which is enclosed by the red box in Figure 1.
The rollover region includes the peak output power of the laser, which corresponds to a driving current of just under 500 mA in this example. Applying higher drive currents risks damaging the laser.
Laser Operation These lasers operate by forcing electrons down a controlled series of energy steps, which are created by the laser's semiconductor layer structure and an applied bias voltage. The driving current supplies the electrons.
An electron must give up some of its energy to drop down to a lower energy level. When an electron descends one of the laser's energy steps, the electron loses energy in the form of a photon. But, the electron can also lose energy by giving it to the semiconductor material as heat, instead of emitting a photon.
Heat Build Up Lasers are not 100% efficient in forcing electrons to surrender their energy in the form of photons. The electrons that lose their energy as heat cause the temperature of the lasing region to increase.
Conversely, heat in the lasing region can be absorbed by electrons. This boost in energy can scatter electrons away from the path leading down the laser's energy steps. Later, scattered electrons typically lose energy as heat, instead of as photons.
As the temperature of the lasing region increases, more electrons are scattered, and a smaller fraction of them produce light instead of heat. Rising temperatures can also result in changes to the laser's energy levels that make it harder for electrons to emit photons. These processes work together to increase the temperature of the lasing region and to decrease the efficiency with which the laser converts current to laser light.
Operating Limits are Determined by the Heat Load Ideally, the slope of the L-I curve would be linear above the threshold current, which is around 270 mA in Figure 1. Instead, the slope decreases as the driving current increases, due to the effects of the rising temperature of the lasing region. Rollover occurs when the laser is no longer effective in converting additional current to laser light; instead, the extra driving current creates only heat. When the current is high enough, the strong localized heating of the lasing region will cause the laser to fail.
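The shape described here can be imitated with a toy empirical model: a linear slope above threshold minus a quadratic "thermal" term that produces rollover. The coefficients below are arbitrary, chosen only so the rollover lands near the roughly 495 mA peak in Figure 1; real L-I curves must be measured, not modeled this way.

```python
def output_power(i_ma, i_th=270.0, slope=1.0, sat=0.00222):
    """Toy L-I model (arbitrary power units): linear gain above the threshold
    current i_th minus a quadratic thermal term. Illustrative only; not a
    physical QCL model."""
    if i_ma <= i_th:
        return 0.0
    x = i_ma - i_th
    return max(slope * x - sat * x**2, 0.0)

# Locate the rollover (peak-power) current on a 1 mA grid
i_rollover = max(range(271, 700), key=output_power)   # near 495 mA here
```

Driving past `i_rollover` in this model only reduces output, mirroring the danger described above.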
A temperature controlled mount is typically necessary to help manage the temperature of the lasing region. But, since the thermal conductivity of the semiconductor material is not high, heat can still build up in the lasing region. As illustrated in Figure 2, the mount temperature affects the peak optical output power but does not prevent rollover.
The maximum drive current and the maximum optical output power of QCLs and ICLs depend on the operating conditions, since these determine the heat load of the lasing region.
Figure 3: The external housing of HeNe lasers is mechanically coupled to the components of the lasing cavity. Stress applied to the external housing can misalign and potentially fracture lasing cavity components, which can negatively impact the quality and power of the output laser beam (red arrow) or lead to laser failure.
High Reflector Optics
Glass Laser Bore
Metal Springs that Align and Stabilize Bore
Output Coupling Optics
HeNe lasers should be handled and mounted with care to protect them from damage.
Never apply a bending force to the laser housing. Stress applied to the laser's external housing can misalign or damage components in the laser cavity. This can:
Affect the output beam quality.
Result in reduced output power.
Affect the beam pointing.
Cause multimode effects.
Factory packaging protects HeNe lasers from shocks and vibrations during shipping, but end users handle the bare laser housing directly. Because of this, HeNe lasers are at greater risk of damaging stress during handling by the end user.
As a result, the primary cause of damage to HeNe lasers is rough handling after receipt. In extreme cases, shock and vibrations can shatter or fracture glass components internal to the laser.
To maintain the optimum performance of your HeNe laser, do not drop it, never use force when inserting it into a fixture, and use care when installing it into mounts, securing it with cage components or ring accessories that grip the housing, transporting it, and storing it.
HeNe lasers will provide optimum performance over a long lifetime when they are handled gently.
Figure 6: Rise time (tr ) of the intensity signal is typically measured between the 10% and 90% points on the curve. The rise time depends on the wheel's rotation rate and the beam diameter.
Camera and scanning-slit beam profilers are tools for characterizing beam size and shape, but these instruments cannot provide an accurate measurement if the beam size is too small or the wavelength is outside of the operating range.
A chopper wheel, photodetector, and oscilloscope can provide an approximate measurement of the beam size (Figure 4). As the rotating chopper wheel's blade passes through the beam, an S-shaped trace is displayed on the oscilloscope.
When the blade sweeps through the angle θ , the rise or fall time of the S-curve is proportional to the size of the beam along the direction of the blade's travel (Figure 5). A point on the blade located a distance R from the center of the wheel sweeps through an arc length (Rθ ) that is approximately equal to the size of the beam along this direction.
To make this beam size measurement, the combined response of the detector and oscilloscope should be much faster than the signal's rate of change.
Example: S-Curve with Rising Edge When the wheel's rotation rate (f) is given in revolutions per second (Hz), the angle in radians subtended by the beam is θ = 2πf·tr, where tr is the signal's rise time (Figure 6). The arc length through the beam, Rθ = 2πR·f·tr, can be calculated from this angle. For a small Gaussian-shaped beam, a first approximation of the 1/e2 beam diameter is D ≈ 2πR·f·tr / 0.64, where the factor of 0.64 accounts for measuring the rise time between the 10% and 90% intensity points.
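Putting the pieces together, and assuming f is in revolutions per second (so the swept angle in radians is 2πf·tr) with the 0.64 factor for the 10%-90% rise of a Gaussian edge, a minimal sketch:

```python
import math

def beam_diameter(blade_radius_m, rot_rate_hz, rise_time_s):
    """Approximate 1/e^2 diameter of a small Gaussian beam from a chopper
    measurement: the arc length swept during the 10%-90% rise time,
    divided by 0.64."""
    arc = 2.0 * math.pi * blade_radius_m * rot_rate_hz * rise_time_s
    return arc / 0.64

# Example: R = 20 mm, f = 100 Hz, rise time = 10 us -> beam ~0.2 mm
```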
Figure 1: C-mount lenses and cameras have the same flange focal distance (FFD), 17.526 mm. This ensures light through the lens focuses on the camera's sensor. Both components have 1.000"-32 threads, sometimes referred to as "C-mount threads".
Figure 2: CS-mount lenses and cameras have the same flange focal distance (FFD), 12.526 mm. This ensures light through the lens focuses on the camera's sensor. Their 1.000"-32 threads are identical to threads on C-mount components, sometimes referred to as "C-mount threads."
The C-mount and CS-mount camera system standards both include 1.000"-32 threads, but the two mount types have different flange focal distances (FFD, also known as flange focal depth, flange focal length, register, flange back distance, and flange-to-film distance). The FFD is 17.526 mm for the C-mount and 12.526 mm for the CS-mount (Figures 1 and 2, respectively).
Since their flange focal distances are different, the C-mount and CS-mount components are not directly interchangeable. However, with an adapter, it is possible to use a C-mount lens with a CS-mount camera.
Mixing and Matching C-mount and CS-mount components have identical threads, but lenses and cameras of different mount types should not be directly attached to one another. If this is done, the lens' focal plane will not coincide with the camera's sensor plane due to the difference in FFD, and the image will be blurry.
With an adapter, a C-mount lens can be used with a CS-mount camera (Figures 3 and 4). The adapter increases the separation between the lens and the camera's sensor by 5.0 mm, to ensure the lens' focal plane aligns with the camera's sensor plane.
In contrast, the shorter FFD of CS-mount lenses makes them incompatible for use with C-mount cameras (Figure 5). The lens and camera housings prevent the lens from mounting close enough to the camera sensor to provide an in-focus image, and no adapter can bring the lens closer.
It is critical to check the lens and camera parameters to determine whether the components are compatible, an adapter is required, or the components cannot be made compatible.
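The compatibility check described above reduces to comparing flange focal distances. A minimal sketch (the function name is ours):

```python
def adapter_spacing_mm(lens_ffd_mm, camera_ffd_mm):
    """Extra spacing an adapter must add so the lens' focal plane lands on
    the camera's sensor. Positive -> an adapter of that thickness works;
    negative -> the lens would need to sit closer than the camera body
    allows, so no adapter can help."""
    return lens_ffd_mm - camera_ffd_mm

C_MOUNT_FFD_MM = 17.526
CS_MOUNT_FFD_MM = 12.526

# C-mount lens on CS-mount camera: needs a +5.0 mm adapter
# CS-mount lens on C-mount camera: -5.0 mm, cannot be made compatible
```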
1.000"-32 Threads Imperial threads are properly described by their diameter and the number of threads per inch (TPI). In the case of both these mounts, the thread diameter is 1.000" and the TPI is 32. Due to the prevalence of C-mount devices, the 1.000"-32 thread is sometimes referred to as a "C-mount thread." Using this term can cause confusion, since CS-mount devices have the same threads.
Measuring Flange Focal Distance Measurements of flange focal distance are given for both lenses and cameras. In the case of lenses, the FFD is measured from the lens' flange surface (Figures 1 and 2) to its focal plane. The flange surface follows the lens' planar back face and intersects the base of the external 1.000"-32 threads. In cameras, the FFD is measured from the camera's front face to the sensor plane. When the lens is mounted on the camera without an adapter, the flange surfaces on the camera front face and lens back face are brought into contact.
Figure 5: A CS-mount lens is not directly compatible with a C-mount camera, since the light focuses before the camera's sensor. Adapters are not useful, since the solution would require shrinking the flange focal distance of the camera (blue arrow).
Figure 4: An adapter with the proper thickness moves the C-mount lens away from the CS-mount camera's sensor by an optimal amount, which is indicated by the length of the purple arrow. This allows the lens to focus light on the camera's sensor, despite the difference in FFD.
Figure 3: A C-mount lens and a CS-mount camera are not directly compatible, since their flange focal distances, indicated by the blue and yellow arrows, respectively, are different. This arrangement will result in blurry images, since the light will not focus on the camera's sensor.
Figure 7: An adapter can be used to optimally position a CS-mount lens on a camera whose flange focal distance is less than 12.5 mm. This sketch is based on a Zelux camera and its SM1A10 adapter.
All Kiralux™ and Quantalux® scientific cameras are factory set to accept C-mount lenses. When the attached C-mount adapters are removed from the passively cooled cameras, the SM1 (1.035"-40) internal threads in their flanges can be used. The Zelux scientific cameras also have SM1 internal threads in their mounting flanges, as well as the option to use a C-mount or CS-mount adapter.
The SM1 threads integrated into the camera housings are intended to facilitate the use of lens assemblies created from Thorlabs components. Adapters can also be used to convert from the camera's C-mount configurations. When designing an application-specific lens assembly or considering the use of an adapter not specifically designed for the camera, it is important to ensure that the flange focal distances (FFD) of the camera and lens match, as well as that the camera's sensor size accommodates the desired field of view (FOV).
Made for Each Other: Cameras and Their Adapters Fixed adapters are available to configure the Zelux cameras to meet C-mount and CS-mount standards (Figures 6 and 7). These adapters, as well as the adjustable C-mount adapters attached to the passively cooled Kiralux and Quantalux cameras, were designed specifically for use with their respective cameras.
While any adapter converting from SM1 to 1.000"-32 threads makes it possible to attach a C-mount or CS-mount lens to one of these cameras, not every thread adapter aligns the lens' focal plane with a specific camera's sensor plane. In some cases, no adapter can align these planes. For example, of these scientific cameras, only the Zelux can be configured for CS-mount lenses.
The position of the lens' focal plane is determined by a combination of the lens' FFD, which is measured in air, and any refractive elements between the lens and the camera's sensor. When light focused by the lens passes through a refractive element, instead of just travelling through air, the physical focal plane is shifted to longer distances by an amount that can be calculated. The adapter must add enough separation to compensate for both the camera's FFD, when it is too short, and the focal shift caused by any windows or filters inserted between the lens and sensor.
Flexibility and Quick Fixes: Adjustable C-Mount Adapter Passively cooled Kiralux and Quantalux cameras consist of a camera with SM1 internal threads, a window or filter covering the sensor and secured by a retaining ring, and an adjustable C-mount adapter.
A benefit of the adjustable C-mount adapter is that it can tune the spacing between the lens and camera over a 1.8 mm range, when the window / filter and retaining ring are in place. Changing the spacing can compensate for different effects that otherwise misalign the camera's sensor plane and the lens' focal plane. These effects include material expansion and contraction due to temperature changes, positioning errors from tolerance stacking, and focal shifts caused by a substitute window or filter with a different thickness or refractive index.
Adjusting the camera's adapter may be necessary to obtain sharp images of objects at infinity. When an object is at infinity, the incoming rays are parallel, and location of the focus defines the FFD of the lens. Since the actual FFDs of lenses and cameras may not match their intended FFDs, the focal plane for objects at infinity may be shifted from the sensor plane, resulting in a blurry image.
If it is impossible to get a sharp image of objects at infinity, despite tuning the lens focus, try adjusting the camera's adapter. This can compensate for shifts due to tolerance and environmental effects and bring the image into focus.
Why can the FFD be smaller than the distance separating the camera's flange and sensor?
Flange focal distance (FFD) values for cameras and lenses assume only air fills the space between the lens and the camera's sensor plane. If windows and / or filters are inserted between the lens and camera sensor, it may be necessary to increase the distance separating the camera's flange and sensor planes to a value beyond the specified FFD. A span equal to the FFD may be too short, because refraction through windows and filters bends the light's path and shifts the focal plane farther away.
If making changes to the optics between the lens and camera sensor, the resulting focal plane shift should be calculated to determine whether the separation between lens and camera should be adjusted to maintain good alignment. Note that good alignment is necessary for, but cannot guarantee, an in-focus image, since new optics may introduce aberrations and other effects resulting in unacceptable image quality.
Figure 9: Refraction causes the ray's angle with the optical axis to be shallower in the medium than in air (θm vs. θo), due to the differences in refractive indices (nm vs. no ). After travelling a distance d in the medium, the ray is only hm closer to the axis. Due to this, the ray intersects the axis Δf beyond the f point.
Figure 11: Tolerance and / or temperature effects may result in the lens and camera having different FFDs. If the FFD of the lens is shorter, images of objects at infinity will be excluded from the focal range. Since the system cannot focus on them, they will be blurry.
Figure 10: When their flange focal distances (FFD) are the same, the camera's sensor plane and the lens' focal plane are perfectly aligned. Images of objects at infinity coincide with one limit of the system's focal range.
A Case of the Bends: Focal Shift Due to Refraction While travelling through a single homogeneous medium, a ray's path is straight (Figure 8). Its angle (θo) with the optical axis is constant as it converges to the focal point (f). Values of FFD are determined assuming this medium is air.
When an optic with plane-parallel sides and a higher refractive index (nm ) is placed in the ray's path, refraction causes the ray to bend and take a shallower angle (θm ) through the optic. This angle can be determined from Snell's law, as described in the table and illustrated in Figure 9.
While travelling through the optic, the ray approaches the optical axis at a slower rate than a ray travelling the same distance in air. After exiting the optic, the ray's angle with the axis is again θo , the same as a ray that did not pass through the optic. However, the ray exits the optic farther away from the axis than if it had never passed through it. Since the ray refracted by the optic is farther away, it crosses the axis at a point shifted Δf beyond the other ray's crossing. Increasing the optic's thickness widens the separation between the two rays, which increases Δf.
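The shift described above can be computed directly. The exact expression Δf = t·(1 − tan θm / tan θo), with θm from Snell's law, reduces to the paraxial limit t·(1 − 1/n) for small angles; the 10° ray angle below is an arbitrary example value.

```python
import math

def focal_shift(thickness, n_medium, theta_o_deg=10.0, n_air=1.0):
    """Focal-plane shift caused by a plane-parallel optic of a given
    thickness. Exact: delta_f = t*(1 - tan(theta_m)/tan(theta_o)), where
    theta_m comes from Snell's law; paraxial limit: t*(1 - n_air/n_medium)."""
    theta_o = math.radians(theta_o_deg)
    theta_m = math.asin(n_air * math.sin(theta_o) / n_medium)
    exact = thickness * (1.0 - math.tan(theta_m) / math.tan(theta_o))
    paraxial = thickness * (1.0 - n_air / n_medium)
    return exact, paraxial

# Example: a 1 mm window with n = 1.5 shifts the focus by roughly 1/3 mm
```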
To Infinity and Beyond It is important to many applications that the camera system be capable of capturing high-quality images of objects at infinity. Rays from these objects are parallel and focused to a point closer to the lens than rays from closer objects (Figure 9). The FFDs of cameras and lenses are defined so the focal point of rays from infinitely distant objects will align with the camera's sensor plane. When a lens has an adjustable focal range, objects at infinity are in focus at one end of the range and closer objects are in focus at the other.
Different effects, including temperature changes and tolerance stacking, can result in the lens and / or camera not exactly meeting the FFD specification. When the lens' actual FFD is shorter than the camera's, the camera system can no longer obtain sharp images of objects at infinity (Figure 11). This offset can also result if an optic is removed from between the lens and camera sensor.
An approach some lenses use to compensate for this is to allow the user to vary the lens focus to points "beyond" infinity. This does not refer to a physical distance; it simply allows the lens to push its focal plane farther away. Thorlabs' Kiralux™ and Quantalux® cameras include adjustable C-mount adapters to allow the spacing to be tuned as needed.
If the lens' FFD is larger than the camera's, images of objects at infinity fall within the system's focal range, but some closer objects that should be within this range will be excluded. This situation can be caused by inserting optics between the lens and camera sensor. If objects at infinity can still be imaged, this can often be acceptable.
Not Just Theory: Camera Design Example The C-mount, hermetically sealed, and TE-cooled Quantalux camera has a fixed 18.1 mm spacing between its flange surface and sensor plane. However, the FFD (f) for C-mount camera systems is 17.526 mm. The camera's need for greater spacing becomes apparent when the focal shifts due to the window soldered into the hermetic cover and the glass covering the sensor are taken into account. The results recorded in the table beneath Figure 9 show that both exact and paraxial equations return a required total spacing of 18.1 mm.
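A back-of-the-envelope version of this calculation is sketched below. The glass thickness and index values are illustrative assumptions chosen to reproduce the stated spacing; they are not the camera's actual specifications.

```python
FFD_C_MOUNT = 17.526  # mm, flange focal distance of the C-mount standard

# Assumed values for the sketch only; the real window and cover glass
# thicknesses and indices are not specified here.
elements = [
    {"name": "hermetic window",    "t_mm": 1.0, "n": 1.5},
    {"name": "sensor cover glass", "t_mm": 0.7, "n": 1.5},
]

# Each plane-parallel element shifts the focus by about t * (1 - 1/n) (paraxial)
total_shift = sum(e["t_mm"] * (1 - 1 / e["n"]) for e in elements)
required_spacing = FFD_C_MOUNT + total_shift
print(f"focal shift: {total_shift:.3f} mm")
print(f"flange-to-sensor spacing: {required_spacing:.3f} mm")
```

With these assumed values the required spacing comes out near 18.1 mm, matching the pattern described above: glass between the lens and sensor pushes the focus downstream, so the mechanical spacing must exceed the nominal FFD.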
Figure 1: The DM713 digital micrometer (right) is included with and used to adjust the retardance provided by the SBC-VIS Soleil-Babinet compensator (left).
Digital micrometers, such as the DM713, are handy for moving a piece of optomech a specific distance. For example, a user might want to increment a translation stage holding a sample in front of an objective lens in order to focus the light to equally spaced points within the sample.
However, there are also times where the user might want to record the position of an event. One example could be making a distance measurement where the micrometer is set to a starting position, zeroed, and then translated the desired amount to display the distance.
Using the DM713 alone creates an extra step where the user has to read and record the display, which can be tedious in a dark lab where the display is not visible. One solution is to use Thorlabs' SBC-COMM, which includes an RS-232 interfacing cable. Thorlabs has created software application notes that walk the user through creating Visual C#® and LabVIEW® programs to continuously measure distances with the DM713.
Another solution is to purchase the Mitutoyo® 05CZA662 SPC cable and IT-016U USB input tool, which provide a push button and USB interfacing cable. With these, the user can open any text-entry software package and press the push button; the device then acts like a keyboard, entering the reading directly into the software.
Figure 1: Parabolic mirrors have a single focal point for all rays in a collimated beam.
Parabolic mirrors perform better than spherical mirrors when collimating light emitted by a point source or focusing a collimated beam.
Focusing Collimated Light Parabolic mirrors (Figure 1) focus all rays in an incoming, collimated light beam to a diffraction-limited spot. In contrast, concave spherical mirrors (Figure 2) concentrate incoming collimated light into a volume larger than a diffraction-limited spot. The size of the spherical mirror's focal volume can be reduced by decreasing the diameter of the incoming collimated beam.
Collimating Light from a Point Source A point source emits light in all directions. When this highly divergent light source is placed at the focal point of a parabolic mirror, the output beam is highly collimated. If the point source were ideal, all reflected rays would be perfectly parallel with one another.
When a point source is placed within a spherical mirror's focal volume, the output beam is not as well collimated as the beam provided by a parabolic mirror. Different rays from the point source are not perfectly parallel after reflection from the spherical mirror, but two reflected rays will be more nearly parallel when they reflect from more closely spaced points on the spherical mirror's surface. Consequently, the quality of the collimated beam can be improved by reducing the area of the reflective surface. This is equivalent to limiting the angular range over which the source in the focal volume emits light.
Choosing Between Parabolic and Spherical Mirrors A parabolic mirror is not always the better choice. Beam diameter, cost constraints, space limitations, and performance requirements of an application all influence selection. Beam diameter is a factor, since the performance of these two mirrors is more similar when the beam diameter is smaller. Parabolic mirrors are more expensive, since their reflective profiles are more difficult to fabricate. Parabolic mirrors are also typically larger. Improved performance may or may not be more important than the difference in cost and physical size.
Figure 3: The focal point of an on-axis parabolic mirror is close to the reflective surface, and typically surrounded by the reflective surface, which makes the focal point difficult to access.
One of the primary benefits of a concave parabolic mirror is its single focal point. All rays travelling parallel to the mirror's axis are reflected through this point. This is useful for a range of purposes, including imaging and manufacturing applications that require focusing laser light to a diffraction limited spot.
There are a few negatives associated with using conventional parabolic mirrors, which are symmetric around the focal point (Figure 3). One is that the sides of the mirror generally obstruct access to the focus. Another is that when the mirror is used to collimate a divergent light source, the housing of the light source blocks a portion of the collimated beam. In particular, light emitted at small angles with respect to the optical axis of the mirror is typically obstructed.
An off-axis parabolic (OAP) mirror (Figure 4) is one solution to this problem. The reflective surface of this mirror is parabolic in shape, but it is not symmetric around the focal point. The reflective surface of the OAP corresponds to a section of the parent parabola that is shifted away from the focal point. The section chosen depends on the desired angle and / or distance between the focal point and the center of the mirror.
Figure 7: Choosing a section closer to the axis of the parabola results in a smaller off-axis angle.
The off-axis angle (θ) of an OAP mirror is measured between the mirror's optical and focal axes. The OAP mirror in Figure 5 has a 90° angle.
The angle depends on the segment of the parent parabola used for the OAP mirror, as well as the width (Figure 6) of the parent parabola.
Proximity of Parabolic Segment and Focal Point Choosing a segment of the parent parabola closer to the focal point reduces the off-axis angle. The mirror in Figure 7 has a smaller angle than the one in Figure 6, but the only difference between them is that the section of the parabola selected for the OAP mirror in Figure 7 is closer to the focal point.
The location of the parabolic segment also controls the focal length. Choosing a parabolic segment closer to the focal point results in a shorter distance between the center of the mirror and the focal point.
Width of the Parent Parabola Increasing the width of the parent parabola decreases the off-axis angle. This inverse relationship is illustrated by Figures 7 and 8. The width of the parabola is larger in Figure 7, and this is also the mirror with a smaller angle.
The width of the parent parabola also affects the focal length. The wider the parabola, the longer the focal length.
Available Off-Axis Angles OAP mirrors are often designed to have a 90° off-axis angle, but OAP mirrors with angles less than 90° are also common.
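The relationships described above follow from standard parabola geometry. For a parent parabola z = r²/(4f) with parent focal length f, a segment centered a distance d from the parent axis has off-axis angle tan θ = d / (f − d²/(4f)) and a center-to-focus distance (reflected focal length) of f + d²/(4f). The sketch below is our own illustration of these relations; the numerical values are arbitrary.

```python
import math

def oap_geometry(parent_f, d):
    """Off-axis angle (deg) and reflected focal length for an OAP segment
    centered a distance d from the parent parabola's optical axis.
    Parent parabola: z = r**2 / (4 * parent_f), focus at (0, parent_f)."""
    z_center = d**2 / (4 * parent_f)            # sag of the segment's center
    theta = math.atan2(d, parent_f - z_center)  # angle between focal and optical axes
    rfl = parent_f + z_center                   # distance from segment center to focus
    return math.degrees(theta), rfl

# A 90 deg OAP results when d = 2 * parent_f; its RFL is twice the parent focal length
theta_deg, rfl = oap_geometry(parent_f=25.4, d=50.8)
print(f"off-axis angle: {theta_deg:.1f} deg, RFL: {rfl:.1f} mm")

# Moving the segment closer to the parent axis shrinks both the angle and the RFL
theta2, rfl2 = oap_geometry(parent_f=25.4, d=25.4)
```

Playing with the parameters confirms the trends in the text: a segment closer to the axis gives a smaller angle and shorter focal length, and a wider parent parabola (larger parent_f) gives a smaller angle and longer focal length.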
Figure 10: When the collimated beam is parallel to the optical axis of a parabolic or OAP mirror, the light focuses to a diffraction-limited spot.
Parabolic and off-axis parabolic (OAP) mirrors will only provide the expected well-collimated beam or diffraction-limited focal spot when the correct beam type is incident along the proper axis. This is due to the parabolic shape of these mirrors' reflective surfaces, which are not symmetric around their focal points.
Parabolic vs. Off-Axis Parabolic Mirrors The reflective surface of an OAP mirror is a section of the parent parabola that is not centered on the parent's optical axis (Figure 9). A conventional parabolic mirror is illustrated in Figure 10.
The optical axis of an OAP mirror is parallel to, but displaced from, the optical axis of the parent parabola. The focal point of the OAP mirror coincides with that of the parent parabola.
The focal axis of the OAP mirror passes through the focal point and the center of the OAP mirror. The focal and optical axes of an OAP mirror are not parallel. In contrast, these axes coincide for parabolic mirrors whose reflective surfaces are centered on the optical axis of the parent parabola.
Focus Collimated Light If a parabolic or OAP mirror is being used to focus a beam of collimated light to a diffraction-limited point, the light must be directed along the mirror's optical axis (Figures 9 and 10).
Collimated light that is not directed parallel to the optical axis will not focus to a unique point (Figure 11).
Thorlabs recommends against directing collimated light along the focal axis of OAP mirrors, or along any direction that is not parallel to the optical axis, since the light will not focus to a diffraction-limited spot.
Collimate Light from a Point Source To obtain highly collimated light from a point source, the point source should be located at the mirror's focal point.
Light from a point source will be poorly collimated if the point source is placed along the OAP mirror's optical axis, or anywhere else that is not the focal point.
An OAP mirror can also be used to collimate a spherical wave, if its origin coincides with the focal point of the mirror.
Figure 13: The orientation of the optical axis can be found by noting it is perpendicular to the base of the mirror's substrate. The location of the focal point can be estimated by considering collimated light rays that are directed parallel to the optical axis. These rays reflect symmetrically around the local surface normals and pass through the mirror's focal point.
Figure 12: OAP mirrors have a flat, round base and a side that varies in height around the circumference. The planar base is normal to the mirror's optical axis. Shown above is the MPD2151-P01.
When working with off-axis parabolic (OAP) mirrors, it can be challenging to identify the optical and focal axes. This is particularly true when the parabolic curvature of the surface is hard to see (Figure 12).
The physical characteristics and dimensions of the mirror's substrate can provide a useful guide when positioning and aligning the mirror.
The mirror's substrate has a flat, round base. The optical axis is oriented normal to this planar base. Therefore, collimated light should be directed normal to the surface of the base.
The substrate has a tall side and a short side, and the reflective surface is sloped between them. The surface normal at different points across the reflector can be roughly estimated by visually examining the surface (Figure 13).
The location of the focal point can be estimated by considering a ray of collimated light, parallel to the optical axis, that reflects from the surface of the mirror. The incident ray reflects symmetrically about the surface normal. The reflected ray will pass through the focal point. By mentally tracing two rays from positions close to the tall and short sides of the mirror, respectively, it should be possible to estimate the location of the focal point.
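The mental ray trace described above can be checked numerically. This sketch (with an arbitrary assumed parent focal length) reflects axis-parallel rays from a parabolic surface and shows that they all cross the axis at the focal point, regardless of where they strike the mirror.

```python
import math

f = 25.0  # assumed parent focal length; parabola z = r**2 / (4 * f), focus at (0, f)

def reflect_to_axis_crossing(r):
    """Trace a ray travelling in -z, parallel to the optical axis, that hits the
    parabola at radial position r. Return where the reflected ray crosses r = 0."""
    z = r**2 / (4 * f)
    # Surface slope dz/dr = r / (2f); surface normal direction (-slope, 1), normalized
    slope = r / (2 * f)
    nr, nz = -slope, 1.0
    norm = math.hypot(nr, nz)
    nr, nz = nr / norm, nz / norm
    # Incident direction (0, -1); reflect symmetrically: d' = d - 2 (d . n) n
    dr, dz = 0.0, -1.0
    dot = dr * nr + dz * nz
    dr, dz = dr - 2 * dot * nr, dz - 2 * dot * nz
    # Parameter t at which the reflected ray reaches r = 0
    t = -r / dr
    return z + t * dz

# Rays near the "short" and "tall" sides of the mirror both cross the axis at z = f
print(reflect_to_axis_crossing(5.0), reflect_to_axis_crossing(40.0))
```

Tracing two such rays from opposite sides of the reflector, as suggested in the text, brackets the focal point between them.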
Mounting and Alignment Features on Thorlabs' OAP Mirrors Thorlabs' OAP mirrors have an alignment hole and three tapped mounting holes machined into the bottom surface of their bases. The pattern of tapped holes matches the vertices of an equilateral triangle, and the position of the smooth-bore alignment hole indicates the short side of the OAP mirror. The tapped holes are designed to secure the mirror to mounting adapters or mounting platforms.
Figure 15: A pair of OAP mirrors can be used to couple light out of one fiber and into another. This provides access to the beam when it is necessary to insert bulk optics into the optical path. Due to the small dimension of the fiber core, light emitted from the fiber end face is similar to a point source.
Figure 14: A pair of OAP mirrors can be used in imaging applications, and/or to relay a beam across a distance.
Relay an Image A single OAP mirror is not recommended for finite conjugate imaging applications, when neither light beam is collimated, but a pair of OAP mirrors can successfully be used for this purpose. An example setup is illustrated in Figure 14.
The dual OAP configuration is flexible: the leg of collimated light between the mirrors is a convenient place to insert filters and other optical elements, and the distance between the two mirrors can be adjusted to move the focal point across the source and/or target planes without disturbing the alignment of the system.
Provide Access to the Beam in a Fiber Network A pair of OAP mirrors can be used to create a free-space leg in an optical fiber system, which is one way to provide access to the light beam. The illustration in Figure 15 shows an example of this configuration, which can be useful when filters or other bulk optics need to be inserted into the beam path. The length of the free-space leg can be adjusted without disturbing alignment.
When setting up this system, the fibers' end faces must be aligned so that their cores coincide with the source and target focal points, respectively. The collimated beam paths of both mirrors should be collinear and completely overlapping.
Figure 16: The shape of the OAP mirror's reflective profile matches a section of the parent parabola that is not centered on the focal point. Due to this, the OAP's reflective surface is not rotationally symmetric. When mounting the mirror, care should be taken to ensure the mirror does not rotate around its optical axis.
OAP mirrors are not rotationally symmetric. This is a consequence of their reflective surfaces being taken from sections of the parent parabola curve located away from the focal point (Figure 16). Due to the asymmetry of the reflector, when an OAP mirror rotates, the position of its focal point also rotates. Since this could negatively impact the performance of an optical system, the mirror should be fixed so that the reflective surface cannot rotate around its optical axis.
The optical performance of the mirror is also sensitive to alignment drift with respect to the other five degrees of freedom. One way to protect against alignment drift is to use a fixed, rather than a kinematic, mount.
Using a shear plate interferometer can be helpful when aligning an OAP mirror to an input point source. The shear plate interferometer should intercept the output beam (Figure 17), to assess its collimation quality. Alignment is optimized when the quality of the collimated beam is optimized.
Figure 19: The reflective element of the collimator is an off-axis parabolic mirror. The mirror's substrate is highlighted in red. The shape of the reflective surface is a segment of the parabolic curve displaced from the vertex. The focal points of the parent parabola and the OAP mirror coincide.
Figure 18: Thorlabs offers reflective collimators that include a port for an optical fiber connector and a port for free space, collimated light that propagates parallel to the optical axis.
The two ports on Thorlabs' reflective collimators are not interchangeable. One port accepts an optical fiber connector and requires the highly divergent light of a point source. The other port is designed solely for collimated, free-space light (Figure 18).
Free Space Port Light input to this port should be collimated and directed parallel to the optical axis. Diverging light from a fiber end face, a laser diode, or another source should not be input, since it would not be focused onto the end face of a fiber connected to the fiber connector port and therefore would not couple into that fiber.
Optical Fiber Connector Port This port aligns the fiber's end face with the focal point of the mirror. Since the fiber's end face approximates a point source placed at the focal point, a collimated beam is output from the free-space port. The alignment of the fiber end face with the focal point is also the reason that all light input to the free-space light port should be collimated and directed parallel to the optical axis.
Source of Directionality The collimator's directionality is a consequence of using a non-rotationally symmetric, off-axis parabolic (OAP) mirror as the reflective element (Figure 19). The cut-away view illustrates that the fiber's end face is positioned at the focal point of the parent parabola, which is also the focal point of the OAP mirror.
Figure 2: Typical absorption coefficients and penetration depths for silicon, germanium, and indium gallium arsenide (In0.53Ga0.47As) are plotted. The penetration depth is the reciprocal of the absorption coefficient.
Figure 1: Different wavelengths of light have different average penetration depths into the PN-junction based detector. The penetration depth is related to the wavelength-dependent absorption coefficient (Figure 2).
When light is incident upon a photodiode, the photons that do not reflect due to the Fresnel reflection from the air / semiconductor interface will travel through the semiconductor material.
A photon will continue to travel until it is absorbed or it exits the far side of the material. When a photon is absorbed, a charge carrier pair (an electron and a hole) is generated.
Charge carriers generated within the depletion region can contribute almost immediately to photocurrent. However, carriers generated outside of the depletion region must take the extra step of traveling to the depletion region. The duration of this travel is the diffusion time. In Figure 1, the blue and red photons generate carriers in the P-type and N-type regions, respectively. These must diffuse to the depletion region.
The probability of a photon being absorbed once it enters the semiconductor is based on the absorption coefficient. The wavelength-dependent absorption coefficients and penetration depths for various detector materials are shown in Figure 2.
As the incident wavelength increases, the absorption coefficient decreases. This means a longer-wavelength photon can travel a longer average distance within the semiconductor before being absorbed and generating a charge carrier pair. The greater the distance a charge carrier needs to travel to reach the depletion region, the longer the rise time.
Figures 3 through 5 show the measured rise times for a selection of silicon, InGaAs, and germanium photodiodes. In the silicon plot, the slopes of the curves are nearly flat for wavelengths <800 nm. This suggests that the diffusion time for photons absorbed near the surface is negligible. Beyond 800 nm, the rise time increases exponentially. Since the penetration depth for silicon at 800 nm is 9 µm (Figure 2), this suggests that the distance from the top of the sensor to the bottom of the depletion region is less than 9 µm.
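The penetration depth is simply the reciprocal of the absorption coefficient, and absorption along the photon's path follows the Beer-Lambert law. The sketch below illustrates these relations; the absorption coefficient used is an arbitrary illustrative value, not a measured property of silicon or any other material.

```python
import math

def penetration_depth_um(alpha_per_cm):
    """Penetration depth (1/e absorption length) in micrometers,
    given an absorption coefficient in 1/cm."""
    return 1e4 / alpha_per_cm

def fraction_absorbed(alpha_per_cm, depth_um):
    """Beer-Lambert fraction of the entering light absorbed within depth_um."""
    return 1 - math.exp(-alpha_per_cm * depth_um * 1e-4)

# Illustrative value only: an absorption coefficient of 1e3 /cm corresponds
# to a 10 um penetration depth (real coefficients are wavelength dependent)
alpha = 1.0e3
print(penetration_depth_um(alpha))     # 10.0
print(fraction_absorbed(alpha, 10.0))  # ~0.632, i.e. 1 - 1/e within one depth
```

This makes the rise-time trend concrete: as the wavelength increases and the absorption coefficient drops, a larger fraction of photons is absorbed deep in the material, beyond the depletion region, where the resulting carriers must diffuse before contributing to photocurrent.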
Does PM fiber preserve every input polarization state?
Polarization maintaining (PM) fiber only preserves the polarization state of input light that is both linearly polarized and polarized parallel to one of the fiber's two orthogonal axes. The orientation of the linearly polarized light input to the PM fiber matters, since the refractive indices of its two orthogonal axes are different. Light polarized along the high-index direction (slow axis) travels more slowly than light polarized along the orthogonal direction (fast axis).
If the input polarization state does not meet these criteria, the light output from the fiber will be elliptically polarized. However, the elliptical polarization state cannot be predicted and is not stable, since it depends on the fluctuating temperature and stress conditions over the length of the fiber.
Figure 1: Polarimeter measurements of light output by a PM fiber patch cable are plotted on a Poincaré sphere. The points indicated by the arrows result when there is optimal alignment between the linearly polarized input and one of the fiber's axes. These input states are preserved by the fiber. All other points correspond to the elliptically polarized output states resulting when the input light's polarization direction is not parallel with one of the fiber's axes.
PM Fibers Do Not Polarize Light A PM fiber does not behave like a linear polarizer and will not convert an arbitrary input polarization state into a linearly polarized output state.
A linear polarizer also has two orthogonal axes, but they do not play the same role as the slow and fast axes of a PM fiber. In a linear polarizer, light polarized parallel to one axis is attenuated, while light polarized parallel to the other is transmitted. Since only one polarization component is transmitted, the output light is linearly polarized.
Because a PM fiber transmits both orthogonal polarization components, instead of attenuating one, PM fiber cannot be used as a linear polarizer.
Comparison with Wave Plates Since PM fibers and wave plates both have fast and slow axes, they have a lot in common. If the polarization axis of a linearly polarized light beam is aligned parallel to either the slow or the fast axis, both PM fibers and wave plates will preserve that polarization state. However, if the input beam has components polarized along both slow and fast axes, neither a PM fiber nor a wave plate will preserve the input polarization state.
Both PM fibers and wave plates change the polarization state of a light beam by delaying the component of light polarized parallel to the slow axis more than the component polarized parallel to the fast axis. But, a PM fiber cannot be used to replace a wave plate, since the delay induced by the PM fiber fluctuates unpredictably as the temperature and stress applied over the length of the fiber changes.
Output Polarization States The polarimeter measurements plotted on the Poincaré sphere in Figure 1 illustrate the range of elliptically polarized output states a PM fiber patch cable can provide, when the input is a linearly polarized beam with arbitrary orientation to the fiber's axes. The polarimeter measurement of the output light has one of the two values indicated by the black arrows, when the fiber preserves the input polarization state. These values result when there is optimal alignment between the polarization direction of the input polarization state and one of the fiber's axes. All other points on the sphere indicate elliptical output polarization states occurring when the input polarization state is not aligned parallel to either fiber axis.
Each data trace in the figure was generated by rotating the polarization direction of the linearly polarized input light once around the optical axis. The traces do not overlap, since the temperature of the fiber was changed after every rotation. Each temperature change resulted in a different set of elliptically polarized output states, due to the fiber's temperature sensitivity. Note that each data trace crosses the points indicated by the arrows. This indicates that when the linearly polarized input state is well-aligned to one of the fiber's axes, the output polarization state is not sensitive to changes in temperature and applied stress.
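The behavior described above can be illustrated with a simple Jones-calculus sketch. Here the PM fiber is idealized as a single retarder with an unpredictable retardance (a crude model; a real fiber's retardance accumulates over its whole length). The function names are ours, not an established API.

```python
import cmath, math

def through_retarder(phi_deg, delta):
    """Jones vector of linearly polarized input light, oriented at phi_deg to the
    slow (x) axis, after a retarder that delays the slow axis by phase delta."""
    phi = math.radians(phi_deg)
    ex, ey = math.cos(phi), math.sin(phi)        # components on slow / fast axes
    return ex * cmath.exp(-1j * delta), ey + 0j  # slow component accumulates delay

def circular_fraction(jones):
    """Normalized S3 Stokes parameter: 0 for linear light, +/-1 for circular."""
    ex, ey = jones
    s0 = abs(ex) ** 2 + abs(ey) ** 2
    s3 = 2 * (ex.conjugate() * ey).imag
    return s3 / s0

# Input aligned with the slow axis: the output stays linear for ANY retardance
print(circular_fraction(through_retarder(0.0, 1.234)))   # 0.0
# Misaligned input: the output ellipticity depends on the retardance, which in a
# real fiber drifts with temperature and stress, so the output state is unstable
print(circular_fraction(through_retarder(45.0, 0.5)))
print(circular_fraction(through_retarder(45.0, 2.5)))
```

The aligned case reproduces the points marked by the arrows in Figure 1: the output is insensitive to the retardance. The misaligned case wanders over the sphere as the retardance fluctuates.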
Labels Used to Identify Perpendicular and Parallel Components
- s and p: Senkrecht (s) is German for 'perpendicular'; parallel begins with 'p'.
- TE and TM: transverse electric and transverse magnetic field, respectively. The transverse field is perpendicular to the plane of incidence. Note that the electric and magnetic fields are orthogonal.
- ⊥ and //: symbols for perpendicular and parallel, respectively.
- σ and π: the Greek letters corresponding to s and p, respectively.
- Sagittal: a sagittal plane is a longitudinal plane that divides a body.
Figure 1: Polarized light is often described as the vector sum of two components: one whose electric field oscillates in the plane of incidence (parallel), and one whose electric field oscillates perpendicular to the plane of incidence. Note that the oscillations of the electric field are also orthogonal to the beam's propagation direction.
When polarized light is incident on a surface, it is often described in terms of perpendicular and parallel components. These are orthogonal to each other and the direction in which the light is propagating (Figure 1).
Labels and symbols applied to the perpendicular and parallel components can make it difficult to determine which is which. The table identifies, for a variety of different sets, which label refers to the perpendicular component and which to the parallel.
The perpendicular and parallel directions are referenced to the plane of incidence, which is illustrated in Figure 1 for a beam reflecting from a surface. Together, the incident ray and the surface normal define the plane of incidence, and the incident and reflected rays are both contained in this plane. The perpendicular direction is normal to the plane of incidence, and the parallel direction is in the plane of incidence.
The electric fields of the perpendicular and parallel components oscillate in planes that are orthogonal to one another. The electric field of the perpendicular component oscillates in a plane perpendicular to the plane of incidence, while the electric field of the parallel component oscillates in the plane of incidence. The polarization of the light beam is the vector sum of the perpendicular and parallel components.
Normally Incident Light Since a plane of incidence cannot be defined for normally incident light, this approach cannot be used to unambiguously define perpendicular and parallel components of light. There is limited need to make the distinction, since under conditions of normal incidence the reflectivity is the same for all components of light.
Figure 3: As the electric field vector (E) propagates, the tip of the vector follows a helical path. In this case, propagation is along the z-axis, and the helicity of the path followed by the vector is positive (clockwise rotation).
Figure 4: If an observer looks into the beam propagating from the origin in Figure 3, the tip of the rotating electric field vector traces out an ellipse. The ellipse can be described in terms of angles ψ and χ. The equations in this figure abbreviate the phase (2πz/λ - ωt), where λ is the wavelength in the material, ω is the angular frequency, and t is time.
The polarization ellipse is a way to visualize the polarization state.
As a laser beam propagates, the tip of its electric field vector moves along a three dimensional path determined by the polarization state. If an observer looking into the beam could see the electric field advancing in real time, the vector's tip would appear to cycle around the propagation axis while following a two-dimensional, elliptical track.
The shape of this track is the polarization ellipse, which becomes a line for linearly polarized light and a circle for circularly polarized light.
Components of Light The electric field vector (E) can be described by its orthogonal components, Ex and Ey. Figure 2 illustrates a case of elliptically polarized light, in which the polarization is neither linear nor circular. The Ex and Ey components have different amplitudes, and the phase difference (δ) between the Ex and Ey components is not an integer multiple of π/2. The Ex and Ey components' values increase and decrease periodically, but they vary out of sync with one another and span different ranges.
If the orthogonal components were added together as vectors, the total field vector would rotate around the propagation axis as it traveled (Figure 3), and its length would vary with the rotation angle. Looking into the beam, perpendicular to the Ex - Ey plane, the tip of the vector would trace out the curve of the polarization ellipse (Figure 4).
Polarization Ellipse An observer looking into the beam will describe a different polarization ellipse than an observer facing the opposite direction. Due to this, it is necessary to specify the direction the observer faces. Here, the observer is assumed to be looking into the beam.
The polarization ellipse is bound by a rectangle whose sides are equal to twice the amplitudes, Eox and Eoy , of the Ex and Ey components, respectively. This rectangle provides information about the fraction of the light contained in each orthogonal component.
To determine the specific characteristics of the polarization ellipse corresponding to a polarization state, the phase delay between the Ex and Ey components must also be considered. Key characteristics of the ellipse providing polarization state information are the rotation of the major axis with respect to the Ex axis and the relative lengths of the minor and major axes.
The angle (ψ) between the major axis of the ellipse and the Ex axis is known by many names, including orientation angle, angle of inclination, rotation, tilt, and azimuth. It varies between -90° and 90°, and it is ±45° when Eox and Eoy have equal magnitudes.
The ellipticity of the polarization ellipse is the ratio (ε) between the lengths of the minor and major axes. Since the orientation is typically stated as an angle, it can be convenient to also express ellipticity as an angle (χ). The ellipticity ranges from zero (χ = 0°) for linearly polarized light, which occurs when δ = 0, to one (χ = 45°) for circularly polarized light, which occurs when δ = π/2 and the two components have equal amplitudes.
The tip of the electric field vector may rotate in a right-hand (clockwise) or left-hand (counterclockwise) direction as it propagates. This is known as the handedness or helicity of light; right-hand polarized light has positive helicity and left-hand polarized light has negative helicity. The direction can be determined using the values of the E vector at time zero (Et=0) and one quarter of a period (T) later (Et=T/4). If the cross product (Et=0 × Et=T/4) points in the direction of beam propagation, the rotation is counterclockwise (left-handed). If not, the rotation of the E-field vector is clockwise (right-handed).
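The quantities above follow from the standard relations tan 2ψ = 2EoxEoy cos δ / (Eox² − Eoy²) and sin 2χ = 2EoxEoy sin δ / (Eox² + Eoy²), and the handedness test is a direct cross product. A small sketch (function names are ours):

```python
import math

def ellipse_params(e0x, e0y, delta):
    """Orientation (psi) and ellipticity angle (chi), in degrees, of the ellipse
    traced by Ex = E0x*cos(phase) and Ey = E0y*cos(phase + delta)."""
    psi = 0.5 * math.atan2(2 * e0x * e0y * math.cos(delta), e0x**2 - e0y**2)
    chi = 0.5 * math.asin(2 * e0x * e0y * math.sin(delta) / (e0x**2 + e0y**2))
    return math.degrees(psi), math.degrees(chi)

def handedness_sign(e0x, e0y, delta):
    """z-component of E(t=0) x E(t=T/4) at z = 0, using the same phase
    convention (Ey leads Ex by delta). Positive: the cross product points
    along propagation, i.e. counterclockwise (left-handed) rotation."""
    ex0, ey0 = e0x, e0y * math.cos(delta)    # field components at t = 0
    ex1, ey1 = 0.0, e0y * math.sin(delta)    # components a quarter period later
    return ex0 * ey1 - ey0 * ex1

print(ellipse_params(1.0, 1.0, 0.0))          # linear light: chi = 0, tilt 45 deg
print(ellipse_params(1.0, 1.0, math.pi / 2))  # circular light: chi = 45 deg
print(handedness_sign(1.0, 1.0, math.pi / 2) > 0)
```

Equal amplitudes with δ = 0 give a 45° line, while δ = π/2 gives a circle, matching the limiting cases described in the text.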
Figure 5: The ellipticity and orientation of the polarization ellipse provide information about the phase shift (δ) between the Ex and Ey components of the electric field. The ellipses shown above result when the peak amplitudes of both components are the same. The direction of the E vector's rotation is indicated by the direction of the arrow on the polarization ellipse.
Figure 2: There are six possible sequences of reflections for a beam. The zone in which the first reflection occurs determines the sequence. These maps apply to beams approximately parallel with the retroreflector's normal axis. The beam paths are indicated by arrows, and dots mark reflections.
Figure 1: The three reflective faces of a corner-cube retroreflector are shown in false color and with numerical labels assigned to each half. Retroreflectors are designed to reflect an incident beam once from each face and provide an output beam parallel to the input.
Figure 4: Shifting the position of the first reflection to below the diagonal of the red face causes the next reflection to occur from the yellow face. After the third reflection, from the blue face, the beam exits the retroreflector travelling parallel to but shifted from the output beam in Figure 3.
Figure 3: When the first reflection occurs above the diagonal of the red face, and the beam is parallel to the retroreflector's normal axis, the second reflection occurs from the blue face. The beam then reflects from the yellow face before exiting the retroreflector.
Beams output from corner-cube retroreflectors travel parallel to the input beam, but in the opposite direction. The input beam can be aligned to the vertex or to a point on one of the three faces. The input and output beams are collinear if the input beam is aligned to the vertex. The two beams will be separated if the input beam spot does not overlap the vertex.
Input beams aligned to one of the retroreflector's faces will reflect from that face and then the other two before exiting the retroreflector. For a range of incident angles, there are six possibilities for the order in which the beam will reflect from the three different faces. It can be useful to select the path through the retroreflector for reasons that include optimal beam positioning and minimizing polarization effects.
For a beam to follow a particular sequence of reflections, it is not sufficient to align the beam so that it is incident on a specific face. The beam must also be incident on the proper half of that face.
Tracing the Beam Path When looking into the vertex of the retroreflector, reflective effects make it possible to see the six halves of the three faces. Here, they are identified using dashed diagonal lines (Figure 1). In addition, the three faces of the retroreflector are shaded with false color for illustrative purposes. The normal axis is not shown, but it passes through the vertex and is equidistant from all three faces.
Which of the six possible reflection sequences a beam follows can vary with angle of incidence. The maps in Figure 2 apply to beams nearly parallel with the normal axis. While a hollow retroreflector is used for these illustrations, these sequences of reflections also apply to prism retroreflecting mirrors.
The position of the first reflection determines which sequence of reflections the beam will follow through the retroreflector. The beam always exits from a different face than it entered.
Example Figures 3 and 4 illustrate the two orders of reflections that can occur when the first reflection occurs from the left-most vertical face. The incident beam is parallel to the retroreflector's normal axis.
When the first reflection occurs above the diagonal, as shown in Figure 3, the last reflection occurs from the horizontal (yellow) mirror. However, locating the first reflection below the diagonal results in a last reflection from the other vertical (blue) mirror. The output beams of these two cases are parallel to, but shifted from, one another.
Figure 6: Vertically polarized beams were input to a TIR solid prism retroreflector (PS975M) and a backside-gold-coated solid prism retroreflector (PS975M-M01B). The polarization ellipse of each output beam is shown in the zone that provided the beam's third reflection. For a plot of the ellipticity angle (χ) and orientation angle (ψ) with respect to the horizontal axis, click here.
Figure 5: Horizontally polarized beams were input to a TIR solid prism retroreflector (PS975M) and a backside-gold-coated solid prism retroreflector (PS975M-M01B). The polarization ellipse of each output beam is shown in the zone that provided the beam's third reflection. For a plot of the ellipticity angle (χ) and orientation angle (ψ) with respect to the horizontal axis, click here.
Figure 8: Retroreflectors convert some of the input light to the orthogonal polarization. Over 90% of the light output from the backside-gold-coated solid prism retroreflector (PS975M-M01B) remained polarized in the input state. In the case of the TIR solid prism retroreflector (PS975M), that percentage strongly depended on beam path and did not exceed 80%.
Figure 7: A retroreflector is designed to reflect an input beam once off of each face. When the beam is approximately normal to the viewing plane illustrated in Figures 5 and 6, the beam will follow one of six beam paths.
When the backsides of solid prism retroreflectors are coated with metal, polarization changes induced in the output beam are significantly reduced.
This is due to the difference between specular reflection, which occurs at the interface between the glass and the metal coating, and total internal reflection (TIR), which requires the material behind the back faces, such as air, to have a lower refractive index than the glass.
Compared with TIR, a specular reflection from a glass-metal interface better preserves the input beam's polarization ellipticity.
Polarization and Beam Path Diagrams Beam paths through a retroreflector can be described by dividing its three reflective faces into six wedge-shaped zones (Figures 5, 6 and 7). Solid gray boundary lines mark physical lines of contact between reflective faces. Dotted gray lines indicate boundaries between the halves of each face.
The retroreflectors in these figures are oriented with one face-to-face interface aligned with the vertical axis. When the input beam is normal to these figures' viewing planes, Figure 7 describes the order in which the input beam reflects from the three faces before being output.
Output Polarization State Two sets of six measurements were made for both a PS975M TIR solid prism retroreflector and a PS975M-M01B backside-gold-coated solid prism retroreflector. Input light was linearly polarized, vertically for one set of measurements and horizontally for the other. In a set, each measurement was taken with the beam aligned to a different zone. At all three reflections, the beam was confined within a single zone.
In Figures 5 and 6, the polarization states of the output beam are represented using polarization ellipses. Each output beam's polarization ellipse is shown in the zone that provided the third reflection.
Ideally, the output beam would have the same polarization state as the input beam. However, these measurements indicate the retroreflectors converted some of the incident light to the orthogonal polarization. The plot in Figure 8 is a measure of the fraction of light in the output beam that was polarized parallel to the input.
The backside-gold-coated solid prism retroreflector was significantly more successful in maintaining the polarization state of these linearly polarized input beams.
Figure 10: Since the refractive indices of glass and air are different, the beam reflects at the front face. Reflected light can make multiple passes through the retroreflector before being output. Coherent overlapping beams produce interference effects.
Figure 9: The beam path through a corner-cube retroreflector includes a reflection from each of the three back faces, in an order determined by the position of the incident beam. The incident beam shown above has a 0° AOI and is displaced from the vertex.
The beam power output by solid prism retroreflectors may oscillate around an average value as the angle of incidence (AOI) varies. This is due to a multiple-beam interference effect that can occur when the coherence length of the light source is at least twice the optical path length through the retroreflector.
When the front face of a solid retroreflector has an anti-reflective coating, oscillation amplitudes for all AOIs are substantially reduced. Hollow metal-coated retroreflectors provide output beams whose power is approximately independent of AOI.
Beam Path These corner-cube retroreflectors provide an output beam that travels in a direction parallel and opposite to the incident beam. Figure 9 shows one beam path.
The AOI is determined using a reference axis normal to the front face of the retroreflector. This axis passes through the vertex and is equidistant from the three back faces.
Reflections from the Front Face As illustrated in Figure 10, light can make multiple passes through a solid prism retroreflector, depending on whether the light reflects from or is transmitted through interfaces between the front face and the surrounding medium.
When a glass retroreflector is surrounded by air, ~96% of the light is in the primary output beam, which makes a single pass through the retroreflector, and ~0.16% is in the beam that completes an additional round trip. In this work, light making additional round trips had negligible intensity.
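The front-face reflection driving this effect is the normal-incidence Fresnel reflectance of an uncoated glass-air interface. A quick check, assuming n = 1.5 for the glass (an illustrative value, not a specification):

```python
# Normal-incidence Fresnel reflectance of an uncoated glass-air interface.
n_glass, n_air = 1.5, 1.0
R = ((n_glass - n_air) / (n_glass + n_air)) ** 2   # ~4% reflected
T = 1.0 - R                                        # ~96% transmitted
print(f"reflectance R = {R:.3f}, transmittance T = {T:.3f}")  # R = 0.040
```

Each additional round trip inside the prism requires one of these weak front-face reflections, which is why the multi-pass beams carry only a small fraction of the power.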
Conditions for Interference Since the output of solid prism retroreflectors consists of beams that have travelled different optical path lengths, they will interfere if:
The beams overlap, which is more likely when the AOI of the incident beam is near 0° and the output is measured closer to the retroreflector. At larger distances, the beam deviation specified for the retroreflector and the AOI will more widely separate the first- and third-pass beams.
The coherence length of the source is longer than the difference in path length between the primary beam and the overlapping beam that has made more than one pass through the retroreflector.
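The second condition can be checked numerically. The sketch below uses assumed, illustrative numbers: a one-way geometric path of ~50 mm through a 1" solid prism with n = 1.5, and a ~100 MHz linewidth typical of a DBR-type diode (consistent with the several-meter coherence length mentioned for the DBR1064S, but not a measured value).

```python
# Check whether the source coherence length exceeds the extra optical path
# travelled by the beam that makes one additional round trip in the prism.

n_glass = 1.5
path_one_way_m = 0.050                       # assumed geometric path, one pass
extra_path = 2 * n_glass * path_one_way_m    # extra optical path per round trip

c = 299_792_458.0                            # speed of light, m/s
delta_nu = 1e8                               # assumed ~100 MHz linewidth
coherence_length = c / delta_nu              # L_c = c / delta_nu

print(f"extra optical path: {extra_path:.3f} m")       # 0.150 m
print(f"coherence length:   {coherence_length:.1f} m") # 3.0 m
print("interference possible:", coherence_length > extra_path)  # True
```

With these numbers the coherence length exceeds the extra path by an order of magnitude, so the first- and third-pass beams interfere wherever they overlap spatially.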
Figure 12: Output power as a function of AOI differed depending on the type of corner-cube retroreflector. Data from measurements, made as described in Figure 11, were normalized to the same scale, and traces were vertically shifted as a visual aid. Oscillation amplitude was strongly suppressed when the front face was AR-coated (PS975-C). Oscillations were not observed for the hollow retroreflector (HRR201-M01).
Figure 11: The power output by a TIR solid prism retroreflector (PS975M) was measured as a function of AOI. The incident beam was provided by a DBR1064S 1064 nm laser source, whose coherence length was several meters. The largest-amplitude oscillations occurred near 0° AOI, where the first- and third-pass beams overlapped. The 1/e2 beam diameters did not overlap for AOIs larger than ±1° at a distance of 30 cm from the front face of the retroreflector.
Corner-Cube Retroreflectors Compared The variation of output power with small AOI was compared for four different types of corner-cube retroreflectors: a PS975M TIR solid prism retroreflector, a PS975M-M01B backside-gold-coated solid prism retroreflector, a PS975M-C TIR solid prism retroreflector with an antireflective-coated front face, and an HRR201-M01 hollow retroreflector. The input source was a DBR1064S 1064 nm laser diode with a coherence length of several meters, and the power detector was placed 30 cm from the front face of the retroreflectors. The beam size was small enough to ensure that each reflection occurred from a single face.
Figure 11 plots the normalized measurements made for the TIR solid prism retroreflector. As the AOI increased, the centers of the first- and third-pass beams shifted away from one another. At AOIs greater than around ±1°, the beams' 1/e2 diameters no longer overlapped. This resulted in the oscillation amplitude decreasing with AOI. The range of AOIs over which oscillations were significant would increase if the detector were located closer to the front face.
Figure 12 plots the trace from Figure 11, as well as traces measured for the other three retroreflectors, on the same scale but vertically shifted as a visual aid. These results indicate that an antireflective-coated front face suppresses power oscillations in beams output by solid prism retroreflectors. The power output by hollow retroreflectors does not oscillate, since there is no material boundary at the front face.
Figure 1: Visual C# and LabVIEW programs can be written to interrogate the DM713 Digital Micrometer. Examples are detailed in programming references available for download.
Programming references that provide introductions to communicating with the DM713 Digital Micrometer (Figure 1) are available. One reference has been developed for LabVIEW, and the other for Visual C#. Each reference includes a step-by-step discussion for writing the program, as well as a section that concisely provides the full program text without explanation.