How To Calibrate Strain Gage-Based Transducers Using DI-5B38 Strain Amplifiers

The proper calibration of strain gage-based transducers can be a tricky exercise. You need to consider the gage factor of the transducer (GT), the gage factor of the amplifier module (GM), excitation voltage requirements, and gain. With this information at hand, you can accurately measure psi, microstrain, pounds, or any other quantity. The goal of this application note is to clarify the steps necessary to successfully calibrate any strain gage or strain gage-based transducer using DATAQ Instruments' model DI-5B38 strain gage input modules and WinDaq data acquisition software.

Understanding Strain Gages and Strain Gage-Based Transducers

A strain gage-based transducer is nothing more than a device that converts a physical quantity such as pounds, microstrain, or pressure into a voltage output signal (usually in millivolts). This signal varies according to the amount of the physical quantity being measured. As the load placed upon the gage changes, the gage flexes and its electrical resistance changes. This is similar to what occurs when a wire bends: its cross-sectional area changes, and as a result its resistance changes. The signal produced by the gage is picked up by an amplifier, amplified, and then sent to a computer for display and recording. Some common strain gage-based transducers are pressure transducers, strain gages, and load cells. These types of gages depend upon a proper excitation voltage in order to produce a signal output. We will treat excitation as our next topic.

The Role of Excitation

A transducer cannot output anything without an excitation voltage across it. The supplied excitation is normally a fixed DC voltage. With an excitation voltage applied, the transducer converts a physical quantity into a low-level signal, which can then be amplified for use in a data acquisition system. There is a limit to how much excitation voltage you can apply: each transducer has an acceptable excitation voltage specified by its manufacturer. It is important not to exceed that specification, since doing so can overheat the transducer and cause it to lose its accuracy. Therefore, it is good practice to keep within these limits. DATAQ Instruments' line of DI-5B Series amplifiers provides an excitation voltage of either 3.33V or 10V to interface to nearly all strain gage-based transducers.

What is "Gage Factor"?

Another name for gage factor is sensitivity. It is a unitless number. The sensitivity specification of the transducer reveals what voltage the transducer will output for a supplied excitation voltage, given some physical quantity applied to the transducer. Normally, strain gage-based transducers output a low-level voltage (in millivolts) per volt of excitation applied to them. The strain gage's output at any given moment will be directly proportional to the physical quantity applied to the transducer. Let's look at a Sensotec Model S pressure transducer as an example. It has a sensitivity of 2mV/V, requires a 5V excitation, and has a pressure range of 0 - 100 psi. We know from our discussion thus far that we need to supply an excitation voltage in order to get an output. Let's assume for now that we will supply this 5V from a power supply. It is easy to see by doing the math that this transducer will output 2mV/V × 5V = 10mV at full scale. Further, this 10mV output will represent 100 psi, since this is the full-scale limit which the transducer can measure. Dealing with gage factors is a relatively easy exercise if you keep in mind that a strain gage or strain gage-based transducer produces a voltage output that changes according to the quantity being measured.
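To make the arithmetic above concrete, here is a minimal Python sketch of the same calculation. It is only an illustration; the function and variable names are ours, and the numbers come from the Sensotec Model S example (2mV/V sensitivity, 5V excitation, 0 - 100 psi range).

```python
# Minimal sketch: full-scale output of the Sensotec Model S example above.
# Assumed values: 2 mV/V sensitivity, 5 V excitation, 0 - 100 psi range.

def full_scale_output_mv(sensitivity_mv_per_v, excitation_v):
    """Full-scale transducer output in millivolts."""
    return sensitivity_mv_per_v * excitation_v

sensitivity = 2.0        # mV/V (transducer gage factor)
excitation = 5.0         # V    (manufacturer-specified excitation)
full_scale_psi = 100.0   # psi  (transducer range)

output_mv = full_scale_output_mv(sensitivity, excitation)  # 2 mV/V x 5 V = 10 mV
psi_per_mv = full_scale_psi / output_mv                     # 10 psi per millivolt of output

print(f"Full-scale output: {output_mv} mV ({psi_per_mv} psi per mV)")
```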

Now that we have covered some basics about sensitivity, let's talk a little about what we refer to as "ideal" and "real" gage factors. You might be wondering at this point why you should concern yourself with both of these quantities. This can best be explained in terms of knowing why both exist. You will find in catalogs and other resources that gage factors are expressed in nice, clean numbers such as 2.00, 3.00, etc. This ideal number serves well as a starting point for selecting your transducer from a catalog, but it does not take into account the small differences which occur from one transducer to another. When you buy a battery from the store, it may be marketed as a 9V battery. However, if you were to put a battery tester on it, it might read 9.1V or some other voltage close to 9V. Although we may not need a 9V battery to be exactly 9V at any given moment, this illustrates just how different two batteries may be from one another. Likewise, it is simply not possible to make one transducer identical to another. Thus, each transducer is shipped with a more accurate value determined by the manufacturer. This is the "real" value which you must use to perform an accurate calibration, thus ensuring a more accurate measurement. You may see a strain gage-based transducer gage factor along these lines: 2.14mV/V, 2.07mV/V, and so forth. These numbers do not cause any difficulty with a simple calculator. Let us consider for a moment just how much error is introduced if we were to ignore the "real" gage factor. Let's use 2.14mV/V as a "real" gage factor and compare it to 2mV/V, which will serve as our "ideal" gage factor. If we now do the math: (2.14 - 2.00)/2.00 = 0.07 = 7% error. Simply put, we can avoid this potential error by using the "real" gage factor.
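The same error calculation can be expressed as a quick sketch (the function name here is ours, purely for illustration):

```python
# Percent error from calibrating with an "ideal" gage factor instead of the
# "real" one supplied with the transducer.

def gage_factor_error_pct(real_gf, ideal_gf):
    return (real_gf - ideal_gf) / ideal_gf * 100.0

print(f"{gage_factor_error_pct(2.14, 2.00):.1f}% error")  # -> 7.0% error
```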

Matching an Amplifier to a Strain Gage or Strain Gage-Based Transducer

Matching an amplifier to your strain gage or strain gage-based transducer requires a little knowledge about how the two relate in terms of their gage factors. Ideally, we want the gage factor of the strain gage-based transducer to be equal to the gage factor of the amplifier. In that case, we do not have to apply gain, and calibration will be set to the plus and minus full-scale range of the transducer. However, you will likely encounter a mismatch between the two, so let's cover the possibilities. If the gage factor of the transducer is less than that of the amplifier, you can follow a short procedure to determine the full-scale limit and select the appropriate gain; this procedure is covered in the table below. If the gage factor of the transducer is more than that of the amplifier, the transducer will be underutilized because the amplifier cannot amplify the entire range the transducer is capable of. The same procedure can also be applied in this situation.

Calibrate Any Strain Gage-Based Transducer in 3 Easy Steps

 
Definitions | Application Example 1 | Application Example 2
GT = Transducer gage factor | GT = 2mV/V | GT = 2.14mV/V
GM = Amplifier gage factor | GM = 3mV/V | GM = 3mV/V
FT = Full scale of transducer | FT = 100 psi | FT = 30,000 microstrain
R = Gain ratio = GT/GM | |
Procedure | |
1. Determine R | R = 2/3 = 0.667 | R = 2.14/3.00 = 0.7133
2. Determine +Full Scale = FT/R * | +Full Scale = 100/0.667 = 150 psi | +Full Scale = 30,000/0.7133 = 42,058 microstrain
3. Determine a baseline calibration value. †‡ | Here the transducer outputs 0 volts at rest. ‡ | There is no strain at rest; set low calibration to 0. ‡

* In WinDaq/Pro or WinDaq/Pro+, enter this value as the +Full Scale quantity in the Fixed Calibration dialog box (accessed from the Edit menu).

† In WinDaq/Pro or WinDaq/Pro+, enter this value as the Baseline quantity in the Fixed Calibration dialog box (accessed from the Edit menu).

‡ Unbalanced transducers (those with a voltage offset at calibrated zero) may be forced to zero with WinDaq/Pro or WinDaq/Pro+ software by selecting Low Calibration... from the Edit menu, entering zero in the Low Cal Value text box, and clicking OK.
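The three steps in the table reduce to two divisions. The sketch below, using our own function name and the numbers from Application Example 1, is only meant to illustrate the arithmetic; the resulting +Full Scale and baseline values are still entered by hand into WinDaq's Fixed Calibration dialog as described in the footnotes above.

```python
# Sketch of the three-step procedure from the table above,
# using Application Example 1 (GT = 2 mV/V, GM = 3 mV/V, FT = 100 psi).

def calibration_values(gt, gm, ft, baseline=0.0):
    """Return (gain ratio R, +Full Scale, baseline) for the Fixed Calibration dialog."""
    r = gt / gm               # Step 1: R = GT / GM
    plus_full_scale = ft / r  # Step 2: +Full Scale = FT / R
    return r, plus_full_scale, baseline  # Step 3: baseline (0 for a balanced transducer at rest)

r, plus_fs, base = calibration_values(gt=2.0, gm=3.0, ft=100.0)
print(f"R = {r:.3f}, +Full Scale = {plus_fs:.0f} psi, baseline = {base}")
# -> R = 0.667, +Full Scale = 150 psi, baseline = 0.0
```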