When you are looking at a spec sheet for mass flow meters and mass flow controllers, the amount of information presented can be a little overwhelming, especially if you are not familiar with what all of the terminology means or are not sure which of the specs will have the most impact on what you are trying to accomplish.
Listed below are some of the more important mass flow meter and mass flow controller specs and what they mean in plain English.
Accuracy is a measurement of how closely an instrument's reading matches the actual flow at different flow rates.
Accuracy is generally measured in one of two ways: percentage of full scale flow or percentage of reading.
Error as a percentage of full scale is established by multiplying the error percentage by the full scale flow. Devices with error expressed as a percentage of full scale are most accurate when flowing at full scale; the less you flow through the device, the less accurate the reading will be relative to what you are actually flowing. For that reason, you don't want to get a larger device than you need.
Error expressed as a percentage of reading expresses error as a percentage of what the device is actually flowing. Simply, if an instrument's accuracy is rated to +/-1% of reading, the instrument will be accurate to +/-1% of whatever it is flowing. At 100 SLPM the instrument will be accurate to within +/-1 SLPM, and at 10 SLPM of flow the unit will be accurate to within +/-0.1 SLPM.
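The difference between the two accuracy conventions can be seen in a short sketch. This is a minimal illustration with hypothetical numbers (a 100 SLPM full-scale device rated at +/-1% in each convention), not any manufacturer's actual spec:

```python
def error_band_fs(full_scale, pct_fs):
    """Error band for accuracy rated as a percentage of full scale.
    The band is fixed, regardless of the actual flow."""
    return full_scale * pct_fs / 100.0

def error_band_reading(flow, pct_reading):
    """Error band for accuracy rated as a percentage of reading.
    The band shrinks as the actual flow shrinks."""
    return flow * pct_reading / 100.0

# Hypothetical 100 SLPM full-scale device, rated +/-1% either way:
for flow in (100.0, 10.0):
    fs = error_band_fs(100.0, 1.0)       # always +/-1 SLPM
    rd = error_band_reading(flow, 1.0)   # +/-1% of the actual flow
    print(f"{flow:5.1f} SLPM -> +/-{fs} SLPM (%FS), +/-{rd} SLPM (%reading)")
```

At full scale the two ratings give the same band; at 10 SLPM the %FS device is still only good to +/-1 SLPM while the %reading device is good to +/-0.1 SLPM.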
Accuracy, regardless of measurement method, is generally dependent on operating conditions. Operating conditions are usually defined as the pressure and temperature of the gas flowing through the instrument. Manufacturers will rate their instrument's error based on some predefined set of operating conditions, usually standard pressure and temperature. So, if your gas temperature and/or gas pressure do not meet the conditions specified by the manufacturer, the accuracy of your unit could be off by quite a bit. Some units, like Alicat's, are internally compensated, which means that sensors inside the device measure temperature and pressure conditions and make real time corrections for variations in gas conditions. Those real time corrections take a lot of the worry out of maintaining consistent process conditions.
Repeatability measures an instrument’s ability to repeat flow functions accurately.
A unit's repeatability is generally measured by monitoring a flow instrument's reading at a given flow rate, turning off the flow and allowing the instrument to return to zero for a given period of time, and then resuming the same flow. The instrument's repeatability is determined by examining the difference between the original flow reading and the flow reading after the flow has been turned off and resumed.
Simply, repeatability measures how repeatable an instrument’s reading will be at the same flow rate.
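One common way to express that spread is as a percentage of full scale. Here is a minimal sketch with made-up readings; real manufacturers may define and state repeatability differently, so treat this only as an illustration of the idea:

```python
def repeatability_pct_fs(readings, full_scale):
    """Repeatability expressed as the spread between repeated readings
    at the same flow setpoint, as a percentage of full scale."""
    return (max(readings) - min(readings)) / full_scale * 100.0

# Three hypothetical runs at a 50 SLPM setpoint on a 100 SLPM meter:
spread = repeatability_pct_fs([50.02, 49.97, 50.05], 100.0)
print(f"{spread:.2f} %FS")  # spread of the three runs, in %FS
```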
The turndown ratio of a flow instrument, such as a mass flow meter or MFC, is the ratio of full scale flow to the minimum flow the device can measure. Simply put, it tells you the smallest flow the device can measure. For example, if a given flow meter or MFC has a 100:1 turndown ratio, it is capable of measuring 1/100 of its full scale flow. So, if this flow meter or MFC has a full scale rating of 100 SLPM, it will measure down to 1 SLPM of flow. It is very important to remember that a mass flow meter or a mass flow controller has a rated accuracy based in whole, or in part, on the full scale flow of the unit. So, even though said flow meter or MFC can flow to 1/100 of its full scale range, you want to make absolutely certain that the accuracy at those low flow rates meets your requirements.
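The low-flow warning above can be made concrete with some quick arithmetic. This sketch assumes a hypothetical device with a %FS accuracy rating and shows how the error, viewed as a fraction of the actual reading, blows up at the bottom of the turndown range:

```python
def min_measurable_flow(full_scale, turndown):
    """Lowest flow resolvable given the turndown ratio (e.g. 100 for 100:1)."""
    return full_scale / turndown

def error_pct_of_reading(flow, full_scale, pct_fs):
    """With a %FS accuracy spec, the fixed error band becomes a larger
    and larger fraction of the reading as the flow drops."""
    return (full_scale * pct_fs / 100.0) / flow * 100.0

# Hypothetical 100 SLPM meter, 100:1 turndown, rated +/-1% of full scale:
print(min_measurable_flow(100.0, 100))          # minimum flow: 1.0 SLPM
print(error_pct_of_reading(100.0, 100.0, 1.0))  # at full scale: 1% of reading
print(error_pct_of_reading(1.0, 100.0, 1.0))    # at minimum: 100% of reading
```

In other words, the +/-1 SLPM error band that is perfectly acceptable at 100 SLPM swallows the entire reading at 1 SLPM.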
Warm up time
Warm up time measures the amount of time it takes for an instrument to become stable enough for use after power is applied. Thermal units tend to have the longest warm up times; some can take up to 30 minutes to become stable to within 2% FS. This is an important specification if you turn your unit off at the end of the day.
Pressure drop describes the loss of pressure as a fluid travels through a pipe or channel. If you blew into a mile long pipe, it's unlikely that anything would come out the other end. This is due to pressure drop. As the fluid flows through the pipe, friction with the pipe walls and between the fluid particles causes a loss of pressure. Pressure drop is approximately proportional to the distance the fluid travels. Every component that a gas comes in contact with (every fitting, every pipe wall, every bend, and so on) adds pressure drop.
Since pressure drop is a flow killer, you most likely want to make sure that every component in your system generates as little pressure drop as possible.
Zero Shift or Offset error
Zero shift or offset shift is defined as how far from zero an instrument's reading will move when pressure and/or temperature change. Offset error does not affect the slope of the calibration curve; the offset is the same throughout the flow range. Offset error is measured in %FS (or % of reading) per degree change in temperature (or per psi change in pressure). Simply, for every degree of temperature change or psi of pressure change, the calibration is offset by that percentage.
Span Shift or Span error
Span shift or span error is defined as a shift in the slope of the calibration curve while zero stays fixed. Because the slope changes, the calibration curve is affected differently at different points in the flow range. Span error is measured in %FS (or % of reading) per degree change in temperature (or per psi change in pressure). Simply, for every degree of temperature change or psi of pressure change, the reading shifts by that percentage.
Zero shift and span shift can also be referred to as 'temperature coefficients' or 'pressure coefficients' and are measured the same way. Be sure to pay attention to the units of measure, as some manufacturers state span or offset error per degree Fahrenheit or per single psi instead of per degree Celsius or per atmosphere of pressure.
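The difference between the two shifts can be sketched numerically. The coefficients below are hypothetical, chosen only to show that an offset shift is flat across the range while a span shift grows with the reading:

```python
def offset_error_fs(pct_fs_per_degC, delta_T):
    """Zero (offset) shift: a flat %FS error applied across the whole
    flow range for each degree C away from calibration conditions."""
    return pct_fs_per_degC * delta_T

def span_error_flow_units(pct_reading_per_degC, delta_T, flow):
    """Span shift: the slope of the calibration curve changes, so the
    error in flow units scales with the actual flow."""
    return flow * (pct_reading_per_degC * delta_T) / 100.0

# Hypothetical coefficients: 0.05 %FS/degC offset, 0.1 %reading/degC span,
# with the unit operated 10 degC away from its calibration temperature.
print(offset_error_fs(0.05, 10))                 # flat %FS shift, same at any flow
print(span_error_flow_units(0.1, 10, 50.0))      # span error in SLPM at 50 SLPM
print(span_error_flow_units(0.1, 10, 10.0))      # smaller span error at 10 SLPM
```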
Dead band
Dead band, as it relates to a pressure switch, is the band between the point at which the switch trips (the setpoint) and the point at which the switch resets.
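The trip-then-reset behavior is a simple hysteresis loop, and can be sketched as a toy switch model. The setpoint and dead band values here are made up for illustration:

```python
class PressureSwitch:
    """Toy pressure switch with a dead band: trips at `setpoint`, but
    does not reset until pressure falls back to `setpoint - dead_band`."""

    def __init__(self, setpoint, dead_band):
        self.setpoint = setpoint
        self.reset_point = setpoint - dead_band
        self.tripped = False

    def update(self, pressure):
        if not self.tripped and pressure >= self.setpoint:
            self.tripped = True          # pressure reached the setpoint
        elif self.tripped and pressure <= self.reset_point:
            self.tripped = False         # pressure fell below the dead band
        return self.tripped

# Trips at 100 psi, resets at 90 psi (10 psi dead band):
sw = PressureSwitch(setpoint=100.0, dead_band=10.0)
for p in (95, 101, 95, 89):
    print(p, sw.update(p))
```

Note the third reading: at 95 psi the switch stays tripped because the pressure is inside the dead band; without the dead band, a pressure hovering near the setpoint would make the switch chatter on and off.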