ChromQuest 5.0.3.2.1 Reference Guide
Chromatography Data System
Reference Guide
Surveyor® is a registered trademark and ChromQuest is a trademark of Thermo Fisher Scientific. Microsoft®
and Windows® are registered trademarks of Microsoft Corporation. All other trademarks are the property of
Thermo Fisher Scientific and its subsidiaries.
Thermo Fisher Scientific Inc. provides this document to its customers with a product purchase to use in the
product operation. This document is copyright protected and any reproduction of the whole or any part of this
document is strictly prohibited, except with the written authorization of Thermo Fisher Scientific Inc.
The contents of this document are subject to change without notice. All technical information in this
document is for reference purposes only. System configurations and specifications in this document supersede
all previous information received by the purchaser.
Thermo Fisher Scientific Inc. makes no representations that this document is complete, accurate or error-
free and assumes no responsibility and will not be liable for any errors, omissions, damage or loss that might
result from any use of this document, even if the information in the document is followed properly.
This document is not part of any sales contract between Thermo Fisher Scientific Inc. and a purchaser. This
document shall in no way govern or modify any Terms and Conditions of Sale, which Terms and Conditions of
Sale shall govern all conflicting information between the two documents.
For Research Use Only. Not regulated for medical or veterinary diagnostic use by the U.S. Food and
Drug Administration or other competent authorities.
Contents
Preface .......................................................... ix
    Related Documentation ........................................ ix
    New Features ................................................. ix
    Safety and Special Notices ................................... x
    Contacting Us ................................................ xi
ODBC Enabled ..................................................... 74
    File Extensions Used for Data Export ......................... 77
Automatic Export to Microsoft Excel .............................. 79
    Select Parameters to Export .................................. 79
    Set up Excel Export User Program ............................. 80
ODBC Export to Microsoft Access .................................. 82
Graphics Export .................................................. 91
Index ............................................................ 261
Preface
Welcome to ChromQuest™ 5.0. The ChromQuest chromatography data system is a member
of the Thermo Scientific family of LC data systems.
Whether you are a new ChromQuest user or are upgrading from a previous version of
ChromQuest, we think you will find the features of ChromQuest 5.0 both powerful and well
organized.
This ChromQuest Chromatography Data System Reference Guide contains technical details,
such as how integration is performed and which equations are used in the calculations.
For instructions on how to use the ChromQuest chromatography data system to acquire and
process data, refer to the ChromQuest User Guide.
Related Documentation
In addition to this guide, Thermo Fisher Scientific provides the following documents for the
ChromQuest chromatography data system:
• ChromQuest 5.0 Installation Guide (PDF on the ChromQuest CD)
• ChromQuest 5.0 Administrator Guide (PDF on the ChromQuest CD)
• ChromQuest 5.0 User Guide (PDF on the ChromQuest CD)
• ChromQuest 5.0 Quick Reference Guide (laminated card)
New Features
The ChromQuest 5.0 chromatography data system provides these new features:
• A navigation bar has been added to the Instrument window.
• The Move Baseline Start and Stop integration timed events have been replaced by the
Move Baseline integration timed event. See “Move Baseline” on page 25.
• A custom peak parameter has been added to the sequence table.
• A new Graphics Export function has been added. See “Graphics Export” on page 91.
Contacting Us
There are several ways to contact Thermo Fisher Scientific for the information you need.
Phone 800-685-9535
Fax 561-688-8736
E-mail [email protected]
Knowledge base www.thermokb.com
Find software updates and utilities to download at www.mssupport.thermo.com.
Phone 800-532-4752
Fax 561-688-8731
Web site www.thermo.com/ms
Go to mssupport.thermo.com and click Customer Manuals in the left margin of the window.
There are two ways to add an integration timed event to a method: by manually adding the
event to the Integration Timed Event Table, or graphically by clicking on the chromatogram.
For details on how to add an event to your method, refer to Chapter 3: Method
Development of the ChromQuest Chromatography Data System User Guide.
Contents
• Integration Timed Events Tables
• Required Integration Timed Events
• Optional Integration Events
• Baseline Code Descriptions
If you select the Add to Table command, the timed event is inserted in either the Integration
Events Table, or the Manual Integration Fixes Table, depending on which of these is
selected.
The Integration Events Table contains all current Integration Timed Events for the current
method channel. You can add an event manually by selecting the event from the drop-down
list in the Events field, entering an appropriate Start and Stop time and a Value for the event
(if required). To remove an event’s effect from an analysis, yet keep the event in the table,
click the check mark next to the event. Only events with a red check mark have an effect on
subsequent analyses.
To remove an event entirely from the table, click the row number of the event, followed by
the Delete key.
Right-clicking anywhere in the table opens a menu of commands for manipulating cells and
rows in the spreadsheet. The Insert Paste command inserts a line at the same time that you
paste to the location. The Insert Line command simply inserts a blank line at the cursor
location.
To view the Manual Integration Fixes Table, click the Manual Integration Fixes Table
button, or choose the Data > Manual Integration Fixes command from the menu bar.
Figure 3. Manual Integration Fixes table
The Manual Integration Fixes Table contains all current Manual Integration Fixes for the
current data file. You can add an event manually by selecting the event from the drop-down
list in the Events field, entering an appropriate Start and Stop time and a Value for the event
(if required). To remove an event’s effect from an analysis, yet keep the event in the table,
click the check mark next to the event. Only events with a red check mark have an effect on
subsequent analyses.
To remove an event entirely from the table, click the row number of the event, followed by
the Delete key.
Right-clicking anywhere in the table opens a menu of commands for manipulating cells and
rows in the spreadsheet. The Insert Paste command inserts a line at the same time that you
paste to the location. The Insert Line command simply inserts a blank line at the cursor
location.
When you select an integration event, a blue dialog box appears with instructions for using
the event. The same instructions appear in the Status bar.
Figure 4. Move Baseline instructions
2. In the Toolbar options area, select Int Event, and then clear the Show toolbar check box.
3. Click OK to accept the change and close the dialog box.
2. In the Tooltips options area, clear the Show graphical programming tooltips check box.
3. Click OK to accept the change and close the dialog box.
Width
The Width event is used to calculate a value for bunching, or smoothing, the data points
before the integration algorithm is applied. Integration works best when there are 20 points
across a peak. If a peak is oversampled (that is, the sampling frequency was too high), the
Width parameter is used to average the data so that the integration algorithm sees only
20 points across the peak. When setting a Width value graphically, use the narrowest peak in
the chromatogram.
The Width parameter is only used to correct for oversampling. It cannot correct for data that
was undersampled (that is, a sampling frequency too low, causing fewer than 20 points to be
acquired across the narrowest peak).
A Width event is applied to a given peak as long as it occurs before or on the apex of the peak.
Note In most circumstances, an initial Width value based on the narrowest peak in the
chromatogram is adequate for proper integration of all peaks. However, a new Width
timed event should be entered every time a peak width doubles.
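The bunching behavior described above can be sketched as follows. This is an illustrative approximation only, not the ChromQuest algorithm: the function names, the target of 20 points, and the integer bunch-factor rule are assumptions.

```python
# Illustrative sketch of data bunching for the Width event (not ChromQuest code).
# Groups of raw points are averaged so that roughly 20 points span the
# narrowest peak; the bunch-factor rule below is an assumption.

def bunch_factor(peak_width_s: float, sampling_hz: float, target_points: int = 20) -> int:
    """Number of raw points to average so ~target_points span the narrowest peak."""
    points_across_peak = peak_width_s * sampling_hz
    return max(1, int(points_across_peak // target_points))

def bunch(signal, factor):
    """Average each group of `factor` consecutive points."""
    return [sum(signal[i:i + factor]) / len(signal[i:i + factor])
            for i in range(0, len(signal), factor)]

# A 4 s wide peak sampled at 100 Hz gives 400 points across the peak;
# a bunch factor of 20 reduces that to the target of 20 points.
factor = bunch_factor(peak_width_s=4.0, sampling_hz=100.0)
print(factor)  # 20
```

Note that, as the manual states, bunching can only reduce an oversampled point count; no averaging scheme can create points that were never acquired.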
Threshold
This parameter is the first derivative, used to allow the integration algorithm to distinguish
the start and stop of peaks from baseline noise and drift. When setting the Threshold value
graphically, you select a section of baseline. The recommended Threshold value is based on
the highest first derivative value determined in that section of the chromatogram.
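The recommendation above (highest first derivative over a selected baseline section) can be sketched like this. The function name and the finite-difference estimate are assumptions for illustration, not ChromQuest's implementation.

```python
# Hedged sketch: estimate a Threshold as the largest absolute first
# derivative over a user-selected baseline region (finite differences).

def recommended_threshold(baseline, dt):
    """Largest |dy/dt| over a baseline region, via finite differences."""
    derivs = [abs(baseline[i + 1] - baseline[i]) / dt
              for i in range(len(baseline) - 1)]
    return max(derivs)

# Example: a short stretch of baseline noise sampled every 0.1 min.
noise = [0.0, 0.02, -0.01, 0.03, 0.01]
threshold = recommended_threshold(noise, dt=0.1)
```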
The diagram below shows examples of how incorrect values for peak Width and Threshold
can affect the peak baseline.
Figure 7. Required integration events: Peak Width and Threshold
Note that extreme values of both Width and Threshold (too large or too small) can result in
peaks not being detected.
Shoulder Sensitivity
This parameter is used to enable the detection of shoulders on larger peaks. A larger value
decreases shoulder sensitivity, while smaller values increase sensitivity to shoulder peaks.
When setting the Shoulder Sensitivity value graphically, you select a section of the baseline.
The recommended Shoulder Sensitivity value is based on the highest second derivative value
determined in that section of the chromatogram.
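The second-derivative rule described above can be sketched in the same spirit as the Threshold estimate. The central-difference formula and function name are assumptions, not ChromQuest's API.

```python
# Hedged sketch: the recommended Shoulder Sensitivity is described as the
# highest second derivative over a baseline section; central differences
# are one common way to estimate it.

def recommended_shoulder_sensitivity(baseline, dt):
    """Largest |d2y/dt2| over a baseline region (central differences)."""
    return max(abs(baseline[i - 1] - 2 * baseline[i] + baseline[i + 1]) / dt ** 2
               for i in range(1, len(baseline) - 1))
```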
Figure 8. Shoulder Sensitivity dialog box
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze now to add the event to the
table and analyze the chromatogram using the event.
Figure 9. Shoulder Sensitivity events
Shoulder Sensitivity
value set too high
Shoulder Sensitivity
value set correctly
Integration Off
This event turns off the integration of your chromatogram during the range specified. This
event is useful if you are not interested in certain areas of your chromatogram, and do not
wish peaks to be reported for that section.
Figure 10. Integration Off dialog box
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
The following example chromatograms illustrate the effect of using the Integration Off event
to turn integration off between 0 and 5 minutes.
Figure 11. Integration Off events
Default integration
Valley to Valley
This event causes the baselines of peaks that are not totally resolved (that is, do not return to
baseline) to be drawn to the minimum point between the peaks. If this event is not used, a
baseline is projected to the next point at which the chromatogram returns to baseline, and a
perpendicular is dropped for peaks that do not reach baseline.
Figure 12. Valley to Valley dialog box
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
Figure 13. Valley to Valley events
Default integration
Integration with
Valley to Valley event
Horizontal Baseline
This event allows you to project the baseline forward horizontally between the times specified
for the event.
Figure 14. Horizontal Baseline dialog box
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
Figure 15. Horizontal Baseline events
Integration without
Horizontal Baseline event
Integration with
Horizontal Baseline between
1.8 and 3.6 minutes
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
Figure 17. Backward Horizontal Baseline events
Default integration
Integration after
Backward Horizontal Baseline
event
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
Figure 19. Lowest Point Horizontal Baseline events
Integration before using
Lowest Point Horizontal Baseline
event
Tangent Skim
This event is used to integrate a small peak located on the tailing edge of a larger peak. The
baseline of the small peak becomes a tangent drawn from the valley of the larger peak to the
tangent point on the chromatogram.
Figure 20. Tangent Skim dialog box
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
Figure 21. Tangent Skim events
Integration without
Tangent Skim event
Integration with
Tangent Skim event
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
Figure 23. Front Tangent Skim events
Before
Front Tangent Skim event
After
Front Tangent Skim event
Minimum Area
This event allows you to enter an area limit for peak detection. Peaks whose areas fall below
this minimum are not integrated or reported as peaks. This event is useful for eliminating
noise or contaminant peaks from your report.
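A minimal sketch of this filtering behavior follows. The peak representation (a dict with an "area" key) is an assumption for illustration; ChromQuest's internal peak structures are not documented here.

```python
# Illustrative sketch of the Minimum Area event: peaks with areas below
# the limit are dropped from the report. Peak representation is assumed.

def apply_minimum_area(peaks, min_area):
    """Keep only peaks whose area meets or exceeds the limit."""
    return [p for p in peaks if p["area"] >= min_area]

peaks = [{"name": "noise", "area": 12.0}, {"name": "analyte", "area": 8500.0}]
print(apply_minimum_area(peaks, min_area=100.0))  # only the analyte remains
```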
Figure 24. Minimum Area dialog box
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
Figure 25. Minimum Area events
Integration without
Minimum Area event
Integration with
Minimum Area event
Negative Peak
This event causes portions of the chromatogram that drop below the baseline to be integrated
using the normal peak logic and reported as true peaks. This event is useful when using
detectors such as Refractive Index types that give a negative response to certain compounds.
Figure 26. Negative Peak dialog box
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
Figure 27. Negative Peak events
Default integration
Integration with
Negative Peak event
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel
to ignore the timed event and cancel the operation. Click Analyze Now to add the event to
the table and analyze the chromatogram using the event.
Figure 29. Disable End Peak Detection events
Default integration
Reassign Peak
This event allows you to graphically designate a different peak as the calibrated peak in place
of the peak that has been identified. When you use the Reassign event, the expected retention
time of the named peak is adjusted in the Peak Table to the retention time of the designated
peak.
Figure 30. Reassign Peak dialog box
Click Add to Table, and the timed event is inserted in the Manual Integration Fixes Table.
Click Cancel to ignore the timed event and cancel the operation. Click Analyze Now to add
the event to the table and analyze the chromatogram using the event. In the following
example, Peak 2 has been reassigned to a new peak.
Figure 31. Reassign Peak events
Before
Peak 2 reassignment
After
Peak 2 reassignment
Manual Baseline
This event allows you to change the way the baseline for a peak is drawn without changing the
integration parameters. This is convenient when you want to change where a baseline is
drawn for a peak without changing how the baseline is drawn for other peaks in the
chromatogram.
The Manual Baseline event was used to “draw” a new baseline for the second peak. To draw
the new baseline, choose the Manual Baseline command, and then click your mouse at the
start of the desired baseline, and again at the end of the desired baseline.
Figure 32. Manual Baseline dialog box
Click Add to Table, and the timed event is inserted in the Manual Integration Fixes Table.
Click Cancel to ignore the timed event and cancel the operation. Click Analyze Now to add
the event to the table and analyze the chromatogram using the event. Manual Baseline events
are stored in the Manual Integration Fixes table by default.
Figure 33. Manual Baseline events
Default integration
Integration with
Manual Baseline between
0.765 and 1.43 minutes
Manual Peak
This command allows you to graphically define a peak that was not previously detected. This
is convenient when you want to force integration of a peak, but do not want to change your
overall integration parameters.
Figure 34. Manual Peak dialog box
Click Add to Table, and the timed event is inserted in the Manual Integration Fixes Table.
Click Cancel to ignore the timed event and cancel the operation. Click Analyze Now to add
the event to the table and analyze the chromatogram using the event.
In the following example, the Manual Peak event was used to force the integration of the
smaller peak on the right. To use this event, click the Manual Peak button on the Graphical
Integration toolbar. Click once at the start of the peak to be defined, and then click again at
the end of the peak. Manual Peak events are stored in the Manual Integration Fixes table by
default.
Figure 35. Manual Peak events
Default integration
Split Peak
This event is used to force a perpendicular drop-line integration in a peak. The perpendicular
is dropped at the point where the event is inserted.
Figure 36. Split Peak dialog box
Click Add to Table, and the timed event is inserted in the Manual Integration Fixes Table.
Click Cancel to ignore the timed event and cancel the operation. Click Analyze Now to add
the event to the table and analyze the chromatogram using the event.
Figure 37. Split Peak events
Integration before
Split Peak event
Integration after
Split Peak added at
5.3 minutes
Click Add to Table, and the timed event is inserted in the Manual Integration Fixes Table.
Click Cancel to ignore the timed event and cancel the operation. Click Analyze Now to add
the event to the table and analyze the chromatogram using the event.
Figure 39. Force Peak Start events
Default integration
Integration after
Force Peak Start to
33.1 minutes
Move Baseline
This event allows you to move the start and stop of a baseline by clicking and dragging them
to a new location.
1. When you choose Move Baseline, you are prompted to click the baseline segment you
want to modify. The start and end points of the baseline segment appear highlighted
with boxes. See Figure 42.
2. When you move the cursor to a location within range of the start or stop point, it turns
into an “anchor”. Left-click and drag the baseline start-point to the new location, and
then let go. Left-click and drag the baseline stop-point to a new location, and then let go.
3. You can continue to click and drag the baseline in this manner until it is in the correct
location. Then press the “Esc” key. A dialog box appears. See Figure 40.
Figure 40. Move BL Start dialog box
Click Add to Table, and the timed event is inserted in the Manual Integration Fixes Table.
Click Cancel to ignore the timed event and cancel the operation. Click Analyze Now to add
the event to the table and analyze the chromatogram using the event.
Figure 41 shows the initial start and stop times for a baseline segment (5.46 and
5.96 minutes). Figure 42 shows the selected baseline segment. Figure 43 shows the new
baseline segment produced by dragging the start time to the left and the stop time to the right.
By default, Move Baseline events are added to the Manual Integration Fixes Table. See
Figure 44.
Figure 41. View of the initial start and stop times for a baseline segment
Figure 44. Manual Integration Fixes table with the new baseline start and stop times
Reset Baseline
This event causes the baseline to be reset to the point you click. Baseline segments before this
time point are not affected. This event is functionally equivalent to performing an Integration
Off and Integration On event at the same time.
Figure 45. Reset Baseline dialog box
Click Add to Table, and the timed event is inserted in either the Integration Events Table, or
the Manual Integration Fixes Table, depending on which of these you select. Click Cancel to
ignore the timed event and cancel the operation. Click Analyze Now to add the event to the
table and analyze the chromatogram using the event.
Note The event should be placed after the start of the first peak in the cluster;
otherwise, the start of the peak is identified as the valley.
Click the Add to Table button, and the timed event is inserted in either the Integration
Events Table, or the Manual Integration Fixes Table, depending on which of these you select.
Click Cancel to ignore the timed event and cancel the operation. Click Analyze Now to add
the event to the table and analyze the chromatogram using the event.
Baseline after
Reset Baseline at Valley
event
To add the Retention Time Window annotation to your chromatogram, right-click the
chromatogram to open the shortcut menu. Then, choose Annotations to open the Trace
Annotation Properties dialog box. In the Trace Annotation Properties dialog box, select the
RT Window check box.
1. Click the Adjust Retention Time Window button in the integration toolbar.
Alternatively, right-click the chromatogram to open the shortcut menu. Then, choose
Graphical Programming > Adjust Retention Time Window.
The following message appears in the Status box at the bottom of the screen: Click an
RT Window, then move/size it. Hit <ESC> to finish.
Figure 48. Adjust Retention Time Window
2. Click the RT Time Window that you want to adjust. Move it or resize it. Then, press
ESC.
The Adjust Retention Time Window dialog box appears.
Figure 49. Adjust Retention Time Window dialog box
3. Verify the modified Retention Time Window. Then, do one of the following:
• Click Update RT to update the Retention Time and Window for the selected peak in
the peak table.
• Click Analyze Now to add the event to the peak table and analyze the chromatogram
using the updated Retention Time and Retention Time Window.
• Click Cancel to ignore the operation.
Optional Integration Events
1. Right-click a chromatogram to open the Trace Annotation Properties dialog box. Then,
ensure that the Group Range check box is selected.
2. Click the Adjust Group button in the integration toolbar.
3. Click the group range annotation that you want to adjust. A grab bar appears.
4. Drag the bar to resize the Group Range definition.
5. Press ESC to finish. The Adjust Group Range dialog box appears. See Figure 51.
Figure 51. Adjust Group Range dialog box
B Baseline
f Force Peak Start or Stop (user defined)
I Peak ended by Integration Off event
N Begin negative peak
P End negative peak
H Forward horizontal
h Backward horizontal
M Manual baseline or Manual peak
m Move baseline Start/Stop
S Shoulder
T Tangent skim
V Valley
v Forced valley point
x Split peak
E End of chromatogram encountered before the end of peak was found. End of
chromatogram used as peak end.
R Reset
LL Lowest Point Horizontal Baseline
RN Reset Baseline at Next Valley
RB Reset Baseline
Contents
• Calibration Curves
• Determining Concentrations for Uncalibrated Peaks
• Internal Standard Amounts
• Calibration Curve Calculations
• Report Calculations
• Performance Calculations
Calibration Curves
A calibration curve relates the component amount to detector response (or, for an Internal
Standard calibration, the amount ratio to the area or height ratio). ChromQuest fits a curve to
the calibration points, according to the fit type, scaling, and weighting factors you select. The
resulting calibration curve is used to calculate component concentrations in unknown
samples, and is generally defined by a least squares calculation.
y = ƒ(x)

where ƒ is one of the following fit types:
• point to point
• linear (with or without force through zero)
• quadratic (with or without force through zero)
• cubic (with or without force through zero)
• average RF fit
If you select Amount/Area for your response factor definition, the calibration curve (which
can be viewed in Review Calibration) is defined where y = area or height and x = amount.
(For internal standard calibrations, y = area or height ratio and x = amount ratio.) Figure 54 is
an example calibration curve using Amount/Area response factor definition.
Figure 54. Calibration Curve with Amount/Area Response Factor Definition
The external standard concentration (for Response Factor definition Area/Amount) is calculated as:

Conc = (Amt unk(u) / Sample Amt u) × MF

where

Conc = the concentration (in the same units used for calibration) of the unknown analyte
of interest
Amt unk(u) = the uncorrected amount of the unknown component
Sample Amt u = the amount of the unknown sample taken from the Sequence Table or
Single Run dialog box
MF = the multiplication and dilution factors applied = M1*M2*M3/(D1*D2*D3)
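The external standard calculation above can be illustrated with a short sketch. The function name, keyword defaults, and example values are assumptions, not ChromQuest's interface.

```python
# Worked sketch of the external standard calculation (illustrative only):
# Conc = (Amt_unk / Sample Amt) x MF, with MF = M1*M2*M3 / (D1*D2*D3).

def external_standard_conc(amt_unk, sample_amt,
                           m1=1.0, m2=1.0, m3=1.0,
                           d1=1.0, d2=1.0, d3=1.0):
    """Concentration of an unknown from its uncorrected amount."""
    mf = (m1 * m2 * m3) / (d1 * d2 * d3)
    return (amt_unk / sample_amt) * mf

# 50 units of uncorrected amount in a 10-unit sample, dilution factor D1 = 2:
print(external_standard_conc(50.0, 10.0, d1=2.0))  # 2.5
```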
Internal Standards
Figure 56 is an example of an Internal Standard calibration curve: the ratio of component
amount to Internal Standard amount is plotted on the X-Axis, and the ratio of component
area to Internal Standard area is plotted on the Y-Axis (Amount/Area Response Factor definition):
Figure 56. Internal Standard Calibration Curve
Conc u = (Amt IS / Sample Amt u) × MF × Amt Ratio unk

where

Conc u = Concentration (in the same units used for calibration) of the analyte of interest
Amt IS = Amount of the internal standard
Sample Amt u = Amount of the unknown sample from the sequence or at the start of a single run
MF = multiplication and dilution factors applied = M1*M2*M3/(D1*D2*D3), entered for
the unknown sample
Amt Ratio unk = Amount ratio value taken from the calibration curve at the given area
ratio for the unknown sample
For an unknown run, the Internal Standard Amount is entered in the Single Run Acquisition
dialog box, or in the Sequence Table. It is used as a multiplier in calculation of the unknown
concentration.
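The internal standard calculation above can be sketched the same way. The names and example amounts are illustrative assumptions.

```python
# Hedged sketch of the internal standard calculation (illustrative only):
# Conc_u = (Amt_IS / Sample Amt_u) x MF x Amt Ratio_unk.

def internal_standard_conc(amt_is, sample_amt, amt_ratio_unk, mf=1.0):
    """Concentration of an unknown from its amount ratio to the internal standard."""
    return (amt_is / sample_amt) * mf * amt_ratio_unk

# 5 units of internal standard in a 10-unit sample, amount ratio 0.8:
print(internal_standard_conc(5.0, 10.0, 0.8))  # 0.4
```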
To force the concentration of the internal standard component to be reported as zero (so that
it does not contribute to analyte concentration totals), enter a Manual RF value of zero for the
internal standard components in the peak table.
The “Uncorrected Amount” is the amount (or amount ratio) of a component represented by
a given response (or response ratio). The term “Uncorrected Amount” is used because factors
such as sample amount and multiplication factors have not been applied.
The “Response Factor” for a component is calculated from the calibration curve. It can be
reported as either Amount/Area or Area/Amount. This is selected on the
Method > Properties tab.
Note When a calibration contains replicates, the average of the replicates is calculated
prior to the fit calculation.
Point-to-Point Fit
A point-to-point calibration fit connects a series of calibration points with lines. The result for
point-to-point calculations is the same regardless of Response Factor definition. The equation
for calculating the uncorrected amount is:
Y = aX + b
External Standard:
Y = Uncorrected Amount (With scaling factor applied, i.e. 1/x if applicable)
a = Slope of the calibration line segment
X = Area or height value from Y-Axis
b = Y-Axis intercept of the calibration line segment
Internal Standard:
Y = Uncorrected Amount Ratio
a = Slope of the calibration line segment
Note For points beyond the last calibration point, the line segment between the last two
calibration points is extrapolated. If the value falls below the lowest calibration point, then
the line segment is constructed between zero and the first calibration point.
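The point-to-point lookup, including the two extrapolation rules in the Note, can be sketched as follows. This is an illustration under those stated rules, not ChromQuest's implementation; the data layout is an assumption.

```python
# Sketch of a point-to-point (piecewise linear) calibration lookup.
# Rules from the Note above: beyond the last point, extrapolate the last
# segment; below the lowest point, use a segment from zero to the first point.

def point_to_point(points, x):
    """points: (response, amount) pairs; x: measured response. Returns amount."""
    pts = sorted(points)
    if x < pts[0][0]:                      # below lowest point: zero -> first point
        x0, y0, x1, y1 = 0.0, 0.0, pts[0][0], pts[0][1]
    elif x >= pts[-1][0]:                  # beyond last point: extrapolate last segment
        (x0, y0), (x1, y1) = pts[-2], pts[-1]
    else:                                   # otherwise find the enclosing segment
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                break
    slope = (y1 - y0) / (x1 - x0)
    return y0 + slope * (x - x0)

cal = [(10.0, 1.0), (20.0, 2.0), (40.0, 3.0)]
print(point_to_point(cal, 30.0))  # 2.5
print(point_to_point(cal, 5.0))   # 0.5 (segment from zero to the first point)
```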
Linear Fit
A linear calibration fit determines the best line (linear regression) for a series of calibration
points. A minimum of two calibration points is required to determine a linear fit. The
equation for calculating the uncorrected amount is:
Y = aX + b
External Standard:
Y = Component area or height
a = Slope of the calibration line
X = Uncorrected Amount (With scaling factor applied, i.e. 1/x if applicable)
b = Y-Axis intercept of the calibration line
Internal Standard:
External Standard:
Y = Uncorrected Amount (With scaling factor applied, i.e. 1/x if applicable)
a = Slope of the calibration line
X = Component area or height
b = Y-Axis intercept of the calibration line
Internal Standard:
Y = Uncorrected Amount Ratio
a = Slope of the calibration line
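An unweighted linear least squares fit of the form above can be sketched with the standard library alone. This is a generic regression illustration, not ChromQuest's fit routine.

```python
# Minimal unweighted linear least squares fit (Y = aX + b), as a sketch of
# the Linear Fit described above. Standard library only.

def linear_fit(xs, ys):
    """Return slope a and intercept b of the least squares line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)                    # Σ(x - x̄)²
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Three calibration points (a minimum of two is required for a linear fit):
a, b = linear_fit([1.0, 2.0, 3.0], [2.1, 3.9, 6.0])
```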
Quadratic Fit
A quadratic calibration fit determines the best quadratic curve fit for a series of calibration
points. A minimum of three calibration points is required to determine a quadratic fit. The
equation for calculating the uncorrected amount is:
Y = aX² + bX + c
External Standard:
Y = Uncorrected Amount (With scaling factor applied, i.e. 1/x if applicable)
a = Calibration Curve Coefficient
b = Calibration Curve Coefficient
c = Y-Axis intercept
X = Component area or height
Internal Standard:
Y = Uncorrected Amount Ratio
a = Calibration Curve Coefficient
b = Calibration Curve Coefficient
c = Y-Axis intercept
External Standard:
Y = Component area or height
a = Calibration Curve Coefficient
b = Calibration Curve Coefficient
c = Y-Axis intercept
X = Uncorrected Amount (With scaling factor applied, i.e. 1/x if applicable)
Internal Standard:
Cubic Fit
A cubic calibration fit uses a least squares calculation to determine the best curve fit for a series
of calibration points. A minimum of four calibration points is required to determine a cubic
fit. The equation for calculating the uncorrected amount is:
Y = aX³ + bX² + cX + d
Note For cubic fits, only Response Factor definition of Area/Amount applies.
External Standard:
Y = Uncorrected Amount (With scaling factor applied, i.e. 1/x if applicable)
a = Calibration Curve Coefficient
b = Calibration Curve Coefficient
c = Calibration Curve Coefficient
d = Y-Axis intercept
X = Component area or height
Internal Standard:
Y = Uncorrected Amount Ratio
a = Calibration Curve Coefficient
b = Calibration Curve Coefficient
c = Calibration Curve Coefficient
d = Y-Axis interceptt
The calibration curve coefficients for weighted linear fits are determined by:

  | a |   | ΣWX   ΣW  |⁻¹   | ΣWY  |
  |   | = |           |   × |      |
  | b |   | ΣWX²  ΣWX |     | ΣWXY |

where
a = the slope of the calibration line
b = the Y-Axis intercept of the calibration line
W = the weighting term, equal to 1/X or 1/X² (or 1 if no weighting is used),
where X = Response or Amount. This is selected as the “Weighting Method” in the peak
table.
For Internal Standard calculations, X is the uncorrected amount ratio of the component of
interest in the calibration sample Cu.
Y is the corrected relative area = peak area / int std area
For External Standard calculations, X is the uncorrected amount of the component of interest
in the calibration sample Cu.
The modified least squares calculation can be extended to higher order fits. As an example, the
following formula is used to determine the calibration curve coefficients for weighted
quadratic fits:

  | a |   | ΣWX²  ΣWX   ΣW   |⁻¹   | ΣWY   |
  | b | = | ΣWX³  ΣWX²  ΣWX  |  ×  | ΣWXY  |
  | c |   | ΣWX⁴  ΣWX³  ΣWX² |     | ΣWX²Y |
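A sketch of this weighted solve (assuming NumPy is available; the function and variable names are illustrative, not part of ChromQuest):

```python
import numpy as np

# Weighted quadratic calibration fit using the normal-equation matrix above.
# W holds the weighting term for each point: 1, 1/X, or 1/X**2.
def weighted_quadratic_fit(X, Y, W):
    X, Y, W = (np.asarray(v, dtype=float) for v in (X, Y, W))
    M = np.array([
        [np.sum(W * X**2), np.sum(W * X),    np.sum(W)],
        [np.sum(W * X**3), np.sum(W * X**2), np.sum(W * X)],
        [np.sum(W * X**4), np.sum(W * X**3), np.sum(W * X**2)],
    ])
    v = np.array([np.sum(W * Y), np.sum(W * X * Y), np.sum(W * X**2 * Y)])
    return np.linalg.solve(M, v)  # coefficients a, b, c

# Points lie exactly on Y = 2X² + 3X + 1, so the fit recovers (2, 3, 1).
a, b, c = weighted_quadratic_fit([0, 1, 2, 3], [1, 6, 15, 28], [1, 1, 1, 1])
```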
The following formula is used to determine the R-squared value for a series of values:

  R² = 1 – [Σ (Yi – Ŷi)²] / [Σ (Yi – Ȳ)²]   (sums over i = 1 to n)

where
Ŷi is the ordinate of the least squares line at point i, and Ȳ is the mean of the Y values:
  Ȳ = (Σ Y) / n
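In code, the R-squared computation reads (a sketch; names are illustrative):

```python
# R-squared for observed values y and fitted values y_hat.
def r_squared(y, y_hat):
    y_bar = sum(y) / len(y)                                  # mean of Y
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))  # residual sum
    ss_tot = sum((yi - y_bar) ** 2 for yi in y)               # total sum
    return 1.0 - ss_res / ss_tot

r2 = r_squared([1.0, 2.0, 3.0], [1.1, 2.0, 2.9])
```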
Matrix Operations
The following example illustrates the matrix operations used to determine curve coefficients
for quadratic calibration curve fits.
y = ax² + bx + c
For a series of amount/area pairs (x, y) representing calibration points (or averaged calibration
points):
(x1, y1), (x2, y2), (x3, y3), …, (xn, yn)
These points produce “n” quadratic equations, which can be solved for the coefficients a, b,
and c by writing the equations in matrix notation as follows.

  | y1 |   | x1²  x1  1 |   | a |
  | y2 | = | x2²  x2  1 | • | b |
  | :  |   | :    :   : |   | c |
  | yn |   | xn²  xn  1 |

or,
  Y = M • Z
where M is the matrix of x terms, Z is the column of coefficients [a, b, c], and Mᵀ is matrix M
transposed:

  Mᵀ = | x1²  x2²  …  xn² |
       | x1   x2   …  xn  |
       | 1    1    …  1   |

then
  Z = (MᵀM)⁻¹MᵀY
If the curve is forced through zero, then c=0, and M becomes a 2-column matrix which is
solved for coefficients a and b.
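The same steps in code (assuming NumPy; a sketch of the matrix algebra, not ChromQuest's implementation):

```python
import numpy as np

# Solve Z = (M^T M)^-1 M^T Y for a quadratic calibration curve.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x**2 + 3.0 * x + 1.0                # synthetic, exactly quadratic
M = np.column_stack([x**2, x, np.ones_like(x)])
Z = np.linalg.inv(M.T @ M) @ M.T @ y          # Z = [a, b, c]

# Forcing the curve through zero (c = 0) drops the column of ones:
M0 = np.column_stack([x**2, x])
Z0 = np.linalg.inv(M0.T @ M0) @ M0.T @ y      # Z0 = [a, b]
```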
Average RF
If the Average RF fit type is selected, the slope of the calibration line between each calibration
point and zero is calculated independently. These values (the Response Factors, or RFs) are
then averaged to give an Average RF value. The Average RF is then used to calculate the
uncorrected amount, Cu, of the unknown component as follows:
Cu = Area / RF if Response Factor is set to Area/Amount
Cu = Area × RF if Response Factor is set to Amount/Area
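A minimal sketch of the Average RF calculation, assuming Response Factor is defined as Area/Amount (names are illustrative):

```python
# Average response factor: the slope from zero through each calibration
# point, averaged over all points.
def average_rf(areas, amounts):
    rfs = [area / amount for area, amount in zip(areas, amounts)]
    return sum(rfs) / len(rfs)

rf = average_rf([100.0, 210.0, 290.0], [1.0, 2.0, 3.0])
cu = 250.0 / rf   # Cu = Area / RF for an unknown with area 250
```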
Calibration Averages
The Replace/Wt Average Calib Flags in the Peak Table allow you to select whether or not
calibration the calibration area is averaged with previous replicates (Last Area). In general,
when Replace is selected, the current calibration area replaces any existing calibration area or
averaged area in the method. When WtAverage is selected, replicates are averaged, and then
weighted with the Last Area value (if applicable).
• For External Standard calibrations:
When Wt Average is selected, the current peak area/height replicates are averaged. When
Replace is selected, each calibration run replaces the previous value in the method.
• For Internal Standard calibrations:
When Wt Average is selected, the individual replicate ratios are calculated first, then the
average of the ratios is taken. When Replace is selected, each calibration run replaces the
previous value in the method.
For example, for a calibration component area, U, and its associated internal standard
component area, I, the average ratio, Y, for three replicates is calculated as follows:
  Y = (U1/I1 + U2/I2 + U3/I3) / 3
Note In the Review Peak Calibration window, if you eliminate a replicate from the
calibration curve of an internal standard peak by highlighting it with the mouse, the
associated replicates for peaks using that internal standard are ignored when calculating
the average ratio.
If averaging, replicates for each peak level are saved in the method until they are cleared.
Replicate 1 is the most recent replicate.
Automatic Averaging
When you have Automatic Averaging turned On for your method (in Method Properties),
averaging takes place for all peaks designated with the WtAverage flag in the peak table.
Replicates continue to be saved in the method until a new level is calibrated for the method.
When a new level is encountered, the replicates for the previous level are cleared, and the
average at that point is saved in the method as “Last Area”.
If you want replicate areas to be continuously saved in the method, whether or not a new level
is encountered, turn Automatic Averaging Off. You must then designate in your sequence
where you want averaging to take place by specifying “Average Replicates” in the Run Type of
the sample.
Calib Weight
You can designate a “Calib Weight” in the Peak Table to control how the average of the
current replicates is weighted against the method “Last Area” value. Note that a Calib Weight
of 100 causes the Last Area value to be ignored.
Aw = ( Xc ∗ W ) + [ Xo ∗ ( 1 – W ) ]
where
Aw is the weighted average result
Xc is the true average of replicates (if any) with current run area/height
W is the Calib Weight / 100
Xo is the “Last area” from the method
Enter a weight factor of 50 to give equal weight to the “Last Area” average and the new
calibration replicates.
Note For Internal Standard calibrations, each replicate represents a ratio of the
component area/height to internal standard area/height.
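The weighted-average formula above can be sketched as (names are illustrative):

```python
# Blend the current replicate average with the method's stored "Last Area"
# using the Calib Weight. A weight of 100 ignores Last Area entirely.
def calib_weighted_average(current_avg, last_area, calib_weight):
    w = calib_weight / 100.0
    return current_avg * w + last_area * (1.0 - w)

aw = calib_weighted_average(1050.0, 1000.0, 50)   # equal weighting
```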
Scaling
This parameter allows you to apply a scaling factor to the calibration curve. This factor is
applied to the entered amounts prior to computing the calibration curve. The purpose of
using a scaling factor is to create a relationship between areas (or heights) and amounts that
can be approximated by a polynomial fit. A scaling factor can be applied to any fit type. The
available scaling operations are:
• None
• 1/X
• 1/X²
• ln[X]
• 1/ln[X]
• sqrt[X]
• X²
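The scaling operations map onto simple functions applied to each entered amount before the curve is fitted; a sketch (the dictionary keys mirror the list above, but the mapping itself is illustrative):

```python
import math

# Scaling factor applied to entered amounts before computing the curve.
SCALING = {
    "None":    lambda x: x,
    "1/X":     lambda x: 1.0 / x,
    "1/X2":    lambda x: 1.0 / (x * x),
    "ln[X]":   math.log,
    "1/ln[X]": lambda x: 1.0 / math.log(x),
    "sqrt[X]": math.sqrt,
    "X2":      lambda x: x * x,
}

scaled = [SCALING["1/X"](x) for x in (0.5, 2.0, 4.0)]
```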
Report Calculations
The following calculations are used to determine the concentrations reported.
Area % Report
Area % = (Component area × 100) / (Total area of all detected peaks)
Normalization Report
Conc = (Cu × 100) / (Sum of Cu for named peaks + SumCR)
where
Conc = Corrected Amount of component
Cu = Uncorrected Amount of component
SumCR = Sum of calibrated range groups
External Standard Report
The external standard calculation uses the following quantities:
Conc = Corrected Amount of component
Cu = Amount value from calibration curve for a given unknown area
Samp. Amt. = Sample amount
MF = multiplication and dilution factors applied = (M1 × M2 × M3) / (D1 × D2 × D3)
Internal Standard Report
The internal standard calculation uses the following quantities:
Conc = Corrected Amount of component
ISTDu = Amount of Internal Standard
Sample Amt. = Sample amount
MF = multiplication and dilution factors applied = (M1 × M2 × M3) / (D1 × D2 × D3)
Cu = Amount ratio taken from the calibration curve for the given area/height ratio
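As a hedged sketch of how these quantities typically combine: the forms below assume the conventional ESTD/ISTD relationships (Conc = Cu × MF / Sample Amount, and Conc = Cu × ISTD amount × MF / Sample Amount); they are an assumption for illustration, not equations reproduced from this guide:

```python
# Hedged sketch of report concentration calculations. The exact ChromQuest
# equations are displayed in the software; the forms below are the
# conventional ESTD/ISTD relationships and are an assumption.
def multiplier_factor(m1, m2, m3, d1, d2, d3):
    # MF = (M1 * M2 * M3) / (D1 * D2 * D3)
    return (m1 * m2 * m3) / (d1 * d2 * d3)

def estd_conc(cu, sample_amt, mf):
    return cu * mf / sample_amt                     # assumed ESTD form

def istd_conc(cu_ratio, istd_amt, sample_amt, mf):
    return cu_ratio * istd_amt * mf / sample_amt    # assumed ISTD form

conc = estd_conc(cu=5.0, sample_amt=10.0,
                 mf=multiplier_factor(2, 1, 1, 1, 1, 1))
```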
Performance Calculations
ChromQuest calculates the following values that can be used to assess overall system
performance:
• Relative Retention
• Theoretical Plates
• Capacity Factor
• Resolution
• Peak Asymmetry
• Plates per Meter
Figure 60 shows the parameters used to calculate these system performance values for the
separation of two chromatographic components.
Figure 60. Separation of Two Chromatographic Components
(The figure marks the injection point, the unretained peak at time ta, the retention times t1
and t2, and the base widths W1 and W2 along the time axis.)
Note To accurately calculate suitability values, the sampling frequency (set in Acquisition
Setup) must be set to provide at least 20 data points for the narrowest peak of interest.
Relative Retention
  α = (t2 – ta) / (t1 – ta)
where
α = Relative retention.
t2= The retention time measured from point of injection
ta = The retention time of an inert component not retained by the column, taken from
“Unretained Peak Time” in the Performance Options section of the method.
t1 = The retention time from point of injection for reference peak defined in the peak
table. If no reference peak is found, this value becomes zero.
Capacity Factor
  k′ = (t2 / ta) – 1
where
k' = Capacity Factor
t2 = The retention time measured from point of injection
ta = The retention time of an inert component not retained by the column, taken from
“Unretained Peak Time” in the Performance Options section of the method.
Plates/Meter
  N = n / L
where
N = Plates per meter
n = Theoretical plates in the column
L = Column length, in meters. This value is taken from the Performance Options
section of the method.
  rms noise = sqrt[ Σ (Ei – Ē)² / (n – 1) ]   (sum over i = 1 to n)
where
Ei = individual voltage readings
Ē = the mean of the voltage readings
n = the number of readings
Drift Test
The drift test measures the change in voltage over a given period.
  drift = (y2 – y1) / (x2 – x1)
where
y2 = voltage (μV) at time x2 (drift test stop time in minutes)
y1 = voltage (μV) at time x1 (drift test start time in minutes)
  n = 16 × (t / W)²
where
n = theoretical plates
t = The retention time of the component
W = The width of the base of the component peak using tangent method.
  T = W0.05 / (2f)
where
T = Peak asymmetry, or tailing factor
W0.05 = The distance from the leading edge to the tailing edge of the peak, measured at a
point 5% of the peak height from the baseline
f = The distance from the peak maximum to the leading edge of the peak at the position
of 5% peak height
Note For peak asymmetry at 10%, the values for W and f are measured at 10% of
peak height.
(The figure marks the peak front and tail, the peak height h, the peak maximum, the width
W0.05 measured at 5% of the peak height (0.05h), and the distance f from the leading edge
to the peak maximum at that height.)
Resolution
  R = 2(t2 – t1) / (W2 + W1)
where
R = Resolution between a peak of interest (peak 2) and the peak preceding it (peak 1)
t2 = The retention time measured from point of injection of peak 2
t1 = The retention time measured from point of injection of peak 1
W2 = The width of the base of the component peak 2
W1 = The width of the base of the component peak 1
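Taken together, the performance formulas above translate directly into code (a sketch; function names are illustrative):

```python
# System performance calculations as defined in this section.
def relative_retention(t2, t1, ta):
    return (t2 - ta) / (t1 - ta)        # alpha

def capacity_factor(t2, ta):
    return t2 / ta - 1.0                # k'

def plates_tangent(t, w):
    return 16.0 * (t / w) ** 2          # theoretical plates, tangent method

def plates_per_meter(n, length_m):
    return n / length_m

def asymmetry(w005, f):
    return w005 / (2.0 * f)             # tailing factor at 5% height

def resolution(t2, t1, w2, w1):
    return 2.0 * (t2 - t1) / (w2 + w1)  # base-width resolution
```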
  N = 41.7 × (t / W0.1)² / (b0.1 / a0.1 + 1.25)
where
N = The number of theoretical plates
t = The retention time of the component
W0.1 = The width of the peak at the position of 10% peak height
a0.1 = The width of the first half (start to top) of peak at the position of 10% peak height
b0.1 = The width of the second half (top to end) of peak at the position of 10% of peak
height
  T = W0.05 / (2f)
where
T = Peak asymmetry, or tailing factor
W0.05 = The distance from the leading edge to the tailing edge of the peak, measured at a
point 5% of the peak height from the baseline
f = The distance from the peak maximum to the leading edge of the peak at the position
of 5% peak height
Note For peak asymmetry at 10%, the values for W and f are measured at 10% of
peak height.
Resolution
  R = 2.15 × (t2 – t1) / (W0.1 + Wp0.1)
where
R = Resolution between a peak of interest (peak 2) and the peak preceding it (peak 1)
t2 = The retention time measured from point of injection of peak 2
t1 = The retention time measured from point of injection of peak 1
W0.1 = The width of peak at the position of 10% peak height
Wp0.1 = The width of previous peak at the position of 10% peak height
Theoretical Plates
  N = 5.54 × (t / W0.5)²
where
N = Theoretical plates
t = The retention time of the component
W0.5 = Width of peak at the position of 50% peak height
  T = W0.05 / (2f)
where
T = Peak asymmetry, or tailing factor
W0.05 = The distance from the leading edge to the tailing edge of the peak, measured at a
point 5% of the peak height from the baseline
f = The distance from the peak maximum to the leading edge of the peak at the position
of 5% peak height
Note For peak asymmetry at 10%, the values for W and f are measured at 10% of
peak height.
Resolution
  R = 1.18 × (t2 – t1) / (W0.5 + Wp0.5)
where
R = Resolution between a peak of interest (peak 2) and the peak preceding it (peak 1)
t2 = The retention time measured from point of injection of peak 2
t1 = The retention time measured from point of injection of peak 1
W0.5 = The width of the component peak at 50 % peak height
Wp0.5 = The width of the previous component peak at 50 % peak height
where
N = Theoretical plates
t = The retention time of the component
W0.5 = Width of peak at the position of 50% peak height
  T = W0.05 / (2f)
where
T = Peak asymmetry, or tailing factor
W0.05 = The distance from the leading edge to the tailing edge of the peak, measured at a
point 5% of the peak height from the baseline
f = The distance from the peak maximum to the leading edge of the peak at the position
of 5% peak height
Note For peak asymmetry at 10%, the values for W and f are measured at 10% of
peak height.
Resolution
  R = 1.18 × (t2 – t1) / (W0.5 + Wp0.5)
where
R = Resolution between a peak of interest (peak 2) and the peak preceding it (peak 1)
t2 = The retention time measured from point of injection of peak 2
t1 = The retention time measured from point of injection of peak 1
W0.5 = The width of the component peak at 50 % peak height
Wp0.5 = The width of the previous component peak at 50 % peak height
where
N = theoretical plates
t = The retention time of the component
W = The width of the base of the component peak.
  W = 4 × σ
  σ = (1 / √(2π)) × (A / H) = 0.399 × (A / H)
where
A= Peak area
H= Peak height
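As a sketch of the width estimate from area and height (names are illustrative):

```python
import math

# Gaussian width estimate: sigma = A / (H * sqrt(2*pi)) ~= 0.399 * A / H,
# so the base width is W = 4 * sigma.
def base_width(area, height):
    sigma = area / (height * math.sqrt(2.0 * math.pi))
    return 4.0 * sigma

# A Gaussian peak with height 1 and sigma 0.5 has area sqrt(2*pi) * 0.5.
w = base_width(area=math.sqrt(2.0 * math.pi) * 0.5, height=1.0)
```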
  T = W0.05 / (2f)
where
T = Peak asymmetry, or tailing factor
W0.05 = The distance from the leading edge to the tailing edge of the peak, measured at a
point 5% of the peak height from the baseline
f = The distance from the peak maximum to the leading edge of the peak at the position
of 5% peak height
Note For peak asymmetry at 10%, the values for W and f are measured at 10% of
peak height.
Resolution
  R = 2(t2 – t1) / (W2 + W1)
where
R = Resolution between a peak of interest (peak 2) and the peak preceding it (peak 1)
t2 = The retention time measured from point of injection of peak 2
t1 = The retention time measured from point of injection of peak 1
W2 = The width of the base of the component peak 2
W1 = The width of the base of the component peak 1
  S/N = Peak Height / Noise
where
Peak Height = the standard peak height determined during integration.
Noise = the standard deviation of the signal derived from n measurements.
  Noise = 6 × sqrt[ Σ (Ei – f(Ei))² / (n – 1) ]   (sum over i = 1 to n)
where
Ei = data point
f ( Ei ) = the point on the linear regression line of all the data points.
The n measurements are made between the time limits entered.
  SNM = H / ND
  LOD = C × SN / SNM
where
SN = S/N ratio for LOD entered in Peak Table
SNM = Calculated S/N ratio
H = Peak height
ND = Measured noise level
C = Concentration result of peak being evaluated
  SNM = H / ND
  LOQ = C × SN / SNM
where
SN = S/N ratio for LOQ entered in Peak Table
SNM = Calculated S/N ratio
H = Peak height at concentration C
ND = Measured noise level
C = Concentration result of peak being evaluated
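Both limits follow the same extrapolation: measure the S/N at a known concentration, then scale to the concentration whose S/N equals the target ratio. A sketch (the target ratios 3 and 10 below are illustrative entries, not defaults stated in this guide):

```python
# Extrapolate LOD/LOQ from the S/N measured at a known concentration.
def limit_from_sn(target_sn, peak_height, noise, conc):
    sn_measured = peak_height / noise      # SNM = H / ND
    return conc * target_sn / sn_measured

lod = limit_from_sn(target_sn=3.0, peak_height=300.0, noise=1.0, conc=10.0)
loq = limit_from_sn(target_sn=10.0, peak_height=300.0, noise=1.0, conc=10.0)
```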
Savitzky-Golay Smoothing
A 9-point digital filter is applied as a sliding filter to the data points, as shown in the following
example for windows starting at data points a1 through a3:

  (a1 f1 + a2 f2 + … + a9 f9) / norm ,
  (a2 f1 + a3 f2 + … + a10 f9) / norm ,
  (a3 f1 + a4 f2 + … + a11 f9) / norm

where
a1…ax = the data points
f1…f9 = the filtering factors
norm = the normalization factor. The filtering factors and normalization factor are given
below.
f1: -21
f2: 14
f3: 39
f4: 54
f5: 59
f6: 54
f7: 39
f8: 14
f9: -21
The normalization factor is 231.0.
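A sketch of the sliding filter using the factors above (names are illustrative):

```python
# 9-point Savitzky-Golay smoothing with the filtering factors listed above.
COEFFS = [-21, 14, 39, 54, 59, 54, 39, 14, -21]
NORM = 231.0

def smooth(points):
    # Slide the window across the data; the first and last 4 points have no
    # full 9-point window and are omitted in this sketch.
    return [
        sum(a * f for a, f in zip(points[i:i + 9], COEFFS)) / NORM
        for i in range(len(points) - 8)
    ]

smoothed = smooth([float(i) for i in range(9)])
```

Because the coefficients sum to the normalization factor, a constant signal passes through unchanged, and the quadratic fit underlying the coefficients also preserves linear trends.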
Contents
• User Programs
• Custom Parameter Programs
User Programs
User Programs are programs that are run before an acquisition, or before or after an analysis.
These programs are meant to synchronize actions between ChromQuest and an instrument or
another data processing program you may be running. A User Program may be an executable
(.EXE) or a dynamic link library (.DLL) file.
If the User Program is a .DLL, it has access to all data and parameters of ChromQuest. This is
the recommended method for User Programs. A User Program .DLL should implement the
“RunUserProg” function.
Contents
• Data Export
• Automatic Export to Microsoft Excel
• ODBC Export to Microsoft Access
• Graphics Export
Data Export
The Method > Advanced Method Options > Export function allows you to save results
automatically in an ASCII file after each analysis is completed. You select the type of data to
export: Peaks, Groups, Standard Reports, or Chromatogram. For Peaks and Groups, you can
choose the parameters you want to export. When you export Standard Reports, only the
contents of the report are exported.
When you choose the Method > Advanced > Export tab, a dialog box appears. You then
select what type of data export you want.
Figure 62. Advanced Method Options dialog box
Export Enabled
Select the Export Enabled checkbox to turn data export on for the method. While this option
is enabled, data export occurs after each Analysis of the data. Since the export of data occurs
when the data is analyzed automatically at the end of a run and each time the data is manually
analyzed, you should turn this option off while you are developing methods.
Select the type of information to export from the drop-down list. For each type of export
chosen, you can select parameters for export. If you have defined any Custom Parameters,
they appear in the appropriate list of items you can choose to export.
Exporting Peaks
When you select Peaks to be exported, a list of available peak export parameters is displayed.
Select a parameter for export by double-clicking on it, or clicking on it to highlight it, then
click the Green (Top) arrow button to move it to the list of export items in the right-hand
box. To remove an item from the export box, highlight it, and then click the Red (bottom)
arrow button.
Exporting Groups
If you select Groups for export, a list of group parameters is presented in the left-hand box.
To select an item for export, double-click it, or click it to highlight it, then click the Green
(Top) arrow. To remove an item from the export list, highlight it, and then click the Red
(Bottom) arrow.
Exporting Chromatograms
If you select Chromatograms to be exported, you are given the option to export in either
AIA (*.CDF) file format or ASCII format.
Selecting the AIA option causes ChromQuest to create a *.CDF (Chromatograph Data File)
in the standard format specified by the Analytical Instrument Association (AIA). AIA Level 2
file export is supported. This includes the raw chromatogram, and integration results. This is
also called ANDI file format (Analytical Data Exchange). This allows ChromQuest results to
be read by other chromatography data systems.
Note AIA Level 2 support is for export only. Import of AIA files is supported only at
Level 1 (raw data).
Export Options
• Field Separator
Select the type of separator you want to include between data fields in your export file.
Choices include Tab, Space, and Comma. Your choice is determined by what you wish to
do with your exported file. If you want to import the file into a Microsoft Excel
spreadsheet, for example, choose Tab.
• Path for export files
Enter a path name for the directory where you want to save your export files. If you do
not know the name of the directory, you can select it from existing paths by clicking the
File button adjacent to the field.
ODBC Enabled
Select this box if you want to use ODBC format for your data export. Open Database
Connectivity (ODBC) is an industry-standard method of sharing data between databases and
other programs. ODBC drivers use the standard Structured Query Language (SQL) to gain
access to data.
Note Microsoft Data Access Components (MDAC) 2.5 or higher is required for ODBC
export. A version of this is on the ChromQuest CD-ROM under \Updates.
To create a new data source, click New. The Add Data Source dialog box appears.
Table name
Enter the name of the table you wish to use for your data export.
Export Files
Data exported using ChromQuest are saved in files using the following conventions.
For each parameter selected, a file is created containing that value for each named peak, along
with file and method name information. Each time the method is used to acquire or process
data, a row is appended to the file containing the new calculated value for that file. Each file is
saved with the method name, with an extension representing the type of value selected. For
example, Figure 65 is an example of a file created for export of area for five runs.
Figure 65. Export file
Note If you export data as part of a sequence, and you want to view the export file in
another application while data is being acquired, you must make a copy of the file and
save it with another name before you use it. Otherwise, a file sharing violation might
occur when ChromQuest tries to update the file with data from a new run.
When you select a Standard Report option, a file is created each time an analysis is run. The
file contains the information in the selected report.
Unlike peak and group export, Standard Report Export files are not appended with
information from additional runs. Each time the method is used for acquisition or processing,
a new file is created. The new file name is based on the data filename and it uses the export
extension given below for the type of export data selected.
Figure 66 shows an example of a file created for Standard Export of an External Standard
report.
Figure 66. Standard Export of an External Standard report
Example:
Method Name - multi level.met
Data File - calib std 3.dat
Channel - TCD
Note For Group export files, the file naming convention is as described above, except that
each file begins with a "G". In the example above, the group export file for NORM
becomes Gmulti level-tcd.normconc.
Example:
Method Name - multi level.met
Data File - calib std 3.dat
Channel - TCD
Chromatogram Export
AIA calib std 3 dat-tcd.CDF
ASCII calib std 3.dat.ASC
The following is an example of the format for an exported chromatogram file. The numbers
at the end of the file are the individual data points from the chromatogram, with channel 1
data first, followed in order by data from additional channels if present.
Figure 67. Format for an exported chromatogram file
The format for ASCII data files to be imported into ChromQuest must follow the example
shown above. If data from more than one channel are involved, they are appended to the
string. In this example, channel B starts at data point 2649.
The Field Separator must be set to Tab, and the Path for export files must contain a valid
path. Figure 68 shows an example for exporting “Area” and “Area Percent” parameters.
Figure 68. Data Export page of the Advanced Method Options dialog box
The Additional Parameters field must contain entries that specify the following items (in the
order specified), each in quotation marks, separated by a comma:
• Path of exported ChromQuest data
• Parameter Name
• Excel Macro File
• Excel Macro Name
Figure 69. Files page of the Advanced Method Options dialog box
The Parameter Name entry may be repeated to specify additional parameters. All parameters
must be enclosed in quotes, and must be separated by spaces and/or commas. For example, to
export “Area” and “Area Percent” from the C:\ChromQuest\Export directory, and then run
the “FormatData” macro from the C:\excel\xlstart\personal.xls spreadsheet, the entry would
look like the following:
“C:\ChromQuest\export”,”Area”,”AreaPerc”,”C:\excel\xlstart\personal.xls”,”Form atData”
Note The names of the parameters are not the same as the names shown on the Export
tab. Instead, the names used must be the same as the name used for the file extension for
the exported parameter file. For example, “Area Percent” becomes “AreaPerc”. For a list of
these file extensions see File Extensions Used for Data Export in help or in the “File
Extensions Used for Data Export” on page 77 of the manual.
The data is organized in Excel as follows: each parameter is in a separate Excel workbook, and
each wavelength channel of a parameter is in a separate sheet of a workbook.
Note Microsoft Data Access Components (MDAC) 2.5 or higher is required for ODBC
export. A version of this is on the ChromQuest CD-ROM under Updates.
Access automatically creates a database file named db*.mdb, but you can enter your
own database name if you prefer.
b. In the File New Database dialog box, click Create.
Access creates a new blank database and displays the following screen.
Figure 73. Microsoft Access – Blank database window
b. From the ODBC Data Source Administrator dialog box, click the System Data
Source Name (DSN) tab.
Figure 75. ODBC Data Source Administrator – System DSN dialog box
c. In the System DSN page, click Add to open the Create New Data Source dialog box.
d. From the Create New Data Source list box, select the Microsoft Access Driver. Then,
click Finish.
e. In the ODBC Microsoft Access Setup dialog box, type a unique Data Source Name
in the Data Source Name text box.
Figure 77. ODBC Microsoft Access 97 Setup dialog box
g. In the Select Database dialog box, select the .mdb file that you created earlier. Then,
click OK to return to the ODBC Microsoft Access Setup dialog box.
Figure 79. Select Database dialog box
h. Enter a description, if desired, (helpful for later use), and click OK to return to the
System DSN page of the set up.
Figure 80. ODBC Microsoft Access 97 Setup dialog box
The New System Data source has now been created, and appears in the list of System
Data Sources in the System DSN page.
i. In the System DSN page, click OK to exit the ODBC setup on the Server.
6. On the client PC, launch ChromQuest. Then, open the instrument that is networked to
the above Server.
7. Create a method that exports data to Microsoft Access:
a. From the Instrument window, choose Method > Peaks/Groups and ensure that the
method has a peak table.
b. Choose Method > Advanced, and then click the Data Export tab.
c. In the Export page, select the Export Enabled checkbox.
d. Select your export parameters.
e. In the Path for Export Files list box, browse to the appropriate directory on your
Server. The Server name is listed in the Instrument Configuration.
f. Select the Enable ODBC checkbox.
g. Select the Data Source Name that you created on the Server in step 5.
h. Type a name in the Table Name text box where you want the export parameters to be
entered.
If this table does not exist in your database, it is created during export. For details, see
the “ODBC Export to Microsoft Access” on page 82 and Help.
Figure 82. Data Export page of the Advanced Method Options dialog box
Once you save the method, each time you analyze a data file using this method, the
parameters specified are automatically exported to your Access database.
8. To view the exported parameters after the data file has been analyzed:
a. Open Microsoft Access.
b. Use Explorer to locate the appropriate folder.
c. Locate the *.mdb file you selected for Data Source Name. (In the above example, this
is db4.mdb, associated with Data Source Name Hank 2.)
d. Open this database file.
A list of existing tables in your database appears. In Figure 83, Hank4 was the table
name used to export parameters.
e. Select the appropriate table, and then click Open to view the exported parameters.
Figure 84. Microsoft Access – Table window
Graphics Export
Graphics export is a new feature provided in ChromQuest 5.0. Using this feature, you can
export images of chromatogram traces as Windows Metafile (WMF) graphics.
To export graphics
1. In the Instrument window, choose Method > Advanced. Then click the Graphics
Export tab.
2. The Graphics Export page appears. See Figure 85.
Figure 85. Graphics Export page of the Advanced Method Options dialog box
3. Select the traces to be exported as WMF files each time a data file is analyzed with the
method.
a. In the spreadsheet, select the Export check box and then type the filename to be used
for the export.
b. Follow the instructions below the spreadsheet.
The graphics window displays the current graphic for export. To change the graphic,
right-click the window and make a selection from the shortcut menu.
4. Select the path to be used for export of the graphics file.
ChromQuest saves the exported graphics with the file name <Data file name> + “ _ “ +
<Export Name> + “.wmf ”. See Figure 86.
Figure 86. Viewing the WMF file
Contents
• Sequence Header
• Sequence Records
• Action Record
• Example of ASCII Sequence File
• Example of ASCII Dual Tower Sequence File
Sequence Header
The next part of the text file contains sequence “header” information of the form:
<keyword>=<value>
For example:
DATAPATH=C:\CHROMQUEST\DATA
The following keywords are recognized as elements of the sequence header (Keywords are case
sensitive and should be typed as shown below):
CREATIONDATE=
LASTCHANGEDATE=
DESCRIPTION=
None of these keywords are mandatory; if a keyword does not exist in the text file, a default
value is used.
Sequence Records
After all sequence header elements have been specified, the rest of the text file consists of lines
specifying the records of the sequence table.
Record Elements
The records contain comma-delimited elements detailing the operation of that line of the
sequence. These lines are of the form:
RECORD = a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p
where
a = Sample ID (63 characters maximum)
b = Method name (63 characters maximum)
c = File name (63 characters maximum)
d = Calibration level (0 - x, 0 = no calibration)
e = Sample amount (real number > 0)
f = Internal standard amount (real number > 0)
g = Dilution factor (a.k.a. "Mult."; real number > 0)
h = Injection vial
i = Injection volume
j = Pretreatment file name
k = Fraction collector file name
l = Reserved; do not specify anything between the commas
m = Run type (see “Run Types” on page 96)
n = Action (Leave blank. Not used in ChromQuest)
o = Run description
p = Repetitions per vial (1 - 9)
Note If you wish to use a delimiter other than the comma, set the List Separator field in
the International settings in the Control Panel to the character that you wish to use. This
setting is consulted before the ASCII sequence is read in. You may only specify a single
character, such as the semicolon or the dash. You may use the space or tab character, but
remember that any character that is used for the separator cannot appear within a record
element.
Run Types
Decimal Hex Name
4 0x0000004 UnSpiked
8 0x0000008 Spiked
16 0x0000010 Spike 1 of 2
32 0x0000020 Spike 2 of 2
64 0x0000040 Duplicate 1
128 0x0000080 Duplicate 2
256 0x0000100 System Suit Start
512 0x0000200 System Suit End
1024 0x0000400 System Suit Std.
2048 0x0000800 Shutdown
4096 0x0001000 Begin Calibration
8192 0x0002000 End Calibration
16384 0x0004000 QC Standard
32768 0x0008000 Summary Start
65536 0x0010000 Summary End
131072 0x0020000 Summary Run
262144 0x0040000 Clear All Response Factors
524288 0x0080000 Clear Response Factors for this Level
1048576 0x0100000 Print Response Factors
2097152 0x0200000 Average Replicates
4194304 0x0400000 Clear Replicates
8388608 0x0800000 Begin Loop
16777216 0x1000000 End Loop
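The Run Type values are distinct powers of two, so they behave as bit flags; multiple run types on a single record appear to be combined by OR-ing (adding) the values. A sketch (the dictionary is a subset of the table above):

```python
# Run Type bit flags (subset of the table above).
RUN_TYPES = {
    "Begin Calibration": 0x0001000,
    "End Calibration":   0x0002000,
    "QC Standard":       0x0004000,
    "Shutdown":          0x0000800,
}

def combine(*names):
    value = 0
    for name in names:
        value |= RUN_TYPES[name]      # OR the flags together
    return value

rt = combine("Begin Calibration", "End Calibration")
```

For example, the value 16384 seen in the RECORD lines below corresponds to the QC Standard flag.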
Action Record
Action records relate to the previous Sequence Record only. Form:
ACTION=a,b,c,d,e
where
a = CONDITION:
0 Any Condition
1 Calibration
2 QC
3 System Suitability
4 Hardware Status
5 Conc. Limit
b = RESULT:
0 Pass
1 Fail
2 Below Limit
3 Above Limit
c = ACTION:
0 Abort
1 Pause
2 Reinject
3 Run User Program
4 Run Shutdown
5 Alarm
6 Goto
7 Restart System Suit
d = PARAMETER 1:
For Reinject and Goto - Rep Count
For Run User Program - Program Path and Name
e = PARAMETER 2:
For Goto - Goto Record Number
Note Regarding method name, file name, pretreat name and fraction collector file name:
If these items do not have path names embedded, the appropriate path from the header is
used (fraction collector files are assumed to reside in the pretreatment file path). All
elements of the record line MUST be present; however, any record element not of interest
can be skipped by specifying nothing between the commas. If you are creating the
ASCII file using Microsoft Excel, save the file in the *.CSV format. Sample ID must be in
the same cell as RECORD= to prevent a comma after the (=). Method and data paths
must be selected manually using Sequence Properties because the .CSV format adds a
comma to the path.
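If you generate the sequence file programmatically rather than through Excel, the CSV quirks described above do not arise. A sketch in Python, reusing the header and first record from the example that follows; the output filename "sequence.txt" is a placeholder:

```python
# Build a minimal ASCII sequence file directly, avoiding the Excel/CSV
# quirks described in the note above. Header values are taken from the
# example sequence; "sequence.txt" is a placeholder filename.
header = [
    "CREATIONDATE=",
    "LASTCHANGEDDATE=",
    "METHODPATH= D:\\System\\Methods",
    "PRINTREPORTS= NO",
    "DATAPATH= D:\\System\\data",
    "PRETREATPATH=",
    "SUMMARYPATH=",
]
# Record elements not of interest are skipped by leaving nothing
# between the commas.
records = [
    "RECORD=Samp,MULTICAL.MET,Multical001.dat,0,1.2,2.91,1.1,1,12,,,,16384,1,,3",
]
with open("sequence.txt", "w") as f:
    f.write("\n".join(header + records) + "\n")
```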
ASCII Sequence 1
CREATIONDATE=
LASTCHANGEDDATE=
METHODPATH= D:\System\Methods
PRINTREPORTS= NO
DATAPATH= D:\System\data
PRETREATPATH=
SUMMARYPATH=
RECORD=Samp,MULTICAL.MET,Multical001.dat,0,1.2,2.91,1.1,1,12,,,,16384,1,,3
RECORD=Samp,MULTICAL.MET,Multical002.dat,0,1.2,2.91,1.1,2,12,,,,,2,Record
Desc,3
RECORD=Samp,MULTICAL.MET,Multical003.dat,3,1.2,2.91,1.1,1,12,,,,16384,,,3
ACTION = 1,1,3,D:\CHROM\Program.EXE
ACTION = 1,0,2,4
RECORD=Samp,MULTICAL.MET,Multical004.dat,0,1.2,2.91,1.1,2,12,,,,,4,Record
Desc,3
RECORD=Samp,MULTICAL.MET,Multical005.dat,0,1.2,2.91,1.1,1,12,,,,16384,,,3
ACTION = 3,1,6,7 ,2
ACTION = 3,0,2,4
RECORD=Samp,MULTICAL.MET,Multical006.dat,6,1.2,2.91,1.1,2,12,,,,16384,3,,3
RECORD=Samp,MULTICAL.MET,Multical007.dat,0,1.2,2.91,1.1,1,12,,,,,2,,3
RECORD=Samp,MULTICAL.MET,Multical008.dat,0,1.2,2.91,1.1,2,12,,,,76,,Record
Desc,3
ACTION = 4,1,5
ACTION = 4,0,2,4
RECORD=Samp,MULTICAL.MET,Multical009.dat,9,1.2,2.91,1.1,1,12,,,,16384,1,,3
RECORD=Samp,MULTICAL.MET,Multical010.dat,0,1.2,2.91,1.1,2,12,,,,16384,2,,3
ASCII Sequence 2
CREATIONDATE=
LASTCHANGEDDATE=
METHODPATH = \\Qa-glisowski01\transfer
PRINTREPORTS = NO
DATAPATH = \\Qa-glisowski01\transfer\public
PRETREATPATH =
SUMMARYPATH =
TOWER=0
JL01.MET,FrontMultical001.dat,0,1.2,2.91,1.1,1,1,,,,16384,1,,3
JL01.MET,FrontMultical002.dat,0,1.2,2.91,1.1,2,1,,,,,2,Record Desc,3
JL01.MET,FrontMultical003.dat,3,1.2,2.91,1.1,3,1,,,,16384,,,3
ACTION = 1,1,3,D:\CHROM\Program.EXE
ACTION = 1,0,2,4
JL01.MET,FrontMultical004.dat,0,1.2,2.91,1.1,4,1,,,,,4,Record Desc,3
JL01.MET,FrontMultical005.dat,0,1.2,2.91,1.1,5,1,,,,16384,,,3
ACTION = 3,1,6,7 ,2
ACTION = 3,0,2,4
JL01.MET,FrontMultical006.dat,6,1.2,2.91,1.1,6,1,,,,16384,3,,3
JL01.MET,FrontMultical007.dat,0,1.2,2.91,1.1,7,1,,,,,2,,
JL01.MET,FrontMultical008.dat,0,1.2,2.91,1.1,8,1,,,,76,,Record Desc,3
ACTION = 4,1,5,
ACTION = 4,0,2,4,
JL01.MET,FrontMultical009.dat,9,1.2,2.91,1.1,9,1,,,,16384,1,,3,
JL01.MET,FrontMultical010.dat,0,1.2,2.91,1.1,10,1,,,,16384,2,,3,
CREATIONDATE=
LASTCHANGEDDATE=
METHODPATH = \\Qa-glisowski01\transfer
PRINTREPORTS = NO
DATAPATH = \\Qa-glisowski01\transfer\public
PRETREATPATH =
SUMMARYPATH =
JL01.MET,RearMultical001.dat,0,1.2,2.91,1.1,11,1,,,,16384,1,,3
JL01.MET,RearMultical002.dat,0,1.2,2.91,1.1,12,1,,,,,2,Record Desc,3
JL01.MET,RearMultical003.dat,3,1.2,2.91,1.1,13,1,,,,16384,,,3
ACTION = 1,1,3,D:\CHROM\Program.EXE
ACTION = 1,0,2,4
JL01.MET,RearMultical004.dat,0,1.2,2.91,1.1,14,1,,,,,4,Record Desc,3
JL01.MET,RearMultical005.dat,0,1.2,2.91,1.1,15,1,,,,16384,,,3
ACTION = 3,1,6,7 ,2
ACTION = 3,0,2,4
JL01.MET,RearMultical006.dat,6,1.2,2.91,1.1,16,1,,,,16384,3,,3
JL01.MET,RearMultical007.dat,0,1.2,2.91,1.1,17,1,,,,,2,,3
JL01.MET,RearMultical008.dat,0,1.2,2.91,1.1,18,1,,,,76,,Record Desc,3
ACTION = 4,1,5
ACTION = 4,0,2,4
JL01.MET,RearMultical009.dat,9,1.2,2.91,1.1,19,1,,,,16384,1,,3
JL01.MET,RearMultical010.dat,0,1.2,2.91,1.1,20,1,,,,16384,2,,3
Contents
• System Suitability Overview
• ChromQuest System Suitability Module
• System Suitability Setup
• Running a Suitability Test
• Suitability Reports
System suitability testing involves running a mixture containing the analytes present in your
samples and examining various parameters that describe the ability of the chromatograph
and column to separate those analytes. Based on the method and analysis to be performed,
the chromatographer must determine the importance of examining any given parameter.
Some important suitability parameters are described briefly below. Calculations used in the
system suitability test are given at the end of this manual.
Resolution
This checks the ability of the column to separate one analyte from others in the mixture. Two
peaks are considered resolved if the calculated R factor is greater than 1.5.
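The exact equations ChromQuest uses are given at the end of this manual. As an illustration of the 1.5 criterion, the commonly used resolution formula R = 2(t2 - t1)/(w1 + w2), with baseline peak widths, can be sketched as:

```python
# Resolution between two adjacent peaks using the common baseline-width
# formula: t is retention time, w is baseline peak width (same units).
# Illustrative only; see the equations section for ChromQuest's forms.
def resolution(t1, w1, t2, w2):
    return 2.0 * (t2 - t1) / (w1 + w2)

r = resolution(t1=4.0, w1=0.30, t2=4.6, w2=0.34)
print(f"R = {r:.2f}, resolved: {r > 1.5}")
```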
Repeatability
Injection of the same sample multiple times lets you determine if your chromatograph is
providing reproducible results. Generally, 5 to 6 repeatability samples are necessary to provide
adequate data for meaningful results. Repeatability can be determined through examination
of the % relative standard deviation (%RSD) for such parameters as peak area, height, or
concentration.
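The %RSD used here is the sample standard deviation divided by the mean, times 100. A sketch over five hypothetical replicate peak areas:

```python
import statistics

# Percent relative standard deviation (%RSD) of replicate injections:
# sample standard deviation over the mean, times 100.
def percent_rsd(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

areas = [10520, 10491, 10563, 10508, 10534]  # five hypothetical replicates
print(f"%RSD = {percent_rsd(areas):.2f}%")
```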
Theoretical Plates
Calculation of plate count is an important indication of column efficiency. Many
chromatographers like to monitor plate count as an indication of when to replace the column.
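As one example of a plate-count calculation (ChromQuest offers several variants, such as USP and JP; see the equations section), the common half-height formula N = 5.54 * (t / w_half)^2 can be sketched as:

```python
# Theoretical plates from retention time and peak width at half height,
# using the common half-height formula. One standard form among several;
# shown for illustration only.
def plates_half_height(t_r, w_half):
    return 5.54 * (t_r / w_half) ** 2

print(f"N = {plates_half_height(t_r=5.2, w_half=0.12):.0f}")
```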
1. Complete the System Suitability Setup dialog. For each peak of interest, select one or
more Parameters, and set Min, Max, and % RSD values for each. Additionally, set Noise
test and/or Drift test, enter start and end times for the test, along with threshold values.
2. Enter values for parameters such as Unretained Peak Time, required for calculation of
performance options. Select a method for calculation of the performance values. This is
done by choosing the Method > Advanced Method Options > Performance command.
Select Calculate Performance Parameters to enable the suitability calculations.
3. Create and run a sequence where system suitability standards are included at the
beginning of the sequence. For example, following USP protocols, five replicate standards
are run at the beginning of the sequence. These are designated as system suitability
standards in the sequence. At the end of the sequence, suitability calculations are made
and ChromQuest generates a system suitability report.
Note If you are performing a multi-wavelength analysis with one detector (UV2000,
UV3000, UV6000, Surveyor PDA, or Surveyor PDA Plus), you might want to label your
peaks (in the Peak Table) with additional information pertaining to their analysis
channels.
1. Choose the Method > System Suitability command to open the System Suitability
Setup dialog.
Figure 87. System Suitability Setup dialog box
2. Select the first peak to be used for calculations by highlighting it with the mouse in the
Compound list.
3. In the adjacent spreadsheet, click the first field in the Parameter column. A drop-down
list of available parameters is displayed. Select a parameter from the list. If you do not
wish to perform a suitability test on a given peak, simply leave the Parameter fields
blank. Similarly, if you do not wish to have one of the test criteria used (for example,
%RSD), leave it blank. Some parameters have a choice of calculation methods; these
parameters have the calculation method displayed after the parameter in parentheses
(for example, Plates/Meter (JP) indicates Plates/Meter calculated using the Japanese
Pharmacopoeia calculation). For details on these calculations, see the equations section.
4. For each parameter selected, enter a minimum value (Min), maximum value (Max), and
maximum allowed percent relative standard deviation (%RSD).
5. If you want a noise test to be performed, click the lower spreadsheet in the column
labeled Test. Select one of the noise calculations, Noise (rms), ASTM Noise (rms), or
6-sigma Noise (rms), from the drop-down list. If you select one of these tests, you must
enter a Start and Stop time for the test, and a Threshold value for the acceptable limit.
The RMS noise value for the portion of the chromatogram between the Start and Stop
times is calculated and compared to the Threshold value to determine whether the test
Passed or Failed. Note that the times you enter for the noise test should be representative
of a baseline area of your chromatogram where no peaks elute.
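The pass/fail logic of the noise test can be sketched as follows, computing the RMS deviation from the mean over a peak-free baseline segment; this is a simplified stand-in for ChromQuest's internal calculation, with made-up data:

```python
import math
import statistics

# RMS noise over a peak-free baseline segment: root-mean-square
# deviation of the signal from its mean, compared to a threshold.
def rms_noise(signal):
    mean = statistics.mean(signal)
    return math.sqrt(sum((y - mean) ** 2 for y in signal) / len(signal))

baseline = [0.02, -0.01, 0.03, 0.00, -0.02, 0.01, -0.03, 0.02]
threshold = 0.05
noise = rms_noise(baseline)
print(f"RMS noise = {noise:.4f} -> {'Passed' if noise <= threshold else 'Failed'}")
```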
6. If you want a drift test to be performed, click the lower spreadsheet in the column labeled
Test. Select Drift (mV/min) from the drop-down list. If you select this test, you must
enter a Start and Stop time for the test, and a Threshold value (in μV/min) for the
acceptable limit.
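The drift test can be approximated as the least-squares slope of the baseline signal versus time, compared against the threshold. Again, this is a simplified stand-in for ChromQuest's own equations, with made-up data:

```python
# Baseline drift as the least-squares slope of signal versus time
# (signal units per minute), a simplified stand-in for the drift test.
def drift(times, signal):
    n = len(times)
    mt = sum(times) / n
    ms = sum(signal) / n
    num = sum((t - mt) * (s - ms) for t, s in zip(times, signal))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

times = [0.0, 0.5, 1.0, 1.5, 2.0]          # minutes
signal = [0.00, 0.11, 0.19, 0.31, 0.40]    # detector units
print(f"drift = {drift(times, signal):.3f} units/min")
```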
7. After highlighting sections of the spreadsheet that you want to copy, press:
• CTRL+C to copy the highlighted section of the spreadsheet to the clipboard.
• SHIFT+INSERT to paste the clipboard item to where the cell is highlighted.
• SHIFT+ARROW keys to extend the highlighted selection of spreadsheet cells, or to move it.
8. When you have completed the System Suitability Setup, close the dialog box.
1. Choose Method > Advanced. Click the Performance tab to open the
Column/Performance page.
2. Enter your column information.
Figure 88. Column/Performance page of the Advanced Method Options dialog box
3. Select the Calculate performance parameters for this channel check box to enable the
suitability calculations for the method.
Note If you are performing a multi-detector analysis, you can select a different
calculation method for each detector.
For example, following USP standards, five replicate standards are run at the beginning of the
sequence. These are designated as system suitability standards in the sequence. At the end of
the sequence, suitability calculations are made, and ChromQuest generates a system
suitability report.
Figure 89. Sequence Suitability report
Run Type is selected by clicking on the Run Type field, then selecting the appropriate sample
type from the choices provided.
The first sample in your suitability set should be designated as Begin System Suitability.
Additional suitability standards should be designated as System Suitability Standards, and
the final sample in your suitability set should be designated as End System Suitability.
Multiple run types can be selected for a given sample.
A System Suitability Report is generated at the end of the set of suitability standards when the
sequence is run.
To view the Suitability Report on screen, choose Reports > View > Sequence Custom
Report from the menu bar. The System Suitability Report appears in a list of available
Sequence Custom Reports.
To print the Suitability Report, choose Reports > Print > Sequence Custom Report, and
select the System Suitability Report choice. Note that in order to have your report printed
automatically at the end of the sequence, you must have selected the Print Sequence Reports
option in the Sequence Properties dialog.
Suitability Reports
ChromQuest provides a default suitability report template. This report template can be used
as-is, or it can be customized in the Sequence Custom Report editor.
Figure 91. Default suitability report
Standard Reports
ChromQuest has a very powerful custom report editor. However, it is not necessary to create
a custom report in order to print results. The following standard report templates are
provided with the system to enable you to print any type of report you wish. However, these
report templates can be modified by the user from the Custom Report editor.
Contents
• Error Condition Flags
• Area % Report
• External Standard Report
• Internal Standard Report
• Normalization Report
• Summary Report
• Calibration Report
• QC Check Standard Report
• Duplicate Report
• Spike 1 Report
• Spike 2 Report
• System Suitability Report
In cases where “0.00” is reported, the value appears because the nature of the error
prevents further calculation.
Area % Report
This is an example of the Area % (Area%.srp) report template output.
Normalization Report
This is an example of the Normalized Report (Normalization.Srp) template output.
Summary Report
This is an example of a Sequence Summary (Summary.brp) report template output. This
template is included for compatibility with ChromQuest 2.x versions only.
Calibration Report
This is an example of the Calibration (Calibration.brp) report template output.
Duplicate Report
This is an example of the Duplicate (Duplicate.brp) report template output.
Spike 1 Report
This is an example of the Spike/Unspike (Spike 1.brp) report template output.
Spike 2 Report
This is an example of the Spike 2 of 2 (Spike 2.brp) report template output.
Spectral Options
ChromQuest provides special features to optimize your chromatographic analyses when
collecting UV-Visible scanning data from the UV3000 high-speed scanning detector, the
UV6000LP photodiode array detector (PDA), the Surveyor PDA, or the Surveyor PDA Plus
detector. The features are collectively referred to as Spectral Analysis. These features include
peak purity assessment for determining the presence of co-eluting impurities and spectral
library searches for confirming the identity of an analyte.
With a standard UV-Vis detector, the absorbance is measured at one wavelength (or
sometimes two wavelengths). The two-dimensional array of data plotted as absorbance versus
time is commonly called a chromatogram. The UV3000 scanning detector measures the
absorbance at multiple wavelengths in a fraction of a second. The photodiode array detectors
measure the absorbance at multiple wavelengths, simultaneously. Therefore, these detectors
can produce a 3-dimensional plot of absorbance versus time and wavelength.
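Conceptually, the 3D data set is a matrix of absorbances indexed by time and wavelength: slicing it one way yields a conventional chromatogram, the other way a spectrum. A sketch with made-up numbers:

```python
# Model PDA scan data as a matrix of absorbances (rows: time points,
# columns: wavelengths). All values here are made up for illustration.
wavelengths = [200, 210, 220, 230]   # nm
times = [0.0, 0.1, 0.2]              # minutes
absorbance = [
    [0.01, 0.02, 0.01, 0.00],
    [0.15, 0.40, 0.22, 0.05],
    [0.02, 0.03, 0.02, 0.01],
]

def chromatogram_at(wl):
    """Absorbance-versus-time trace at one wavelength."""
    col = wavelengths.index(wl)
    return [row[col] for row in absorbance]

def spectrum_at(t):
    """Absorbance-versus-wavelength trace at one time point."""
    return absorbance[times.index(t)]

print(chromatogram_at(210))   # [0.02, 0.40, 0.03]
print(spectrum_at(0.1))       # [0.15, 0.40, 0.22, 0.05]
```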
Contents
• Spectral Method Setup
• Spectral Views
• Custom Report
• Peak Table
• Spectral Analysis and Calculations
• Library Search Calculations
• Ratio Chromatogram Calculation
• Similarity Calculations
• Lambda Max/Min Calculations
• Peak Purity Calculations
Library
On the Library page of the Spectral Options dialog box, you can enter library parameters for
the spectral library searches that are performed automatically as part of your method. When
you perform a spectral library search “using method parameters”, these parameters are used.
Figure 92. Spectral Options – Library dialog box
Parameter Description
Library/Enabled Enter the spectral library to be searched, or select from available libraries by clicking in the
Library field and then clicking the file button. You can select more than one
library. To enable a library for searching, select its Enabled checkbox. If this list is
empty, or no library is enabled, no hits are returned when a search is performed.
Search parameters Enter the parameters to use for the search.
Wavelength range Enter the wavelength range to search.
Wavelength step Enter a step number for the search. Larger numbers make the search faster, but if you use
too large a step, spectral details might not be picked up. Because the spectral comparison
program compares spectra on a per-wavelength basis, it is best to collect the library spectra
with a step size of one.
Max hits Specify the number of hits that are reported in the results of a library search. Note that this
works in conjunction with the Similarity Threshold parameter to limit the number of hits
reported.
Similarity threshold Enter a number for the threshold of similarity. The library search results display only
matches whose similarity to the unknown exceeds this value.
Pre-filters
The options in this group allow you to specify search pre-filters that are applied to library
spectra prior to the test for similarity. All pre-filters are optional.
Retention time range When values are entered in these fields, ChromQuest limits its search to library spectra
obtained from peaks whose apexes fall within the specified retention time range. This
pre-filter is optional and may be left blank.
Lambda max When one or more of these values is specified, library search is restricted to those library
entries containing lambda max value(s) within 5nm of all of the specified values. Entries
without matching lambda max value(s) are automatically excluded from the search (no
similarity calculation is made). Entering values for lambda max is optional.
Compound name filter If you are searching only for spectra that contain a certain name, enter the name here. Only
spectra containing that name are searched.
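The interaction of the three pre-filters can be sketched as follows. The entry fields and the example library are hypothetical stand-ins; only entries that survive all filters would go on to the similarity calculation:

```python
# Sketch of library-search pre-filtering: retention-time window,
# lambda-max match within 5 nm, and compound-name substring.
# Entry fields and example data are hypothetical.
def prefilter(entries, rt_range=None, lambda_maxes=None, name=None):
    hits = []
    for e in entries:
        if rt_range and not (rt_range[0] <= e["rt"] <= rt_range[1]):
            continue
        if lambda_maxes and not all(
            any(abs(lm - x) <= 5 for x in e["lambda_max"]) for lm in lambda_maxes
        ):
            continue
        if name and name.lower() not in e["name"].lower():
            continue
        hits.append(e)
    return hits

library = [
    {"name": "caffeine",  "rt": 3.2, "lambda_max": [205, 273]},
    {"name": "ibuprofen", "rt": 6.8, "lambda_max": [222, 264]},
]
print([e["name"] for e in prefilter(library, rt_range=(3.0, 4.0), lambda_maxes=[272])])
```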
Purity
The Purity page allows you to set the parameters necessary to perform manual peak purity
calculations and peak purity calculations that occur automatically as part of analysis.
Figure 93. Spectral Options – Purity dialog box
Parameter Description
Purity calculations The Purity calculations group box contains the following parameters:
Wavelength range Specify the wavelength range over which the purity calculations are performed, for example,
from 200 nm to 400 nm.
Wavelength step Specify the wavelength spacing (in nm) used for purity calculations.
Scan threshold This value is the minimum absorbance value that a spectrum must have to be included in
the purity calculations.
Peak coverage The Peak Coverage parameter indicates the fraction of the peak used for purity calculations.
It is possible to allow the calculation to be done over the entire width of the peak (100%),
which ensures the greatest possibility of detecting an impurity. However, this has the
disadvantage of including data that has a relatively low signal to noise ratio, and for which
spurious results might be calculated. The default value is a peak coverage of 95%.
Background compensation Selecting this checkbox causes spectra to be corrected for background baseline absorbance
using the peak baseline prior to being used in the calculation of purity.
Per-peak spectrum calculations The result of these calculations is available in reports and as chromatogram
annotations. Deselecting these checkboxes speeds up analysis. If a checkbox is not selected
and the field appears in a run report, it is reported as zero.
Total purity Selecting this checkbox causes total peak purity to be calculated on a per-peak basis during
analysis. The result of this calculation is available in reports and as chromatogram
annotations. If this box is not selected and a report contains the field, a value of zero is
reported.
3 point purity Checking this box causes 3-point peak purity to be calculated on a per-peak basis during
analysis. The result of this calculation is available in reports and as chromatogram
annotations. If this box is unchecked and a report contains the field, a zero is reported.
Spectrum
The Spectrum page allows you to specify the types of filtering and processing to be performed
on spectra that are extracted from the 3D data during analysis as well as to spectra displayed in
the Spectrum View.
Note The processing specified on this page is performed prior to any use of the spectra in
the software, including display, searching and reporting.
Figure 94. Spectral Options – Spectrum dialog box
Parameter Description
Spectral filtering In this group box, you designate how spectral filtering is performed.
Filtering type Select the type of filtering for the spectral plot. Choices are: None, Smooth, 1st Derivative,
and 2nd Derivative.
Background correction If this checkbox is selected, a correction for the spectral background is made, for the
following features:
• Library search
• Adding a spectrum from the current file to a library
• Per-peak similarity calculations
• Spectral confirm
Per-peak spectrum calculations Selecting any of these checkboxes allows the indicated value to be calculated on a
per-peak basis during analysis. The result of this calculation is then available in reports and
as chromatogram annotations. Deselecting options that are not of interest speeds up
analysis. If a checkbox is not selected and the field appears in a run report, it is reported
as zero.
Similarity Checking this box causes the Peak Apex Similarity to the reference spectrum to be
calculated on a per-peak basis during analysis. The result of this calculation is then available
in reports and as chromatogram annotations. If this box is unchecked and a report contains
the field, a zero is reported.
Peak Apex Similarity applies only to named peaks and its use requires that a reference
spectrum be included in the peak table.
Up slope similarity Checking this box causes upslope similarity to be calculated on a per-peak basis during
analysis. Upslope similarity compares the peak apex spectrum to the spectrum at the peak
inflection point on the upslope side (to the left of the apex). The result of this calculation is
then available in reports and as chromatogram annotations. If this box is unchecked and a
report contains the field, a zero is reported.
Down slope similarity Checking this box causes downslope similarity to be calculated on a per-peak basis during
analysis. Downslope similarity compares the peak apex spectrum to the spectrum at the
peak inflection point on the downslope side (to the right of the apex). The result of this
calculation is then available in reports and as chromatogram annotations. If this box is
unchecked and a report contains the field, a zero is reported.
Lambda max Checking this box causes the lambda max (wavelength at which the highest
absorbance occurs) to be computed for each peak of each multi-chromatogram extracted
from the 3D data during analysis. The result of this calculation is then available in reports
and as chromatogram annotations. If this box is unchecked and a report contains the field, a
zero is reported.
Multi-Chromatogram
The Multi-Chromatogram page allows you to specify wavelength channels within the 3D data
for integration and quantitation. In addition, it allows you to select wavelengths to display in
the Multi-Chromatogram view. To create a Multi-Chromatogram channel, select an Enabled
checkbox. Then, type a wavelength in the Wavelength text box and a bandwidth in the
Bandwidth text box. The wavelength must be part of the scan data. After you have set up a
multi-chromatogram channel, you can disable it temporarily by deselecting the Enabled
checkbox. Choosing View > Spectral View > Multi-Chromatogram displays the
Multi-Chromatogram view. The Multi-Chromatogram view is not functional unless one or
more valid wavelengths are enabled in the Multi-Chromatogram Definition tab.
Figure 95. Spectral Options – Multi-Chromatogram dialog box
Parameter Description
Enabled To enable a wavelength, click the checkbox. To disable a
wavelength, click the checkbox again to remove the checkmark.
Only enabled wavelengths appear in the multi-chromatogram
view.
Wavelength Enter the wavelength you want to view or use as an analysis
channel or both.
Bandwidth Specify the nm range to be averaged in generating the analog
signal for each channel.
Subtract baseline chromatogram To enable baseline subtraction, select this check box, and then
type a baseline wavelength in the Wavelength box and a bandwidth for this wavelength in
the Bandwidth box.
Ratio
The Ratio page allows you to set the displays of the Ratio Plot View. The Ratio View displays
two PDA wavelength channels and the ratio of those two channels. These may be viewed
during real-time acquisition as well as during post-run analysis. The flat tops on the ratio
peaks are a preliminary indication of peak purity.
Figure 96. Spectral Options – Ratio dialog box
Parameter Description
Ratio plot These controls specify the wavelengths of the ratio multi-chromatogram channels. The
extracted chromatograms are centered about the specified wavelengths.
Channel 1, Channel 2
• Wavelength: Enter the wavelength of the chromatogram channel.
• Bandwidth: Enter the nm range to be averaged in generating the chromatogram channel.
Threshold Enter the ratio threshold. The threshold is the minimum absorbance value required in the
chromatograms of both channels for calculation of the ratio. It is a method of eliminating
ratio calculation at low absorbance values to prevent calculation for baseline noise. If the
threshold value is not met, the ratio plot is zero.
Max Plot Lambda Max Range This allows the user to restrict the wavelength range over which the Spectrum
Max Plot is calculated. The default is the detector acquisition range.
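The thresholded ratio described above can be sketched as a pointwise division that is forced to zero wherever either channel falls below the threshold. The data values are made up; a pure peak shows as a flat top in the ratio trace:

```python
# Ratio chromatogram: pointwise ratio of two wavelength channels, zeroed
# wherever either channel's absorbance is below the threshold, to avoid
# computing ratios on baseline noise. Illustrative data.
def ratio_chromatogram(ch1, ch2, threshold):
    return [
        (a / b) if (a >= threshold and b >= threshold) else 0.0
        for a, b in zip(ch1, ch2)
    ]

ch1 = [0.001, 0.20, 0.42, 0.18, 0.002]
ch2 = [0.002, 0.10, 0.21, 0.09, 0.001]
print(ratio_chromatogram(ch1, ch2, threshold=0.01))
```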
Spectral Views
The Spectral Views menu provides a variety of ways to view the scan data. This menu is
accessed by choosing View > Spectral View from the Instrument window menu, or by
clicking on Views in the Spectral View window menu bar. A right-mouse click within any of
the view windows provides a unique menu of options for that view. The following views are
available:
• Mixed View
• Chromatogram View
• Max Plot
• Multi-Chromatogram View
• Spectrum View
• Ratio View
• 3D View
• Contour View
Mixed View
The first, second, and third quadrants of this view display the Contour, Chromatogram, and
Spectrum views, respectively. By default, the fourth quadrant of this view displays the peak
purity view. You can change the view in the fourth pane by choosing a view from the Options
menu that is accessed by clicking on Options in the Mixed View toolbar.
Figure 97. Spectral View – Mixed View window
To zoom in on a portion of the contour plot, chromatogram or spectrum, hold the left mouse
button down, move the mouse over the plot until the area of interest is highlighted, and
release the mouse button.
To unzoom to the full plot after multiple zooming operations, use CTRL+Z, or use
SHIFT + double-click in the window, or click the right-mouse button anywhere in the
window and select Full Unzoom from the pop-up menu.
The Options button presents options that enable you to toggle the view for the lower right
pane of the mixed view.
Select Peak
To select the reference spectrum for the purity calculation, make sure the data has been
analyzed. (If you are not sure, click the analyze button on the toolbar.) Then, click Actions >
Select Peak for Similarity / Purity Display from the Mixed View toolbar. A dialog appears
that instructs you to select a peak by holding down the Ctrl button and then clicking on a
peak on the Contour Plot or the Chromatogram view. The retention time of the reference
spectrum is displayed along with a value indicating peak purity.
Figure 100. Select Peak for Similarity / Purity Display dialog box
You can continue to select or change the peak. When you are finished, click Close.
The Peak Purity view displays the purity profile for a chromatogram extracted from the 3D
data. The view displays purity information for the peak that has been chosen as well as the
peak data itself. The shape of the purity profile (“hat”) indicates the presence or absence of
impurity in the peak of interest. For example, the less flat the profile is, the more likely an
impurity is present. However, a symmetrical profile, mostly flat in the middle but steeply
falling off at the edges, can indicate a fairly pure peak. The fall-off at the edges is due to the
lower signal-to-noise ratio of the spectra there. The pane is blank until the data has been
analyzed, and a peak has been selected for the purity calculation.
Figure 101. Peak Purity view
Chromatogram View
The Chromatogram View can be displayed alone or as part of the Mixed View display.
Generate a chromatogram from the Contour plot as described in the section on contour plot.
Right-click within the chromatogram window to display the pop-up menu. This menu
contains the same options as the basic chromatogram window for all detector types, and in
addition enables you to overlay chromatograms from different wavelengths and change the
Gallery view.
Overlay Chromatograms
This option is available when you have a chromatogram view and a contour plot
displayed simultaneously (Mixed View). Select Overlay Chromatograms to add
chromatograms from different wavelengths to the view whenever you slide the wavelength
selection cursor on the contour plot to a new wavelength.
Max Plot
A Max Plot is a chromatogram with each point plotted at its maximum absorbance. This plot
gives an indication of the appearance of the chromatogram when the wavelengths are
optimized for each peak.
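In other words, for each time point the Max Plot takes the maximum absorbance across all collected wavelengths. With made-up scan data:

```python
# Max Plot: at each time point, plot the maximum absorbance across all
# collected wavelengths. Scan data values are illustrative.
scan_data = [            # one row of absorbances per time point
    [0.01, 0.02, 0.01],
    [0.10, 0.35, 0.20],
    [0.03, 0.02, 0.04],
]
max_plot = [max(row) for row in scan_data]
print(max_plot)   # [0.02, 0.35, 0.04]
```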
Figure 102. Max Plot view
Multi-Chromatogram View
This displays multiple chromatographic plots of absorbance versus time, each at a different
wavelength. Maximize the Spectral display and choose View > Spectral Views >
Multi-Chromatogram (or use the Spectral toolbar/Views) to display all of the
chromatograms in their respective channels that were specified in the Method > Spectral
Options > Multi-Chromatogram tab.
Spectrum View
This view displays the spectrum associated with a time point on the chromatogram. The
spectrum displayed can be changed from the Mixed View by dragging the X-axis handle
across the Contour plot.
Figure 104. Spectrum view
Right-clicking in the Spectrum view opens a shortcut menu that includes the following
options:
• Spectrum Properties
Right-click in the Spectrum view to display a shortcut menu. Then, choose Properties to
access the Data Graph Properties dialog box. The Data Graph Properties dialog box
enables you to add another trace to the view, or change scaling. It also lets you selectively
remove overlaid traces from the view.
• Spectrum Background Correction
Select the right mouse click/Operations menu item to toggle the background correction
on and off.
If the current spectrum has been extracted from an integrated peak, select this option to
correct the displayed spectrum for background based on the working chromatogram.
Refer to the “Spectral Analysis and Calculations” on page 168 for details on how this
calculation is performed.
Background Correction may only be performed once on a spectrum. If a second
background correction is attempted on an already corrected spectrum, a message box is
displayed and the operation is ignored.
If the current spectrum has not been extracted from an integrated peak of the working
chromatogram, selecting this menu item will have no effect on the spectrum.
Background subtraction must always be the first operation performed on a spectrum. If a
background correction is attempted on a spectrum after another operation has already
been performed (including operations specified in the Spectrum tab of Spectral Options), a
message box is displayed and the background correction request is ignored.
• Interpolate Spectrum
Select the right-mouse click > Operations menu item to toggle the interpolate option
on and off.
Selecting this menu item causes a 10:1 interpolation to be performed on the spectrum.
Refer to the “Spectral Analysis and Calculations” on page 168 for details on how the
values are computed.
• Export
Selecting this menu item displays a File Save As dialog. When a valid filename is
entered and OK is pressed, the currently displayed spectrum is exported as an ASCII data
file. Refer to Chapter 4, “Data and Graphics Export,” for details on the data file format.
• Overlay Spectra
You can overlay spectra using the Actions > Overlay Spectra command from the
Mixed View toolbar. When you select this option, a dialog appears where you can enter
the retention time from which you want to extract a spectrum. You can also overlay
spectra quickly by moving the wavelength cursor in the Contour or Chromatogram view
to the desired wavelength. To remove the spectra you have added to the view, use the
Clear Overlays command from the right mouse click menu in the Spectrum view. You
can also clear selected spectra using the Trace Setup dialog accessed by doing a right
mouse click in the view and then selecting Properties.
Figure 105. Overlay Spectra dialog box
Ratio View
The Ratio view displays two spectral wavelength channels and the ratio of those two channels.
These may be viewed during real-time acquisition as well as during post-run analysis. The flat
tops on the ratio peaks are a preliminary indication of peak purity. The ratio wavelengths and
parameters are set in the Method >Spectral Options > Ratio tab.
Right-click within the window to display the pop-up context menu. The context menu is the
same as for a standard chromatogram graph window.
3D View
The 3D View provides a three-dimensional view of absorbance versus time and wavelength.
Wavelengths of appreciable absorbance and interference, which may be invisible in a single
wavelength plot, are easy to locate with the 3D View. The plot can be elevated and rotated
around its axis for display from any angle.
Figure 107. 3D view
Note During a run, the user must manually refresh this display. It does not update
automatically.
Right-click inside the 3D view to display the pop-up menu that gives you access to plot
rotation and axis setup features.
Hold down the left mouse button and move the cursor in the direction you wish to move the
plot. The plot moves as you move the cursor. The movement option remains in effect until
you turn it off. When finished, click the right mouse button, and de-select the movement
option. If you wish to return to the original view, select the Reset command.
Note These values can be viewed or changed using the 3D > Properties dialog.
3D Properties
This dialog allows you to set the properties of the 3D plot.
Figure 109. 3D Data Graphic Properties – General dialog box
Parameter Description
Style
Color or Grayscale Select Color or Grayscale for the plot.
Wire Mesh Checking the Wire Mesh box causes the data to be rendered as a wire frame plot, rather
than as a solid fill plot.
Colors
Range Select how you want the coloration on the plot to appear. When Light & Dark Range is
selected, alternating light and dark bands are used. When Full Spectrum is selected, then a
continuous color spectrum is used.
Background Select the color for the background of the plot.
Axes Select the color for the plot axes.
Display This specifies the relative quality of the displayed contour plot. Lower Display Detail results
in faster drawing of the plot.
Print This specifies the relative quality of the printed contour plot. Lower Print Quality results in
faster printing.
Viewing Use these fields to view current rotation settings, or to set them manually.
Light source Selecting this option causes the plot to be shaded as if an external light source were shining
upon it.
Rotation This reports the current level of rotation (front to back) in the aspect position of the plot.
When a new value is entered, the plot is redrawn to reflect the new value upon exiting the
dialog.
Elevation This reports the current level of tilt (forward or backward) in the aspect position of the plot.
When a new value is entered, the plot is redrawn to reflect the new value upon exiting the
dialog.
Roll This reports the current level of roll (side to side) in the aspect position of the plot. When a
new value is entered, the plot is redrawn to reflect the new value upon exiting the dialog.
Zoom This reports the current level of magnification in the plot. When a new value is entered, the
plot is redrawn to reflect the new value upon exiting the dialog.
Performance
Use zoom/rotate When this box is checked, the plot is temporarily replaced by a box during zoom and
bounding box rotation operation. When the operation is completed, the plot is redrawn. Checking this
box increases performance on computers with slower graphic subsystems.
3D Properties Axis
This dialog lets you set up axis limits for your 3D plot.
Figure 111. 3D Data Graphic Properties – Axis Setup dialog box
Parameter Description
Limits Enter the limits for the 3D plot.
Time Select the Autoscale checkbox if you want to have the software
automatically scale the time axis to the maximum values. To enter
a manual range, de-select the autoscale box, and add your own
limits, or click the Get Limits button to enter the limits displayed
on the current 3D graph.
Wavelength Select the Autoscale checkbox if you want to have the software
automatically scale the wavelength axis to the maximum values.
To enter a manual range, de-select the autoscale box, and add
your own limits, or click the Get Limits button to enter the limits
displayed on the current 3D graph.
Absorbance Select the Autoscale checkbox if you want to have the software
automatically scale the absorbance axis to the maximum values.
To enter a manual range, de-select the autoscale box, and add
your own limits, or click the Get Limits button to enter the limits
displayed on the current 3D graph.
Labels You can customize the labeling of the 3D plot using the
parameters in this area. You can select font, size, color and style
using the selections provided.
While the option is active, you can move the plot in the designated way by holding down the
left mouse button and moving the mouse in the desired direction on the plot. When you have
the plot in the desired position, click the right mouse button and turn off the option by
selecting it again (checkmark removed). If you want to return to the original plot view, click
the right mouse button and select Reset.
Contour View
The Contour Plot (also referred to as an Isoabsorbance Plot) provides an aerial view of the
absorbance of the sample at each wavelength versus time. The contour view supplies quick
and easy-to-assimilate information about those wavelengths at which the sample exhibits
appreciable absorbance. With contour view, it is also possible to generate a Chromatogram
View for an individual wavelength and a Spectrum View for a given point in time.
Figure 114. Contour view
Right-click inside the window to display the pop-up menu. Select Properties to display the
Contour Properties dialog box.
To generate a spectrum view from the Contour view of Mixed View or Mixed View w/ 3D
1. Choose View > Spectral Views > Mixed View to display the Contour Map,
Chromatogram, and Spectrum.
2. Move the cursor to the triangle-shaped handle located on the time axis of the Contour
Map and press the left mouse button.
3. Drag the cursor to the desired peak and release the mouse button.
The Spectrum associated with the specified retention time value is displayed in the
Spectrum View.
Contour Properties
A right mouse click anywhere on the contour plot, followed by selecting the Properties
button displays a dialog where you can select the way the contour plot is displayed.
Figure 115. Contour Graph – General dialog box
Parameter Description
Style Select how you want the plot to appear, Grayscale or Color.
Colors
Range Use this to select how the coloration of the plot is to be displayed.
When Light & Dark Range is selected, alternating light and dark
bands are used. When Full Spectrum is selected, then a
continuous color spectrum is used.
Background Select the color to be used for the background of the plot.
Display
Detail This specifies the relative quality of the displayed contour plot.
Less Display Detail results in faster rendering of the plot.
Print
Quality This specifies the relative quality of the printed contour plot.
Coarse Print Quality results in faster printing.
Cursors The values in these boxes reflect the current contour cursor
positions. When you change the values, the cursors on the plot are
changed when you exit the dialog or when you click the Apply
button.
Time This specifies the time position (X Value) of the cursor. When a
value is entered, the X cursor is updated to the new position upon
exiting the dialog.
Wavelength This specifies the wavelength position (Y Value) of the cursor.
When a value is entered, the Y cursor is updated to the new
position upon exiting the dialog.
Bandwidth This control specifies the wavelength band that is averaged when a
chromatogram is extracted from the contour plot. The extracted
chromatogram is an average of the absorbances at each wavelength
in the wavelength band. The wavelength band is equal to the
selected wavelength (see above) +/- one half of the bandwidth.
Parameter Description
Limits
Time Select the Autoscale checkbox if you want to have the software
automatically scale the time axis to the maximum values. To enter
a manual range, de-select the autoscale box, and add your own
limits, or click the Get Limits button to enter the limits displayed
on the current contour graph.
Wavelength Select the Autoscale checkbox if you want to have the software
automatically scale the wavelength axis to the maximum values.
To enter a manual range, de-select the autoscale box, and add
your own limits, or click the Get Limits button to enter the limits
displayed on the current contour graph.
Absorbance Select the Autoscale checkbox if you want to have the software
automatically scale the absorbance axis to the maximum values.
To enter a manual range, de-select the autoscale box, and add
your own limits, or click the Get Limits button to enter the limits
displayed on the current contour graph.
Labels You can customize the labeling of the contour plot using the
parameters in this area. You can select font, size, color and style
using the selections provided.
Spectral Utilities
The Utilities menu allows displayed spectra to be printed, copied, saved or exported. The
Utilities menu is available from the right mouse click menus of various Spectral views.
• Print
Select Print to automatically print the currently displayed spectra.
• Copy to Clipboard
Select Copy to Clipboard to copy the displayed spectra to the Clipboard. The contents of
the Clipboard may then be pasted into other software.
• Save trace
Select Save trace to save the spectrum to a file with an .spc extension for later inclusion in
a library or report.
Parameter Description
Spectrum Name This column displays identifications for each of the spectra added
to the table. For spectra extracted from the 3D data, the
identification includes the time. For spectra extracted from a
chromatogram based on a peak name, the identification includes
the peak name. For spectra loaded from a file, the identification
includes the retention time.
A check mark next to the colored line for a spectrum indicates that
the spectrum serves as the Reference Spectrum for the similarity
calculation.
Until a reference spectrum has been selected the similarity for all
spectra in the table is zero.
Parameter Description
Spectrum Source This column displays the source of the spectrum. (Current Data)
refers to working spectra extracted from the current 3D data.
When spectra come from a stored data file, this column displays
the source filename.
Set Reference Pressing this button designates the spectrum in the currently
highlighted row of the table as the new reference spectrum. The
similarities for all of the rows are recalculated based on the new
reference spectrum.
Print Pressing this button outputs a simple text report representation of
the table to the default printer.
Properties Pressing this button displays a dialog with parameters related to
the calculation of similarity.
Note During analysis, it is possible that a portion of this wavelength range is outside the
range of the acquired data. In that case, the wavelength range is truncated to the limits of
the acquired data.
Parameter Description
Spectrum Data Source Click the arrow in this field to select Current Data,
Spectrum File, or Named Peak.
Note Fields in existing library entries may be edited by selecting the fields with the cursor.
Right-click a row in the library table to display a pop-up menu used to cut, copy, paste,
and insert or delete lines from the library as necessary.
Spectrum Information
If you chose to enter a spectrum into your library from the current data file, this dialog
appears where you must select the retention time at which you wish to select the spectrum.
Figure 120. Spectrum Information – Spectrum at Retention Time dialog box
Note To add the 1st or 2nd Derivative of a spectrum to the library, select the appropriate
filter in the Method > Spectral Options > Spectrum tab and repeat steps 3 and 4.
1. When all of the spectra are saved as .spc files, choose File > Library > New to display the
Library Definition dialog box and create a new library. Alternatively, choose
File > Library > Open to add spectra to an existing library.
2. Click in the Spectrum File cell of row 1 to display the Open dialog box. Double-click the
appropriate .spc file in the list box. The .spc file name is entered into that cell and the
associated spectrum is simultaneously displayed. A component name and comment may
be entered in the appropriate columns. Repeat this procedure with subsequent rows in the
table until all spectrum files are entered into the library.
3. Choose File > Library > Save As and enter a name for the library. The .lib extension is
automatically appended.
1. In the Contour map of the Mixed View window, drag the vertical cursor to the apex of
the peak of interest to display the corresponding spectrum in the spectrum pane.
2. Choose Actions > Search library. Click either Method to use the library parameters
from the method, or Quick to enable you to modify the search parameters. Before you do
the search, make sure you have either selected a library in your method, or have opened a
library to do a quick search.
Click Search Now to display the Library Search Results window, showing the three
closest matches in the specified library. When appropriate, click the >> or << button to
display additional matches.
Parameter Description
Hits Displayed Select the number of hits you wish to display. Hits are displayed in order of similarity.
Search Select Method if you want to use the parameters from the current method for your search.
Select Quick if you want to enter or change the parameters yourself before the search.
Search Now When you click this button, a library search is performed on the selected spectra using the
parameters, and the number of hits requested is displayed.
Display Parameters Click this button if you want to display the parameters. If you always use the method
parameters, you may not wish to display them when you search.
Search Spectrum Click the arrow to select a new spectrum to search on. You can choose from the current
data, or from a stored spectrum file. The current spectra source is displayed.
Parameter Description
Search Parameters This tab contains search parameters. If you have elected to use the Method search
parameters, the parameters displayed are those in your current method in the Spectral
Options > Library tab. If you have elected to do a Quick search, the parameters are default
parameters.
• Wavelength range
These specify the wavelength range over which the library search is performed.
• Wavelength step
This specifies the data point spacing to be used when a library search is performed.
• Max hits
This specifies the number of hits that is reported in the results of a library search. Note
that this works in conjunction with the Similarity Threshold parameter to limit the
number of hits reported.
• Similarity threshold
Specifying a value for this control will cause the library search results to only display
matches whose similarity to the unknown exceeds this value. Note that this control
works in conjunction with the Max Hits parameter to limit the number of hits reported.
• Library
Displays the library from the method or for Quick enables you to select a library to be
used.
Parameter Description
Pre-Filters When Quick is selected as the Search Mode, the items on this tab allow the user to specify
search pre-filters that are performed on library spectra prior to the test for similarity. All
pre-filters are optional. If Method is selected as the Search Mode, the items on this tab are
read-only and reflect the parameter values on the Library tab of Spectral Options.
• Retention time range
When a Retention Time Range is specified, library search is restricted to those library
entries whose retention time is within the specified range. Entries outside this range are
automatically excluded from the search (no similarity calculation is made). Entering a
value for this is optional.
• Lambda max
When one or more of these values is specified, library search is restricted to those library
entries containing a lambda max within 5 nm of one of the specified values. Entries
without a matching lambda max are automatically excluded from the search (no
similarity calculation is made). Entering values for this is optional.
• Compound name filter
When a Compound Name Filter is specified, library search is restricted to those library
entries whose name contains the specified string as a case-insensitive sub-string. Entries
without a matching sub-string are automatically excluded from the search (no similarity
calculation is made). Entering a value for this is optional.
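The three pre-filter rules above can be sketched in Python. The `LibraryEntry` structure and the function names below are illustrative only, not part of ChromQuest:

```python
# Sketch of the library-search pre-filters: retention time range,
# lambda max within 5 nm, and case-insensitive name sub-string.
# All data structures here are hypothetical.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class LibraryEntry:
    name: str
    retention_time: float        # minutes
    lambda_maxima: List[float]   # nm

def passes_prefilters(entry: LibraryEntry,
                      rt_range: Optional[Tuple[float, float]] = None,
                      lambda_max: Optional[List[float]] = None,
                      name_filter: Optional[str] = None) -> bool:
    """Return True only if the entry meets every specified pre-filter."""
    if rt_range is not None:
        lo, hi = rt_range
        if not (lo <= entry.retention_time <= hi):
            return False
    if lambda_max:
        # the entry must contain a lambda max within 5 nm of a specified value
        if not any(abs(em - lm) <= 5.0
                   for lm in lambda_max for em in entry.lambda_maxima):
            return False
    if name_filter is not None:
        # case-insensitive sub-string match on the compound name
        if name_filter.lower() not in entry.name.lower():
            return False
    return True
```

Entries that fail any specified filter are skipped entirely, so no similarity calculation is spent on them.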
Custom Report
A variety of spectral information can be placed in a custom report. These items are inserted in
the report by placing the cursor at the location where you want to insert the item, and then
right-clicking to access the custom report menus.
3D Data Graph
The 3D View provides a three-dimensional view of absorbance versus time and wavelength.
Wavelengths of appreciable absorbance and interferences, which may be invisible in a single
wavelength plot, are easy to locate with the 3D View. Select this function to automatically
enter the 3D map into the report. Click the right-mouse button within the 3D map and select
Properties to display the 3D Properties dialog box and enter appropriate changes.
The 3D map can be elevated and rotated around its axis for display from any angle. These
functions work the same as in the 3D view window.
Changing parameters for the contour graph in a custom report works exactly as in the
Contour view.
To modify the parameters of the report, right-click in the Library Search Report table, and
select Properties. Figure 123 is an example of the dialog box that appears.
Figure 123. Library Search Report Properties dialog box
Parameter Description
Search Libraries for This specifies what peaks should be searched for the report.
Apex spectra of peaks When selected, the Specific Spectrum button is disabled.
• Detected on
This allows the user to select the channel or channels from
which peaks are detected.
• Limit to named peaks only
This allows the user to restrict the peak selection (made
above) to named peaks. When checked, only peaks named in
the peak table are searched. When this box is unchecked, all
detected peaks are searched.
• Limit to time range
This allows the user to restrict the peak selection (made
above) to a time range. When checked, only peaks within the
specified time range are searched. When this box is unchecked,
the Time Range edit fields are disabled.
• Time range
This allows the user to restrict the peak selection (made
above) to a time range. When the Limit to time range box is
unchecked, these fields are disabled.
A specific spectrum When selected, the other controls of this group are disabled.
Enter or select the library you wish to include in the report, and then click OK.
Purity Report
Selecting this option inserts a Purity Report into your custom report. To specify parameters
for the report, do a right-click in the report table, and then choose Properties. Figure 125 is
an example of the dialog box that appears.
Figure 125. Purity Report Properties dialog box
Parameter Description
Report peak purity
Detected on This allows you to select the channel or channels from which
peaks are detected.
Limit to named peaks This allows you to restrict the peak selection (made above) to
only named peaks. When checked, only peaks named in the peak
table are searched. When this box is unchecked, all detected
peaks are searched.
Limit to time range This allows the user to restrict the peak selection (made above) to
a time range. When checked, only peaks within the specified time
range are searched. When this box is unchecked, the Time Range
edit fields are disabled.
Time range This allows the user to restrict the peak selection (made above) to
a time range. When the Limit to time range box is unchecked,
these fields are disabled.
Graph
Graph Properties Pressing this button displays the graph properties dialog as found
on the standard graph.
Height This specifies the relative height of a search results graph. A value
of 100% corresponds to a standard sized graph. A larger value may
be selected to provide a more detailed plot.
Spectrum Report
The Spectrum report displays a table showing spectra extracted from peaks of the
chromatogram. Based on the selection on the Spectrum tab of Spectral Options, the report
contains either the apex spectrum, an average of the upslope, apex and downslope spectra, or
several averaged spectra. To set parameters for this report, right-click the table. Then, choose
Properties. Figure 126 is an example of the dialog box that appears.
Figure 126. Spectrum Report Properties dialog box
Parameter Description
Report peak spectra
Detected on This allows you to select the channel or channels from which
peaks are detected.
Limit to named peaks This allows you to restrict the peak selection (made above) to
only named peaks. When checked, only peaks named in the peak
table are searched. When this box is unchecked, all detected
peaks are searched.
Limit to time range This allows the user to restrict the peak selection (made above) to
a time range. When checked, only peaks within the specified time
range are searched. When this box is unchecked, the Time Range
edit fields are disabled.
Time range This allows the user to restrict the peak selection (made above) to
a time range. When the Limit to time range box is unchecked,
these fields are disabled.
Parameter Description
Report information Checking any of these controls indicates that the indicated value is
calculated and printed to the right of the graph. Deselecting
options that are not of interest speeds up analysis.
Peak area Checking this indicates that the peak area should be printed to the
right of the spectrum graph.
Lambda max Checking this indicates that the three largest lambda max values
should be printed to the right of the spectrum graph.
Lambda min Checking this indicates that the three largest lambda min values
should be printed to the right of the spectrum graph.
Purity Checking this indicates that total peak purity should be printed to
the right of the spectrum graph.
3 Point Purity Checking this indicates that 3-point peak purity should be printed
to the right of the spectrum graph.
Similarity Checking this indicates that peak apex similarity to the reference
spectrum should be printed to the right of the spectrum graph.
The use of this value requires that a reference spectrum be
included in the peak table. See Spectral Analysis and Calculations
for details.
Spectral display The controls in this group specify the content and labeling of the
spectrum graphs.
Show lambda maxima Checking this box annotates each spectrum graph with the three
largest lambda max absorbances.
Show lambda minima Checking this box annotates each spectrum graph with the three
largest lambda minima absorbances.
Background correction Checking this option causes each apex spectrum to be corrected
for background using the chromatographic baseline prior to being
used elsewhere in analysis. Refer to “Spectral Analysis and
Calculations” on page 168 for details on the formula used.
Filtering type This allows the user to specify a mathematical filtering function to
be performed on all spectra extracted from the 3D data during
analysis. Refer to “Spectral Analysis and Calculations” on
page 168 for details on the formula used for each filter.
Graph height This specifies the relative height of the spectrum graph. A value of
100% corresponds to a standard sized graph. A larger value may
be selected to provide a more detailed plot.
Graph properties Pressing this button displays the graph properties dialog as found
on the standard Spectrum graph.
Peak Table
When using the Spectral option, certain columns are added to the peak table that enable you
to analyze peaks from the PDA or scanning detector.
There is one Peak/Group Table per detector. To analyze a peak at multiple wavelengths, you
must enter the peak multiple times in the Peak Table, and then select the appropriate Analysis
Channel for each entry.
For convenience, you might want to label each entry for a peak with a distinguishing label
that helps you identify its associated Analysis Channel.
Figure 127. Peak table
Parameter Description
Detection Select the basis for the identification of the peak. If you select Ret Time, only the retention
time is used for identification of the peak. If you select Ret Time with Spectral Confirm,
the Similarity of the peak’s spectrum to that of a designated reference spectrum is used in
addition to the retention time as the basis of peak identification.
Spectrum If you want Similarity to be used as a basis for peak identification, then click the arrow to
the right of this field to specify the stored reference spectrum to be used for comparison.
During identification, this reference spectrum is compared to the peak apex spectrum and a
similarity index is computed. A peak is considered identified if this calculated similarity
index is at least the value specified in the Similarity column of the peak table.
If Similarity is not specified as a basis for peak identification, then this field is ignored.
Similarity If Similarity is specified as a basis for peak identification, then this field specifies required
minimum similarity index for a peak to be considered identified. During identification, the
reference spectrum (see previous section) is compared to the peak apex spectrum and a
similarity index is computed. A peak is considered identified if the calculated similarity
index is at least the value in this field.
If Similarity is not specified as a basis for peak identification, then this field is ignored.
Parameter Description
Analysis Channel Specify which wavelength channel is to be used for analysis of the peak. The choices are
those specified in method for the detector and shown in the Analysis Channel list box. For
the PDA detectors, you can analyze data on any of the multi-chromatogram channels
within the scan range, as well as the three discrete channels.
General
In order to maintain accuracy during the application of multiple operations, all calculations
are performed using double-precision floating-point numbers.
Multi-Chromatogram Channels
The following apply to multi-chromatogram channels:
• In the Channel Selection drop down list, each multi-chromatogram channel has a unique
identifier that includes the wavelength and bandwidth of the channel.
• Each Multi-Chromatogram Channel has a separate Integration Events Table and Manual
Integration Fixes Table associated with it.
• All Multi-Chromatogram Channels and discrete channels share a common peak and
group table, a common Export tab, and a common Column/Performance tab. Within the
peak and group tables, any channel can be selected as the analysis channel for quantitative
information.
• When an analysis is performed, all multi-chromatogram channels are automatically
analyzed.
• Each multi-chromatogram channel is an average of the absorbances monitored at each
wavelength in the wavelength range. The wavelength range is equal to the selected
wavelength +/- one half of the bandwidth.
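The bandwidth-averaging rule can be illustrated with a short sketch; the array shapes and variable names are assumptions for illustration, not the software's internal layout:

```python
# Sketch: compute a multi-chromatogram channel as the average of
# absorbances over (center wavelength +/- bandwidth / 2).
import numpy as np

def extract_channel(absorbance, wavelengths, center, bandwidth):
    """Average the absorbance over center +/- bandwidth/2 at each time point.

    absorbance : 2-D array of shape (n_times, n_wavelengths)
    wavelengths: 1-D array of monitored wavelengths (nm)
    """
    half = bandwidth / 2.0
    band = (wavelengths >= center - half) & (wavelengths <= center + half)
    return absorbance[:, band].mean(axis=1)
```

The same averaging applies to the working chromatogram and to chromatograms extracted from the contour plot with the Bandwidth control.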
Working Chromatogram
The following apply to the working chromatogram:
• In the Channel Selection drop down list, a single entry is added that identifies the
channel as applying to the working chromatogram.
• This channel has a separate Integration Events Table and Manual Integration Fixes
Table.
• When an analysis is performed, the working chromatogram is automatically analyzed.
• The working chromatogram is an average of the absorbances monitored at each
wavelength in the wavelength range. The wavelength range is equal to the selected
wavelength +/- one half of the bandwidth.
• The working chromatogram is not available as a trace in Custom Reports.
Specific analysis-related capabilities for these spectra are detailed below.
Analysis Spectra
The following applies to the analysis spectra:
Prior to being used elsewhere, all spectra extracted from the 3D data are filtered according to
the settings on the Spectrum tab of Spectral Options.
Working Spectrum
The following apply to the working spectrum:
Working spectra extracted from the 3D data are never filtered according to the settings on the
Spectrum tab of Spectral Options.
Spectrum Operations
The following table describes what operations can be done on what type of spectra.
Background Correction
Background Correction may only be performed on a spectrum by using the settings on the
Spectrum tab of Spectral Options, or the settings on the Purity tab of Spectral Options, by
selecting Operations > Background Correction from the Spectrum View context menu or
by setting the properties of the Spectrum Report.
1. The spectra from the baseline start and baseline stop times for the peak are extracted from
the 3D data. The Max Plot is used to determine the peak that is used.
2. For each spectrum in the peak, a corresponding background spectrum is generated by
linear interpolation between the baseline start and baseline stop spectra.
3. These background spectra are subtracted from the original spectra.
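A minimal sketch of these three steps, assuming the baseline start/stop spectra and peak spectra have already been extracted from the 3D data (the array names are illustrative):

```python
# Sketch of background correction: linearly interpolate a background
# spectrum between the baseline start and stop spectra, then subtract.
import numpy as np

def background_correct(peak_spectra, times, t_start, t_stop, s_start, s_stop):
    """Subtract a linearly interpolated background from each peak spectrum.

    peak_spectra: 2-D array (n_times, n_wavelengths) of spectra across the peak
    s_start, s_stop: baseline start and stop spectra from the 3D data
    """
    frac = (np.asarray(times) - t_start) / (t_stop - t_start)  # 0 at start, 1 at stop
    background = np.outer(1.0 - frac, s_start) + np.outer(frac, s_stop)
    return peak_spectra - background
```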
Spectrum Interpolation
Spectrum interpolation can be performed on a spectrum by choosing the appropriate settings
on the Spectrum tab of Spectral Options or by choosing Operations > Interpolate from the
Spectrum View context menu.
The calculation is performed by doing a 10:1 interpolation of the spectrum data points using
a cubic spline curve fit. This interpolation is performed after applying any spectral
filtering option (1st derivative, 2nd derivative or smooth) to the spectrum.
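Assuming a standard cubic-spline fit, the 10:1 resampling could look like the sketch below; SciPy's `CubicSpline` stands in for whatever fit the software uses internally:

```python
# Sketch of 10:1 spectrum interpolation with a cubic spline.
import numpy as np
from scipy.interpolate import CubicSpline

def interpolate_10to1(wavelengths, absorbances):
    """Resample a spectrum at 10x the original point density."""
    spline = CubicSpline(wavelengths, absorbances)
    fine_wl = np.linspace(wavelengths[0], wavelengths[-1],
                          (len(wavelengths) - 1) * 10 + 1)
    return fine_wl, spline(fine_wl)
```

By construction the spline passes through every original data point, so interpolation adds display detail without changing the measured absorbances.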
Spectrum Smoothing
Spectrum smoothing can be performed on a spectrum by choosing the appropriate settings on
the Spectrum tab of Spectral Options or by choosing Operations > Smooth from the Spectrum
View context menu.
The calculation is performed by doing a 9-point Savitzky-Golay smooth on the spectrum data
points.
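For illustration, SciPy's `savgol_filter` performs the same kind of 9-point Savitzky-Golay smooth; the polynomial order used here (2) is an assumption, since the manual specifies only the window size:

```python
# Sketch of a 9-point Savitzky-Golay smooth on a spectrum.
import numpy as np
from scipy.signal import savgol_filter

wavelengths = np.linspace(200.0, 400.0, 201)
spectrum = (wavelengths - 300.0) ** 2 / 1000.0   # smooth quadratic test spectrum

# 9-point window; polyorder=2 is an assumed choice
smoothed = savgol_filter(spectrum, window_length=9, polyorder=2)
```

A quadratic input is reproduced exactly by a quadratic Savitzky-Golay filter, which makes it a convenient sanity check.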
Spectrum Derivatives
Calculation of the 1st and 2nd derivatives of a spectrum can be performed on a spectrum by
choosing the appropriate settings on the Spectrum tab of Spectral Options or by choosing
the corresponding derivative command from the Operations submenu of the Spectrum View
context menu.
The absorbance values of the 1st derivative of a spectrum are computed by calculating the
differences between adjacent absorbance values to create a new spectrum. The 2nd derivative
of a spectrum is defined as the 1st derivative of the 1st derivative of the spectrum.
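The adjacent-difference definition above maps directly onto `numpy.diff`; this is a sketch of the calculation, not ChromQuest's internal code:

```python
# Sketch of spectrum derivatives by adjacent differences.
import numpy as np

def first_derivative(spectrum):
    """Differences between adjacent absorbance values (one point shorter)."""
    return np.diff(spectrum)

def second_derivative(spectrum):
    """The 1st derivative of the 1st derivative."""
    return first_derivative(first_derivative(spectrum))
```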
Note Unless a Library Search Results object is part of the method custom report, no
automated library searching will be done when analysis is performed.
In this section, a Query Spectrum is defined as the unknown spectrum that is being searched. A
Reference Spectrum is defined as a spectrum from a spectrum library file.
During a search, the apex spectrum of the peak (the query spectrum) is compared to each
spectrum contained in the libraries (reference spectra) to determine the similarity of the query
spectrum to the reference spectrum. The similarity is quantified through the calculation of a
Similarity Index for each query/reference pair. The Similarity Indices are used to generate a
hit list of the best matching entries. A perfect match will have a Similarity Index of 1.0000.
Similarity indices less than 1 indicate differences in the spectral patterns.
If the query spectrum and the reference spectrum have different wavelength ranges, then the
intersection of the two ranges is used in the similarity calculation.
If the query spectrum and the reference spectrum have different wavelength steps, then the
higher resolution spectrum is de-resolved to match the other spectrum before being used in
the similarity calculation.
Pre-Filters
A pre-filter is a criterion on a reference spectrum that must be met before that spectrum is
used in similarity calculations. One or more pre-filters may be specified in the search
parameters. When multiple pre-filters are specified, a reference spectrum must meet all of the
individual pre-filters in order to be considered for similarity calculations.
Reference spectra that do not meet all of the pre-filter criteria are automatically excluded from
calculations and from being a candidate for a hit list. No similarity calculation is performed
on these spectra.
where
abs1 = the absorbance in chromatogram 1 at this wavelength
abs2 = the absorbance in chromatogram 2 at this wavelength
If the absorbance of any point in chromatogram 1 or chromatogram 2 is less than the
threshold, the Ratio Pt. for that wavelength is set to 0.
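The threshold rule can be sketched as follows; the function name is illustrative:

```python
# Sketch of the ratio calculation: abs1/abs2 point by point, forced
# to 0 wherever either trace falls below the threshold.
import numpy as np

def ratio_points(abs1, abs2, threshold):
    """Return abs1/abs2, with 0 wherever either absorbance is below threshold."""
    abs1 = np.asarray(abs1, dtype=float)
    abs2 = np.asarray(abs2, dtype=float)
    ratio = np.zeros_like(abs1)
    ok = (abs1 >= threshold) & (abs2 >= threshold)
    ratio[ok] = abs1[ok] / abs2[ok]
    return ratio
```

Zeroing sub-threshold points both avoids division by near-zero baseline noise and produces the flat-topped ratio peaks described for the Ratio view.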
Similarity Calculations
The Similarity Index (SI) compares two spectra across the wavelength range defined in
Method > Spectral Options > Library. A Library Search is performed using the Similarity
Index, determined as follows:
Similarity = Sum ( Ci ∗ C’i ) ⁄ ( n – 1 )
where
Ci = (Si – Avg(S)) / StdDev(S)
C’i = (S’i – Avg(S’)) / StdDev(S’)
n is the number of points, S is one of the spectra, and S’ is the other spectrum.
Note that each spectral data point is first reduced by the average of all data points in the
spectrum; this is called mean-centering. Each data point is then divided by the standard
deviation of all data points in the spectrum, which normalizes the resulting values to unit
standard deviation. So, in the above equations, C and C’ are the normalized, mean-centered
spectra.
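The calculation above can be sketched as follows (illustrative code; N−1 weighting is assumed for the standard deviation so that two identical spectra give a Similarity Index of exactly 1):

```python
# Similarity Index of two spectra: mean-center each spectrum, divide by its
# standard deviation, then sum the point-by-point products over (n - 1).
from statistics import mean, stdev

def similarity_index(s, s_prime):
    n = len(s)
    c  = [(x - mean(s)) / stdev(s) for x in s]              # Ci
    cp = [(x - mean(s_prime)) / stdev(s_prime) for x in s_prime]  # C'i
    return sum(ci * cpi for ci, cpi in zip(c, cp)) / (n - 1)
```

Because the spectra are mean-centered and scale-normalized, a spectrum compared with a scaled copy of itself still yields 1, while opposite spectral patterns drive the index toward lower values.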
To compute n lambda max (min) values for a spectrum, the software finds the n local maxima
(minima) with the largest (smallest) absorbance values.
If the peak is pure, or at least homogenous, all spectra across the peak should be identical after
normalization. In this case, the peak purity value would be 1.000. Normally “pure” peaks
have a peak purity value of 0.990 to 1.000, but this is only a probability as the purity value
depends on a number of factors. These factors include the similarity of the spectra of the
compound of interest and any impurity, the concentration level of each impurity, the
resolution between possible peak components, the accuracy of the integration, and the
relative molar absorptivity of the compound and each impurity.
Peaks with purity values below 0.950 have a high probability of being impure. Peaks with
values between 0.950 and 0.990 can be suspected of being impure, but more experiments are
needed. Peaks with values above 0.990 are probably pure, subject to the limitations of the
technique.
Background Correction
Background Correction may be performed on a spectrum only by using the settings on the
Spectrum tab or the Purity tab of Spectral Options, by selecting Operations > Background
Correction from the Spectrum View context menu, or by setting the properties of the
Spectrum Report.
1. The spectra at the baseline start and baseline stop times for the peak are extracted from
the 3D data. The Max Plot is used to determine the peak that is used if there is no other
relevant trace. For the Spectrum view, the peaks are taken from the current working
chromatogram that appears in the Chromatogram pane. For the Spectrum Report, the
trace that is specified in the report properties is used.
2. For each spectrum in the peak, a corresponding background spectrum is generated by
linear interpolation between the baseline start and baseline stop spectra.
3. These background spectra are subtracted from the original spectra.
Note While compensation for background provides more precise purity calculations, it
can slow down re-analysis of large data files.
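Steps 1 through 3 can be sketched as follows (illustrative code; spectra are represented as plain lists of absorbances on a shared wavelength grid, and the background fraction is assumed to vary linearly with position across the peak):

```python
# For each spectrum across the peak, interpolate a background spectrum
# linearly between the baseline-start and baseline-stop spectra, then
# subtract it from the original spectrum.

def background_correct(peak_spectra, start_spec, stop_spec):
    n = len(peak_spectra)
    corrected = []
    for i, spec in enumerate(peak_spectra):
        f = i / (n - 1) if n > 1 else 0.0   # 0 at baseline start, 1 at stop
        bg = [(1 - f) * a + f * b for a, b in zip(start_spec, stop_spec)]
        corrected.append([s - g for s, g in zip(spec, bg)])
    return corrected
```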
1. Each spectrum in the peak is evaluated to see if it meets the peak threshold and peak
coverage criteria.
2. A similarity index is calculated for each spectrum that passes step 1, compared to the apex
spectrum.
3. The lowest similarity index value is determined.
4. That value is reported as the purity index.
5. The purity index ranges from 0.000000 to 1.000000.
Similarity indices are generated for the up-slope and down-slope spectra, relative to the apex.
Since spectra taken at different points along a pure peak will look identical, they will have
high similarity indices. The closer the Similarity Index is to 1.0000, the more similar or more
pure the peak is deemed to be.
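The five steps above can be sketched as follows (illustrative code; the threshold and coverage screening of step 1 is omitted for brevity, and the similarity calculation repeats the normalized form defined earlier so the block stands alone):

```python
# Purity index: the lowest Similarity Index between the apex spectrum and
# any other spectrum across the peak.
from statistics import mean, stdev

def similarity_index(s, s_prime):
    c  = [(x - mean(s)) / stdev(s) for x in s]
    cp = [(x - mean(s_prime)) / stdev(s_prime) for x in s_prime]
    return sum(a * b for a, b in zip(c, cp)) / (len(s) - 1)

def purity_index(spectra, apex_index):
    apex = spectra[apex_index]
    others = [s for i, s in enumerate(spectra) if i != apex_index]
    return min(similarity_index(apex, s) for s in others)
```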
When the PC1000 data is archived into an .arc file, all these various files are put inside one
large file, for ease in transporting the data.
By contrast, in ChromQuest each data file (.dat) is associated with only one injection, and the
conditions used to collect the data are also embedded in the file. Additionally, there are
method files (.met) and spectral library files (.lib); the library files contain spectral files (.spc).
This organizational difference requires you to specify a particular PC1000 injection within
the .arc archive file, when you perform the data conversion to ChromQuest.
Contents
• Data Conversion
• Data Conversion with Calculation Result
• Method Conversion
• Spectra Library Conversion
• PC1000 Archive File Run-Time Exception Handling
• Important Note on Data Conversion
Data Conversion
To convert the original discrete channel and scan data (located in an .aqr file inside the
PC1000 .arc file) to a ChromQuest .dat file, go to the Open Data dialog box, select the
desired archive file, and click Open. The Select Injections dialog box appears
showing all available injections in the selected archive file.
Highlight an injection and double-click it, or click the Select button. ChromQuest displays
the acquisition results of the chosen injection. The figure below shows the converted data of
one injection (one .aqr file), with scan data at the top and data from the discrete channels at
the bottom. If your original .aqr file contains only scan data or only discrete data, then
only that data is converted and displayed.
Figure 128. Select Injections dialog box
Method Conversion
To access your calculation and acquisition method developed with PC1000, go to the
Open Data dialog box (not the Open Method dialog box), select the archive file which
contains the method, and check the Open with Method check box. You may select any
injection from the Select Injections dialog box. Your original calculation method (in .cam
file), such as peak names and expected retention times are displayed in Peak / Group Tables.
The instrument setup (in .aqm file) is merged into the Instrument Setup property sheets.
Note If your LC system configuration does not have all the modules in your acquisition
method, you will see the following type of message box.
Consequently, the portion of the method that is associated with the missing modules will not
be opened. To complete your acquisition method conversion, reconfigure your LC system by
adding the missing modules. Once you are satisfied with the opened method, be sure to save
it as a ChromQuest method (.met) file for future use.
Spectra Library Conversion
If the selected archive file has no spectra library in it, you will see the following
message boxes:
Figure 130. Spectra Library Conversion - Missing message
You can continue to work on other tasks, for instance, convert another PC1000 archive, after
these error messages.
Contents
• Spreadsheet Formulas
• Built-in Functions
• Quick-Reference Guide to Built-in Functions
• Using Spreadsheet Built-in Functions
• Spreadsheet Error Messages
Spreadsheet Formulas
Formulas are the backbone of the spreadsheet, establishing and calculating mathematical
relationships between elements of the spreadsheet. Whereas numeric entries remain the same
until you change them, cells defined by formulas are automatically changed to reflect changes
in referenced cells - even where there are complex interdependencies among cells.
Spreadsheet formulas can calculate with numbers, text, logical values, cell references, and
other formulas. For example, you can easily calculate the sum of a series of cells, the total of
values in a column, a minimum or maximum value within a range, the rounded result of
another formula, or the absolute value of a cell entry. Formulas can express complex
interdependencies among cells, and they can define constraints on the calculation, such as
limits on acceptable values or specific conditions under which a calculation should take place.
Once entered in a cell, a formula is hidden behind the scenes; it performs its work in the
background and displays only the result of its calculation. To view the formula in a cell,
select the cell. Spreadsheet also provides an option that lets you make all formula
expressions visible (via CGXGridParam::m_nDisplayExpression).
Spreadsheet also provides a wide array of functions that perform certain tasks. Functions can
be used alone or in conjunction with formulas and other functions. Spreadsheet provides
many specialized functions in addition to those that are found in typical financial
spreadsheets.
Formula Syntax
The general form of a Spreadsheet formula is:
= expression ; constraint expression // comment
where expression defines the calculations needed to generate the cell's value, constraint
expression places limits on acceptable values or the circumstances under which the calculation
should take place, and comment is any text you want to attach to the cell.
The expression part of Spreadsheet formulas looks just like an algebraic formula; it contains
values and operators that define the relationships between values.
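For example, a complete formula using all three parts (the cell range and constraint here are hypothetical) might look like this:

```
=@SUM(B1..B10) ; # >= 0 // running total, must not be negative
```

Here @SUM(B1..B10) is the expression, # >= 0 is a constraint on the cell's own value (# denotes the current cell), and the text after // is a comment.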
Formula Values
Formulas can contain any or all of the following types of values:
• Numbers, such as 123, -123, 12.3.
• Addresses of single cells, such as A1, D5, Z100.
• Addresses of cell ranges such as B12..G29, A1..D5.
• Absolute cell references, denoted with dollar signs before the fixed coordinates ($A$1, $A1,
or A$1), which are not updated when the referencing cell is moved or copied.
• Spreadsheet functions, such as @SUM or @RADIANS, with their arguments.
• Text surrounded by double quotation marks, such as "The sum is " or "Total".
• User-defined cell names or cell range names, such as TOTALS or PROJECT1
Formula Operators
Spreadsheet supports all the arithmetic, boolean and logical operators available in the
C programming language. It does not support the C address operators or the operators that
have side effects, such as ++. Spreadsheet provides two operators, exponentiation (**) and
percent (%), that are not available in the C language.
Spreadsheet formulas can contain the following operators to define relationships between
values.
In formulas with more than one operator, Spreadsheet evaluates operators in the order of
precedence presented above, with highest precedence first. That is, AND/OR/NOT operators
are evaluated after inequality operators in a logical expression, and multiplication/division
operations are performed before subtraction/addition operations in an arithmetic expression.
Operators at the same precedence level are evaluated from left to right.
The precedence of operators can be overridden by using parentheses to explicitly specify the
order of evaluation.
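For example, with simple numeric formulas:

```
=2+3*4      evaluates to 14 (multiplication before addition)
=(2+3)*4    evaluates to 20 (parentheses force the addition first)
```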
Type the row and column coordinates of the cell in the formula. For example, to reference
Row 5 in Column D, type D5.
Type the row and column coordinates of two cells in opposite corners of the block to be
referenced, with two periods ( .. ) between the coordinates. For example, to reference the
first five columns and the first five rows of the spreadsheet, type A1..E5.
Relative Reference
Spreadsheet tracks the referenced cell by considering its position relative to the formula cell,
not by its address. For example, if the formula in cell A1 references cell B2, Spreadsheet
remembers that the referenced cell is one row down and one column right. If you copy the
formula in cell A1 to another location (e.g., D17), the formula will reference the cell one row
down and one column right of the new location (e.g., E18).
Absolute Reference
Absolute references remain the same, no matter where you move or copy the original formula.
For example, if the formula in cell A1 references cell B2, and you copy the formula in cell A1
to another location (e.g. D17), the formula still references cell B2.
Insert a dollar sign ($) before the address coordinate to be fixed, or before both
coordinates if the row and column coordinates are to be fixed. For example: $B$2.
Insert a dollar sign ($) before the address coordinate to remain fixed. For example:
• $B$5 makes the complete address absolute.
• $B5 makes the column coordinate (B) absolute, the row coordinate (5) relative.
• B$5 makes the column coordinate (B) relative, the row coordinate (5) absolute.
Cell ranges are also relative, so when you move a cell range, references in formulas within that
range are updated to reflect their new location.
Insert dollar signs ($) before the coordinates in the formula. For example, to make the
range A1..D5 absolute, type the reference as $A$1..$D$5.
Insert dollar signs only before the coordinates to remain absolute. For example, $A1..$D5
fixes the column coordinates of the cell references but adjusts the row coordinates to reflect
the new location.
Type the pre-assigned name of the cell or cell block into the formula.
The current cell is identified in any expression with a pound sign (#). References to cells in the
neighborhood of the current cell are made with offset values enclosed in braces ( {} ) following
the #.
The offsets tell Spreadsheet where to look, in relation to the current cell, for the cell being
referenced.
If you include only one value in the offset, Spreadsheet assumes that it is a column offset. For
example, the offset reference #{-1} tells Spreadsheet to look to the column just left of the
current cell.
Examples:
• #{0,-1} refers to the cell above the current cell.
• #{-2} refers to the cell two columns left of the current cell.
• #{1} refers to the cell to the right of the current cell.
• #{0,1} refers to the cell below the current cell.
• @CSUM(C4..C100, #{-1} == "Joe") calculates the sum of all the values in the range
C4..C100 for which the cell in the column to the left contains the string "Joe".
• @CCOUNT(C4..C100, # > #{0,-1}) counts all the cells in the range C4..C100 whose value
is greater than the contents of the cell immediately above.
• @XVALUE("master.xs3", #) returns the value of the same cell reference in which this
function is stored from the sheet indicated.
• #{-1}+2 adds 2 to the cell value from the cell to the left.
Constraint Expressions
Constraints are limitations or conditions placed on the variables in your spreadsheet. They are
expressed as algebraic statements appended to formulas. You can attach a constraint
expression to any formula, by typing a semicolon (;) and the constraint conditions after the
formula.
Constraint expressions establish conditions under which a formula operates or boundaries for
valid results of the formula. Constraint expressions may be simple equality/inequality
relationships, or they can be arbitrary formulas. Any valid Spreadsheet expression which
returns a numeric value is also a valid constraint expression. However, unlike the expression
that defines a cell value, a constraint expression can reference the cell in which it resides, using
the symbol #.
Constraint expressions are used for example in the conditional statistical functions.
The benefit of constraint expressions is maximized when combined with current-cell
reference support.
Explicit Dependency
There may be instances where you need to force a recalculation when certain cell values
change, when there is no implicit dependency in the formula that would trigger an automatic
recalculation. This option is indicated by appending a backslash (\) to the end of the
dependent formula. For example, the formula:
=@SUM(A1..A20)\D50
instructs Spreadsheet to recalculate @SUM(A1..A20) whenever the contents of D50
change. This feature is particularly important when you have a constraint expression
containing an offset reference that produces a cell reference outside the cell range
referenced in a dependent formula. Under these circumstances, Automatic Recalculation
would not necessarily be triggered. Take for instance, the example from above.
@CCOUNT(C4..C100, # > #{0,-1})
counts all the cells in the range C4..C100 whose value is greater than the contents of the
cell immediately above. In order for C4 to be evaluated, it must be compared to C3 -
which is not part of the explicit range, C4..C100. Without indicating an explicit
dependency, C4 would never be evaluated properly. So, in this case, we would indicate
the dependency as follows:
@CCOUNT(C4..C100, # > #{0,-1})\C3..C99
which tells Spreadsheet to recalculate whenever any cell in the range C3..C99 changes.
Built-in Functions
Spreadsheet functions are predefined formulas supplied with the program. They offer a
shortcut approach to accomplishing the work of long, complex formulas. Mathematical and
statistical functions are often used to sum a column of numbers, compute an average,
determine a minimum or maximum value, or round the results of a formula. Other functions
are used for more specialized purposes such as computing the future value of an investment or
the product of multiplying one cell range by another range. Some functions perform
calculations that arithmetic operators cannot handle such as text-string manipulations.
Mathematical Functions
Mathematical Functions perform calculations such as determining absolute value, finding the
integer portion of a number, or establishing the value of a constant. Although you could
accomplish these tasks with a formula, using a function saves time and trouble.
Spreadsheet also provides a full range of trigonometric functions including sine, cosine,
tangent, arc sine, hyperbolic sine, hyperbolic arc sine as well as vector and matrix arithmetic
and manipulation.
Statistical Functions
Statistical Functions perform aggregation operations such as calculating means, minimums,
maximums, and averages.
Spreadsheet also provides more sophisticated statistical test functions that perform operations
on a group of values expressed as a list of arguments. These include the F-test, T-tests,
correlation coefficient, deviations, and all common averages.
Only cells that meet constraint criteria are included in the calculation. The constraint
expression may be any Spreadsheet expression that evaluates to a numeric result.
String Functions
String Functions manipulate and evaluate character strings. For example, string functions can
return the length of a string, find the first occurrence of a string in a range, change a string
from upper to lower-case and vice versa, or replace one string with another.
Logic Functions
Logic Functions return one value if an argument meets certain criteria, another value if it
does not.
Digital logic functions return the values 0, 1, or -1 (unknown). Any value whose integer
portion is not equal to 0 or 1 is considered unknown. Unknown input values may cause
unknown output values.
Financial Functions
Financial Functions perform common financial calculations, such as calculating the future
value of an annuity at a given interest rate, straight-line depreciation, double-declining
depreciation, or the payment term for a given investment. The financial functions in
Spreadsheet cover annuities, cash flows, assets, bonds, and Treasury Bills.
Financial functions are most useful for solving cash flow calculations where you know all but
one variable. For example, if you know the present value of an investment, interest rate, and
periodic payment, you can use the @FV function to calculate the future value of the
investment. If you know the future value and other variables, but need to know the present
value, you can use the @PV function.
Many financial functions require specifying a Day Count Basis. A Day Count Basis indicates
the way in which the days in a month and the days in a year are counted. Most of the
financial functions for securities involve four different Day Count Bases: 30/360,
actual/actual, actual/360, and actual/365. The 30/360 Day Count Basis assumes 30-day
months and 360-day years (12 months x 30 days). Spreadsheet also follows the
"End-of-Month" rule, which assumes that a security that pays interest on the last day of the
month will always pay its interest on the last day of the month. Special rules are followed
when calculating the days between two dates on the 30/360 Day Count Basis.
4. If D2 is the last day of February (D2=28 or 29 in a leap year) and D1 is also the last day
of February, Spreadsheet uses 30 for D2.
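The basic 30/360 subtraction can be sketched as follows (illustrative code; only the common 31st-day adjustments are shown, and the February end-of-month rules noted above are omitted for brevity):

```python
# 30/360 day count: treat every month as 30 days by clamping the day
# numbers before subtracting, then count 360 days per year.
from datetime import date

def days360(d1, d2):
    a, b = d1.day, d2.day
    if a == 31:
        a = 30              # start date on the 31st counts as the 30th
    if b == 31 and a == 30:
        b = 30              # end date on the 31st counts as the 30th too
    return (d2.year - d1.year) * 360 + (d2.month - d1.month) * 30 + (b - a)
```

For example, a full year spans exactly 360 days under this basis, and any 15th-to-15th month spans 30.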
The special arguments used by Spreadsheet financial functions are defined below.
interest rate: The interest rate to be used in the calculations. The rate may be specified as
annual, monthly, or quarterly, but it must agree with the increment you use for periods. By
default, the interest rate is an annual rate.
Functions related to fixed-income securities usually require special dates as arguments: issue
date, settlement date, first coupon date, last coupon date, and maturity date of a security.
When specified, the following constraints should be followed:
These functions open up many possibilities for managing accounts receivable and calculating
test times.
Spreadsheet internally stores date and time information using the same convention as other
popular spreadsheet programs:
• Dates are represented as an integer equal to the number of days since December 31, 1899,
so January 1, 1900 equals 1.
• Times are represented as fractions of a day, starting at midnight. For example, 6:00 AM is
stored as 0.25 (a quarter of a 24-hour day).
Using this convention, date and time values may be used together. For example, the
date/time value 1.25 corresponds to 6:00:00 AM, January 1, 1900.
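The convention can be sketched as follows (illustrative code):

```python
# Spreadsheet-style date/time serial values: December 31, 1899 is day 0,
# so January 1, 1900 is day 1; the time of day is a fraction of a day.
from datetime import datetime

EPOCH = datetime(1899, 12, 31)

def to_serial(dt):
    """Convert a datetime to a spreadsheet-style serial value."""
    delta = dt - EPOCH
    return delta.days + delta.seconds / 86400.0  # time as fraction of a day
```

For example, to_serial(datetime(1900, 1, 1, 6, 0)) yields 1.25, the value cited above.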
Miscellaneous Functions
Miscellaneous Functions perform a variety of calculations, such as returning a reference to
specific cells or ranges or returning the Nth argument from a list of arguments.
Embedded Tools
Embedded Tools are a powerful feature in Spreadsheet. Their power derives in part from their
ability to return a set of data, not just a single value. This capability makes non-scalar
operations such as matrix multiplication and "live" recalculation as easy to use as an ordinary
spreadsheet function.
Embedded tools store values in a group of adjacent cells. These adjacent cells are set to
constant formulas with explicit dependencies on their neighboring cells. For example, an
embedded tool in cell B2 might produce the formula =1.3459\B2 in cell B3. This formula
indicates that the cell currently contains the constant 1.3459 but that its value depends on
the contents of cell B2 (the cell containing the embedded tool).
This notion of explicit dependencies is important for recalculation. It guarantees that any cell
that references B3 is not recalculated until after cell B2 is recalculated. This ensures that data
generated by the embedded tool is always current.
Embedded tools look like normal functions, and they can be copied, moved and formatted
just as any other formula in the spreadsheet. However, you must follow one important
guideline: DO NOT combine embedded tools with other embedded tools in a single
formula. For example, the formula
@INVERT(@MMUL(A1..C4,F1..I3))
is not allowed.
Formula Functions
@LOG(X) The log base 10 of X.
@LOG10(X) The log base 10 of X.
@LOG2(X) The log base 2 of X.
@MOD(X, Y) The remainder of X/Y.
@MODULUS(X, Y) The modulus of X/Y.
@PI The value of π.
@POLY(X, ...) The value of an Nth-degree polynomial in X.
@PRODUCT(X, ...) The product of all the numeric values in the argument
list.
@RADIANS(X) Converts the angle X, expressed in degrees, to radians.
@RAND A uniform random number on the interval [0,1).
@ROUND(X, n) X rounded to n number of decimal places (0 to 15).
@SIGMOID(X) The value of the sigmoid function.
@SIN(X) The sine of X.
@SINH(X) The hyperbolic sine of X.
@SQRT(X) The positive square root of X.
@SUMPRODUCT(R1, R2) The dot product of the vectors R1 and R2, where R1 and
R2 are of equal dimension.
@TAN(X) The tangent of X.
@TANH(X) The hyperbolic tangent of X.
@TRANSPOSE(M) The transpose of matrix M.
@VECLEN(...) The square root of the sum of squares of its arguments.
Statistical Functions
Formula Functions
@AVG(...) The average (arithmetic mean) of its arguments.
@CORR(R1, R2) Pearson's product-moment correlation coefficient for the
paired data in ranges R1 and R2.
@COUNT(...) A count of its non-blank arguments.
@F(M, N, F) The integral of Snedecor's F-distribution with M and N
degrees of freedom from minus infinity to F.
@ERF(L[, U]) Error function integrated between 0 and L; if U specified,
between L and U.
@ERFC(L) Complementary error function integrated between L and
infinity.
@FORECAST(...) Predicted Y values for given X.
@FTEST(R1, R2) The significance level (α) of the two-sided F-test on the
variances of the data specified by ranges R1 and R2.
@GMEAN(...) The geometric mean of its arguments.
@HMEAN(...) The harmonic mean of its arguments.
@LARGE(R, N) The Nth largest value in range R.
@MAX(...) The maximum of its arguments.
@MEDIAN(...) The median (middle value) of the range R1.
@MIN(...) The minimum of its arguments.
@MODE(...) The mode, or most frequently occurring value.
@MSQ(...) The mean of the squares of its arguments.
@PERCENTILE(R, N) The value from the range R which is at the Nth percentile
in R.
@PERCENTRANK(R, N) The percentile rank of the number N among the values in
range R.
@PERMUT(S, T) The number of T objects that can be chosen from the set
S, where order is significant.
@PTTEST(R1, R2) The significance level (α) of the two-sided T-test for the
paired samples contained in ranges R1 and R2.
@QUARTILE(R, Q) The quartile Q of the data in range R.
@RANK(E, R[, O]) The rank of a numeric argument E in the range R.
@RMS(...) The root of the mean of squares of its arguments.
@SMALL(R, N) The Nth smallest number in range R.
@SSE(...) The sum squared error of its arguments. It is equivalent
to @VAR(...) ∗ @COUNT(...).
@SSQ(...) The sum of squares of its arguments.
@STD(...) The population standard deviation (N weighting) of its
arguments.
@STDS(...) The sample standard deviation (N-1 weighting) of its
arguments.
@SUM(...) The sum of its arguments.
@T(N, T) The integral of Student's T distribution with N degrees of
freedom from minus infinity to T.
@TTEST(R, X) The significance level of the two-sided single population
T-test for the population samples contained in range R.
@TTEST2EV(R1, R2) The significance level (α) of the two-sided dual population
T-test for ranges R1 and R2, where the population
variances are equal.
@TTEST2UV(R1, R2) The significance level (α) of the two-sided dual population
T-test for ranges R1 and R2, where the population
variances are not equal.
@VAR(...) The population variance (N weighting) of its arguments.
@VARS(...) The sample variance (N-1 weighting) of its arguments.
@VSUM(...) The "visual sum" of its arguments, using the precision and
rounding of the formatted cell values.
String Functions
Formula Functions
@CHAR(N) The character represented by the code N.
@CLEAN(S) The string formed by removing all non-printing
characters from the string S.
@CODE(S) The ASCII code for the first character in string S.
@EXACT(S1, S2) Returns true (1) if string S1 exactly matches string S2,
otherwise returns 0.
@FIND(S1, S2, N) The index of the first occurrence of S1 in S2.
@HEXTONUM(S) The numeric value for the hexadecimal interpretation
of S.
@LEFT(S, N) The string composed of the leftmost N characters of S.
@LENGTH(S) The number of characters in S.
@LOWER(S) S converted to lower case.
@MID(S, N1, N2) The string of length N2 that starts at position N1 in S.
@NUMTOHEX(X) The hexadecimal representation of the integer portion
of X.
@PROPER(S) The string S with the first letter of each word capitalized.
@REGEX(S1, S2) Returns true (1) if string S1 exactly matches string S2;
otherwise returns false (0). Allows "wildcard"
comparisons by interpreting S1 as a regular expression.
@REPEAT(S, N) The string S repeated N times.
@REPLACE(S1, N1, N2, S2) The string formed by replacing the N2 characters starting
at position N1 in S1 with string S2.
@RIGHT(S, N) The string composed of the rightmost N characters of S.
@STRCAT(...) The concatenation of all its arguments.
@STRING(X, N) The string representing the numeric value of X, to
N decimal places.
@STRLEN(...) The total length of all strings in its arguments.
@TRIM(S) The string formed by removing spaces from the string S.
@UPPER(S) The string S converted to upper case.
@VALUE(S) The numeric value represented by the string S; otherwise
0 if S does not represent a number.
Logic Functions
Formula Functions
@FALSE The logical value 0.
@FILEEXISTS(S) 1 if file S can be opened for reading; otherwise 0.
@IF(X, T, F) The value of T if X evaluates to non-zero, or F if X
evaluates to zero.
@ISERROR(X) Returns 1 if X "contains" an error, otherwise 0.
@ISNUMBER(X) 1 if X is a numeric value; otherwise 0.
@ISSTRING(X) 1 if X is a string value; otherwise 0.
@TRUE The logical value 1.
Financial Functions
Formula Functions
@ACCRINT(I, Ft, S, R, P, F[, B]) Accrued interest for a security that pays
periodic interest.
@ACCRINTM(I, S, R, P[, B]) Accrued interest for a security that pays
interest at maturity.
@COUPDAYBS(S, M, F[, B]) The number of days between the
beginning of the coupon period to the
settlement date.
@COUPDAYS(S, M, F[, B]) The number of days in the coupon
period that the settlement date is in.
@COUPDAYSNC(S, M, F[, B]) The number of days between the
settlement date and the next coupon
date.
@COUPNCD(S, M, F[, B]) The next coupon date after the
settlement date.
@COUPNUM(S, M, F[, B]) The number of coupon payments
between the settlement date and
maturity date.
@COUPPCD(S, M, F[, B]) The previous (most recent) coupon date
before the settlement date.
@CTERM(R, FV, PV) The number of compounding periods
for an investment.
@CUMIPMT(R, NP, PV, S, E, T) The cumulative interest on a loan
between start period S and end period E.
@CUMPRINC(R, NP, PV, S, E, T) The cumulative principal paid on a loan
between start period S and end period E.
@DB(C, S, L, P[, M]) Fixed-declining depreciation allowance.
@DDB(C, S, L, N) Double-declining depreciation allowance.
@DISC(S, M, P, R[, B]) The discount rate for a security.
@DOLLARDE(FD, F) Converts a dollar amount expressed in
fractional form into decimal form.
@DOLLARFR(DD, F) Converts a dollar amount expressed in
decimal form into fractional form.
@DURATION(S, M, R, Y, F[, B]) The Macauley duration of a security
assuming $100 face value.
@EFFECT(NR, NP) Returns the effective annual interest
rate.
@FV(P, R, N) Future value of an annuity.
@FVSCHEDULE(P, S) The future value of an initial investment
after compounding a series of interest
rates.
@INTRATE(S, M, I, R[, B]) The interest rate for a fully invested
security.
@IPMT(R, P, NP, PV, FV[, T]) The interest payment for a specific
period for an investment based on
periodic, constant payments and a
constant interest rate.
@IRR(G, F) The internal rate of return on an
investment. (See also @XIRR and
@MIRR.)
@MDURATION(S, M, R, Y, F[, B]) The modified Macauley duration of a
security assuming $100 face value.
@MIRR(CF, FR, RR) The modified internal rate of return for
a series of periodic cash flows.
@NOMINAL(ER, NP) The nominal annual interest rate.
@ODDFPRICE(S, M, I, FC, R, Y, RD, F[, B]) The price per $100 face value of a
security with an odd (short or long) first
period.
@ODDFYIELD(S, M, I, FC, R, PR, RD, F[, B]) The yield of a security with an odd
(short or long) first period.
@PMT(PV, R, N) The periodic payment for a loan.
@PPMT(R, P, NP, PV, FV, T) The payment on the principal for a
specific period for an investment based
on periodic, constant payments and a
constant interest rate.
@PRICE(S, M, R, Y, RD, F[, B]) The price per $100 face value of a
security that pays periodic interest.
@PRICEDISC(S, M, D, RD[, B]) The price per $100 face value of a
discounted security.
@PRICEMAT(S, M, I, R, Y[, B]) The price per $100 face value of a
security that pays interest at maturity.
@PV(P, R, N) The present value of an annuity.
@RATE(FV, PV, N) The interest rate required to reach future
value FV.
@RECEIVED(S, M, I, D[, B]) The amount received at maturity for a
fully invested security.
@SLN(C, S, L) The straight-line depreciation
allowance.
@SYD(C, S, L, N) The "sum-of-years-digits" depreciation
allowance.
@TBILLEQ(S, M, D) The bond-equivalent yield (BEY) for a
Treasury Bill.
@TBILLYIELD(S, M, D) The yield on a Treasury bill.
@TERM(P, R, FV) The number of payment periods for an
investment.
@VDB(C, S, L, S, E) Fixed-declining depreciation allowance
between two periods.
@XIRR(G, V, D) Internal rate of return for a series of cash
flows with variable intervals.
@XNPV(R, V, D) The net present value for a series of cash
flows with variable intervals.
@YIELD(S, M, R, PR, RD, F[, B]) Yield of a security that pays periodic
interest.
@YIELDMAT(S, M, I, R, PR[, B]) The annual yield of a security that pays
interest at maturity.
Miscellaneous Functions
@CELLREF(N1, N2) A reference to the cell in column N1 and row
N2.
@CHOOSE(N, ...) The Nth argument from the list.
@COL(C) The column address of the cell referenced by C.
@COLS(R) The number of columns in the specified range R.
@HLOOKUP(X, S, R) The value of the cell in range S that is R number
of rows beneath X.
@INIT(X1, X2) The first argument on the first recalculation pass
and the second argument on all subsequent
recalculation passes when Spreadsheet is
performing iterative calculations.
@INTERP2D(R1, R2, N) The interpolation value for a 2-dimensional
vector.
@INTERP3D(R, X, Y) The interpolation value for a 3-dimensional
vector.
@MATCH(V, R[, T]) The relative position in range R of value V based
on positioning criteria T.
@N(R) The numeric value of the top left cell in range R.
@RANGEREF(N1, N2, N3, N4) A reference to the range defined by coordinates
N1 through N4.
@ROW(C) The row address of the cell referenced by C.
@ROWS(R) The number of rows in the specified range R.
@S(R) The string value of the top left cell in range R.
@VLOOKUP(X, S, C) The value of the cell in range S that is C number
of columns to the right of X.
IMPORTANT Some Spreadsheet functions return a result that is a range or cell reference.
Spreadsheet does not include these indirect references in determining the pattern of
recalculation.
Plan carefully before using these functions. For more information, see "Computed Cell
References" at the end of this chapter.
Embedded Tools
@DFT(R) The Discrete Fourier Transform of the range R.
@EIGEN(M) The eigenvalues of the matrix M.
@FFT(R) The Discrete Fourier Transform of the range R using a
fast Fourier Transform algorithm.
@FREQUENCY(R, B) A frequency distribution for the values R with a set of
intervals B.
@INVDFT(R) The inverse of the Discrete Fourier Transform of the
range R.
@INVERT(M) The inverse of matrix M.
@INVFFT(R) The inverse of the Discrete Fourier Transform of the
range R using a fast Fourier Transform algorithm.
@LINFIT(X, Y) The straight line least squares fit. This function is
equivalent to @POLYFIT(X, Y, 1).
@LLS(A, Y) The linear least squares solution X to the over-determined
system of equations AX=Y.
@MMUL(M1, M2) The product of multiplying matrix M2 by matrix M1.
@PLS(X, Y, d) Analyzes the least squares polynomial model Y=P(X),
where P is a polynomial of degree d.
@POLYCOEF(X, Y, d) The least squares coefficients for the polynomial fit
Y=P(X), where P is a polynomial of degree d.
@TRANSPOSE(M) The transpose of matrix M.
@TREND(NX, KX, KY) The y values for new x values given existing x and y values.
Note Embedded tools should not be contained within other functions or arithmetic
operations in a single formula. You may, however, copy, move, and format embedded
tools just as you would any other function.
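For example, a matrix tool such as @INVERT is entered by itself in a cell rather than nested inside another formula. The sketch below is illustrative only; the cell ranges are hypothetical, and it assumes a standard @SUM function is available in the Spreadsheet function set:

    @INVERT(A1..C3)

To sum the inverted matrix, place the @SUM formula in a separate cell that references the tool's output range, rather than writing @SUM(@INVERT(A1..C3)) as a single formula.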
Arguments
Arguments specify the values the function should use in its calculations. The number of
arguments, their types, and their formats vary from one function to another. Arguments are
usually numeric values, cell or range references, or string values. Most functions have at least
one argument; a few have none.
The following chart shows different types of arguments used in Spreadsheet functions.
Argument Example
Numeric Value 123
Address of a cell A10
Address of a range F9..F99
String Value "Quarterly Report"
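Combining these argument types, a formula might look like the following sketch (the cell address and values are hypothetical):

    @PMT(A10, 0.0075, 48)

Here A10 is a cell reference supplying the present value, 0.0075 is a numeric value for the periodic interest rate, and 48 is the number of payment periods, matching the @PMT(PV, R, N) form shown earlier in this chapter.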
Spreadsheet does not recalculate the spreadsheet unless explicit dependencies have been
changed, so you may need to force recalculation if you change the value of a cell that is
referenced only indirectly through a function.
For example, suppose you want to count the numeric values in the range C3..J100 that fall
within the limits specified in cells A1 and A2. The Spreadsheet formula to compute this is
@CCOUNT(C3..J100, #>A1 && #<A2)
This formula correctly counts the numeric values in the range C3..J100. However, if you
change the value in A1, Spreadsheet does not automatically recalculate the result, because A1
is referenced only indirectly through the constraint expression.
• To force Spreadsheet to recalculate the entire spreadsheet, call the Recalc()
command. You can also add a Recalculate menu item to your application that calls
Recalc().
• To force Spreadsheet to do a partial recalculation with respect to that cell, edit the cell
containing the @CCOUNT formula, append a blank, and press the [Return] key.
• You can also use explicit dependencies to circumvent the limitation described above. If
you enter the formula in the form
@CCOUNT(C3..J100, #>A1 && #<A2)\A1\A2
Spreadsheet takes the dependencies on A1 and A2 into account and updates the
spreadsheet just as you expect.
• Another approach is to construct the condition string with an expression that references
the cells directly. For example,
@CCOUNT(C3..J100, @STRCAT("#>", A1, " && #<", A2))
In this example, A1 and A2 are directly referenced, so recalculation is properly triggered.
Types of Errors
• Errors in Functions
Errors that occur inside functions are reported along with the name of the function in
which the error occurred.
• Formula Syntax Errors
These errors occur only when you are typing in a formula. When you finish entering the
formula, Spreadsheet attempts to read the formula and convert it to an internal
representation. If it is unable to do so, it continues to display the erroneous formula,
switches into "edit mode", places the text cursor at the beginning of the text which it had
difficulty parsing, and displays the error message.
The problem must be corrected before Spreadsheet can continue.
• Formula Evaluation Errors
A formula evaluation error occurs when Spreadsheet reads in a formula and converts it into
its internal formula representation, but is not able to evaluate the formula and produce a
correct numeric or string result. In some cases, the formula has been entered
incorrectly, for example, an operand or parenthesis is missing. In other cases, an error has
occurred as a result of computation that cannot be handled properly by the computer's
floating point hardware, or there is an error condition in a cell or range that is referenced
in the context of this formula. Errors can also occur in the evaluation of Spreadsheet
built-in functions.
Contents
• Functional Reference
• Datafile Functions
• Extended Helper Functions
• Group Functions
• Instrument Functions
• Peak Functions
• Project Functions
• Sequence Functions
• Summary
Functional Reference
The following functions are available when creating advanced reports using the template
editor.
Syntax Notes:
• All of the functions described here are placed in cells in the template reporting
spreadsheet, and must begin with an '=' sign. For example, if a function were described as
Custom.Func(["Param A"]), then the actual function would look something like
=Custom.Func("Param A")
• [] These brackets indicate optional parameters. The brackets themselves are not included
in the actual parameters. For example, if a function were described as
Custom.Func(["Param A"]), then the actual function would look something like
=Custom.Func("Param A")
• <> These brackets indicate required parameters. The brackets themselves are not included
in the actual parameters. For example, if a function were described as
Custom.Func(<"Param A">), then the actual function would look something like
=Custom.Func("Param A")
• "" Quotation marks shown are required. The quotation marks are included in the actual
parameters. For example, if a function were described as Custom.Func(<"Param A">),
then the actual function would look something like
=Custom.Func("Param A")
Datafile Functions
These functions return information about data files that have been collected and analyzed.
Data.AcquisitionDate
Returns date and time of acquisition for the specified data file.
Syntax
=Data.AcquisitionDate(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
Date/Time
Data.AnalysisDate
Returns date and time of the last analysis for the specified data file.
Syntax
=Data.AnalysisDate(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
Date/Time
Data.BCDValue
Returns BCD value of the specified data file.
Syntax
=Data.BCDValue(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
Number
Data.Description
Returns the description of the specified data file.
Syntax
=Data.Description(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.Filename
Returns the file name of the specified data file. Only the file name is returned; the path
information is not returned.
Syntax
=Data.Filename(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.FullFilename
Returns the full file name of the specified data file. Both the file name and path information
are returned.
Syntax
=Data.FullFilename(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.InstrumentName
Returns the name of the instrument that was used to acquire the specified data file.
Syntax
=Data.InstrumentName(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.ISTDAmount
Returns ISTD amount of the specified data file.
Syntax
=Data.ISTDAmount(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
Number
Data.LastMethodFilename
Returns the name of the last method file that was used to analyze the specified data file.
Syntax
=Data.LastMethodFilename(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.LastMethodFullFilename
Returns the full name and path of the last method file that was used to analyze the specified
data file.
Syntax
=Data.LastMethodFullFilename(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.MultiplierFactor
Returns multiplier factor of the specified data file.
Syntax
=Data.MultiplierFactor(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
Number
Data.OriginalMethodFilename
Returns the name of the method file that was used to acquire the specified data file.
Syntax
=Data.OriginalMethodFilename(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.OriginalMethodFullFilename
Returns the full name and path of the method file that was used to acquire the specified
data file.
Syntax
=Data.OriginalMethodFullFilename(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.SampleAmount
Returns sample amount of the specified data file.
Syntax
=Data.SampleAmount(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
Number
Data.SampleID
Returns the sample ID of the specified data file.
Syntax
=Data.SampleID(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.SystemWideParam
Returns a custom system-wide result from the specified data file.
Syntax
=Data.SystemWideParam(<Param ID>, <Run Info>)
Parameters
<Param ID> A numeric identifier of the requested system-wide custom parameter.
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String / Number
Data.TraceName
Returns trace name for the specified index and data file.
Syntax
=Data.TraceName(<Trace Index>, <Run Info>)
Parameters
<Trace Index> A numeric index of the requested trace.
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.UserName
Returns the name of the user that acquired the specified data file.
Syntax
=Data.UserName(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
String
Data.Vial
Returns vial of the specified data file.
Syntax
=Data.Vial(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
Number
Data.Volume
Returns volume of the specified data file.
Syntax
=Data.Volume(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Return Type
Number
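The datafile functions above take only a <Run Info> parameter. As a sketch, assuming the "RC" code used later in the Peak.Index example refers to the current run (the full set of Run Info codes is described in the appendix), a template cell could display the sample ID of the current data file with:

    =Data.SampleID("RC")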
Ex.D
Returns a cell range for a dynamic set of data. If data in a cell is expanded dynamically, then
this function can be used to create a reference to the cells that the data expands into.
Syntax
=Ex.D (<Cell>, [Range Direction])
Parameters
<Cell> Contains a reference to a cell that is expanded for a dynamic
data range. This is in the form of 'B5', 'C12', etc. There are no
enclosing quotes on the cell reference.
[Range Direction] This is an optional numeric parameter that specifies the
direction of the dynamic expansion to use. If data is being
expanded in only one direction, then this parameter is not
necessary. If data is being expanded both across and down,
then this parameter can be used to control the range that is
used. The values of this parameter are as follows:
• 0 or not used
Use any dynamic range that is available. If the data expands both across and down,
then a range is generated that contains the entire expansion.
• 1
Only generate a range for dynamic data that expands across the spreadsheet.
• 2
Only generate a range for dynamic data that expands down the spreadsheet.
Return Type
Cell Range
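Because Ex.D returns a cell range, it is typically nested inside another spreadsheet function. As an illustrative sketch, assuming dynamic data expands starting at cell B5 and that a standard @SUM function is available in the spreadsheet, a total could be computed with:

    =@SUM(Ex.D(B5))

If the data at B5 expanded both across and down, =@SUM(Ex.D(B5, 2)) would restrict the total to the downward expansion.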
Ex.R
This function can be used to repeat an enclosed spreadsheet formula over a series of cells,
based on a dynamic data set. For example, this function could be used to produce a total field
showing the sum of a set of peak areas for all peaks in a data file.
When using the Ex.R function, the enclosed formula must not contain any run, trace, or
peak information of its own; that information is supplied through the Ex.R parameters. For
example, a standalone formula can show the peak area for the first named peak from the
current data file using the first trace, and the same formula repeated with the Ex.R function
can show the peak area for all named peaks from all runs of a sequence using the first trace.
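As a sketch of this pattern, assuming the "RC" and "T1" codes from the Peak.Index example (current run, first trace) and leaving the info strings as the placeholders described in the appendix, the two forms would look something like:

    =Peak.Area("RC", "T1", <Peak Info>)
    =Ex.R("Peak.Area()", <Dynamic Run Info>, "T1", <Dynamic Peak Info>)

In the second form the enclosed Peak.Area formula carries no run, trace, or peak information of its own; Ex.R supplies that context as it repeats the formula over the dynamic range.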
Syntax
=Ex.R(<Spreadsheet Formula>, <Dynamic Run Info>, [Trace Info], [Dynamic Peak Info])
or
=Ex.R(<Spreadsheet Formula>, <Dynamic Run Info>, [Trace Info], [Dynamic Group Info])
Parameters
<Spreadsheet Formula> Contains any valid spreadsheet formula that is expanded for a
dynamic data range.
[Range Direction] This is an optional numeric parameter that specifies the
direction to repeat the formula. If the referenced cell is being
repeated in only one direction, then this parameter is not
necessary. If the referenced cell is being repeated both across
and down, then this parameter can be used to control the
direction that is used. The values of this parameter are as
follows:
• 0 or not used
Repeat exactly like the referenced cell. If the referenced cell repeats both across and
down, then this formula is repeated both across and down.
• 1
Only repeat the formula across the spreadsheet as the referenced cell does.
• 2
Only repeat the formula down the spreadsheet as the referenced cell does.
<Dynamic Run Info> Used to determine the dynamic range to expand the formula
over.
[Trace Info] This is an optional parameter that is used to determine the
dynamic range to expand the formula over. See the appendix
for a description of this parameter.
[Dynamic Peak Info] This is an optional parameter that is used to determine the
dynamic range to expand the formula over.
Return Type
None
"P<x>; <peak type>" The peak with an index of <x> having the given
peak type.
"P<x-y>; <peak type>; <direction>; The peaks with an index in the range of <x-y>
<separation> having the given peak type. The peaks are
repeated in the direction specified by
<direction>, and are separated by <separation>
rows or columns.
"PA; <peak type>; <direction>; All peaks of the given peak type. The peaks are
<separation>" repeated in the direction specified by
<direction>, and are separated by <separation>
rows or columns.
"G<x>; <group type>" The group with an index of <x> having the given
group type.
"G<x-y>; <group type>; <direction>; The groups with an index in the range of <x-y>
<separation>" having the given group type. The groups are
repeated in the direction specified by
<direction>, and are separated by <separation>
rows or columns.
"GA; <group type>; <direction>; All groups of the given group type. The groups
<separation>" are repeated in the direction specified by
<direction>, and are separated by <separation>
rows or columns.
The values for <group type> are as follows:
0 Report calibrated range groups that calculate concentrations for unnamed peaks in
the group.
1 Report calibrated range groups that do not calculate concentrations for unnamed
peaks in the group.
2 Report named peak groups.
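For example, using the single-group form with the group-type values above, and assuming the "RC" and "T1" codes from the Peak.Index example for the run and trace, the name of the first named peak group could be retrieved with a formula along these lines:

    =Group.Name("RC", "T1", "G1; 2")

Here "G1; 2" selects the group with index 1 having group type 2 (named peak groups).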
Group Functions
These functions return information about groups.
Group.Area
Returns the area for the requested group(s).
Syntax
=Group.Area(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
Number
Group.AreaPercent
Returns the area percent for the requested group(s).
Syntax
=Group.AreaPercent(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
Number
Group.ESTDConcentration
Returns the ESTD concentration for the requested group(s).
Syntax
=Group.ESTDConcentration(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
Number
Group.Height
Returns the height for the requested group(s).
Syntax
=Group.Height(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
Number
Group.HeightPercent
Returns the height percent for the requested group(s).
Syntax
=Group.HeightPercent(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
Number
Group.ISTDConcentration
Returns the ISTD concentration for the requested group(s).
Syntax
=Group.ISTDConcentration(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
Number
Group.Name
Returns the group name for the requested group(s).
Syntax
=Group.Name(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
String
Group.NORMConcentration
Returns the NORM concentration for the requested group(s).
Syntax
=Group.NORMConcentration(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
Number
Group.Number
Returns the group number for the requested group(s).
Syntax
=Group.Number(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
Number
Group.Quantitation
Returns the group quantitation for the requested group(s). This returns 'Area', 'Height', or
'Counts'.
Syntax
=Group.Quantitation(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
String
Group.ResponseFactor
Returns the response factor for the requested group(s).
Syntax
=Group.ResponseFactor(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
Number
Group.Units
Returns the units for the requested group(s).
Syntax
=Group.Units(<Run Info>, <Trace Info>, <Group Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Group Info> Describes the group(s) to use for the value.
Return Type
String
Instrument Functions
These functions return information about the current instrument.
Instrument.ID
Returns the internal instrument id of the current instrument.
Syntax
=Instrument.ID()
Parameters
None
Return Type
Number
Instrument.Name
Returns the instrument name of the current instrument.
Syntax
=Instrument.Name()
Parameters
None
Return Type
String
Instrument.UserName
Returns the name of the user logged into the current instrument.
Syntax
=Instrument.UserName()
Parameters
None
Return Type
String
Peak Functions
These functions return information about detected and named peaks.
Peak.AOHResolution
Returns the AOH resolution for the requested peak(s).
Syntax
=Peak.AOHResolution(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.AOHTheoreticalPlates
Returns the AOH theoretical plates for the requested peak(s).
Syntax
=Peak.AOHTheoreticalPlates(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.AOHTheoreticalPlatesPerMeter
Returns the AOH theoretical plates per meter for the requested peak(s).
Syntax
=Peak.AOHTheoreticalPlatesPerMeter(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Area
Returns the area for the requested peak(s).
Syntax
=Peak.Area(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.AreaPercent
Returns the area percent for the requested peak(s).
Syntax
=Peak.AreaPercent(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Asymmetry
Returns the asymmetry for the requested peak(s).
Syntax
=Peak.Asymmetry(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.AsymmetryTenPercent
Returns the asymmetry at 10% for the requested peak(s).
Syntax
=Peak.AsymmetryTenPercent(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.CapacityFactor
Returns the capacity factor for the requested peak(s).
Syntax
=Peak.CapacityFactor(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.CurrentResponseFactor
Returns the current response factor for the requested peak(s).
Syntax
=Peak.CurrentResponseFactor(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.CustomParam
Returns a custom peak result for the requested peaks.
Syntax
=Peak.CustomParam(<Param ID>, <Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Param ID> A numeric identifier of the requested peak custom parameter.
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
String / Number
Peak.DABResolution
Returns the DAB resolution for the requested peak(s).
Syntax
=Peak.DABResolution(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.DABTheoreticalPlates
Returns the DAB theoretical plates for the requested peak(s).
Syntax
=Peak.DABTheoreticalPlates(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.DABTheoreticalPlatesPerMeter
Returns the DAB theoretical plates per meter for the requested peak(s).
Syntax
=Peak.DABTheoreticalPlatesPerMeter(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.EMGResolution
Returns the EMG resolution for the requested peak(s).
Syntax
=Peak.EMGResolution(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.EMGTheoreticalPlates
Returns the EMG theoretical plates for the requested peak(s).
Syntax
=Peak.EMGTheoreticalPlates(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.EMGTheoreticalPlatesPerMeter
Returns the EMG theoretical plates per meter for the requested peak(s).
Syntax
=Peak.EMGTheoreticalPlatesPerMeter(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.ESTDConcentration
Returns the ESTD concentration for the requested peak(s).
Syntax
=Peak.ESTDConcentration(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.ExpectedRetentionTime
Returns the expected retention time for the requested peak(s).
Syntax
=Peak.ExpectedRetentionTime(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Height
Returns the height for the requested peak(s).
Syntax
=Peak.Height(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.HeightPercent
Returns the height percent for the requested peak(s).
Syntax
=Peak.HeightPercent(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Index
Returns the peak index information for the requested named peak, based on its peak id. The
returned information can be used in place of <Peak Info> for another function. For example,
to find the peak name of a named peak with a peak ID of 2 in the current data file, use the
following formula: =Peak.Name("RC", "T1", Peak.Index(2, "RC", "T1"))
Syntax
=Peak.Index(<Peak ID>, <Run Info>, <Trace Info>)
Parameters
<Peak ID> A numeric identifier of the requested named peak. This number
comes from the peak table.
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
Return Type
String
Peak.IntegrationCodes
Returns the integration codes for the requested peak(s).
Syntax
=Peak.IntegrationCodes(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
String
Peak.ISTDConcentration
Returns the ISTD concentration for the requested peak(s).
Syntax
=Peak.ISTDConcentration(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.JPResolution
Returns the JP resolution for the requested peak(s).
Syntax
=Peak.JPResolution(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.JPTheoreticalPlates
Returns the JP theoretical plates for the requested peak(s).
Syntax
=Peak.JPTheoreticalPlates(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.JPTheoreticalPlatesPerMeter
Returns the JP theoretical plates per meter for the requested peak(s).
Syntax
=Peak.JPTheoreticalPlatesPerMeter(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Name
Returns the peak name for the requested peak(s).
Syntax
=Peak.Name(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
String
Peak.NORMConcentration
Returns the NORM concentration for the requested peak(s).
Syntax
=Peak.NORMConcentration(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Number
Returns the detected peak number for the requested peak(s).
Syntax
=Peak.Number(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Quantitation
Returns the peak quantitation for the requested peak(s). This returns 'Area', 'Height', or
'Counts'.
Syntax
=Peak.Quantitation(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
String
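As an illustration (reusing the "RC"/"T1" descriptors from the Peak.Index example; the peak ID of 1 is arbitrary), the quantitation basis of a named peak could be pulled into a report cell with:

```
=Peak.Quantitation("RC", "T1", Peak.Index(1, "RC", "T1"))
```

The cell then displays Area, Height, or Counts, depending on how that peak is quantitated in the method.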
Peak.RelativeRetentionTime
Returns the relative retention time for the requested peak(s).
Syntax
=Peak.RelativeRetentionTime(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Resolution
Returns the resolution for the requested peak(s).
Syntax
=Peak.Resolution(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.ResolutionID
Returns the resolution ID for the requested peak(s).
Syntax
=Peak.ResolutionID(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.ResponseFactor
Returns the response factor for the requested peak(s).
Syntax
=Peak.ResponseFactor(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.RetentionTime
Returns the retention time for the requested peak(s).
Syntax
=Peak.RetentionTime(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.StartTime
Returns the start time for the requested peak(s).
Syntax
=Peak.StartTime(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.StopTime
Returns the stop time for the requested peak(s).
Syntax
=Peak.StopTime(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.TheoreticalPlates
Returns the theoretical plates for the requested peak(s).
Syntax
=Peak.TheoreticalPlates(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.TheoreticalPlatesPerMeter
Returns the theoretical plates per meter for the requested peak(s).
Syntax
=Peak.TheoreticalPlatesPerMeter(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Units
Returns the concentration units for the requested peak(s).
Syntax
=Peak.Units(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
String
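Peak.Units is typically placed next to a concentration function so the reported value carries its units. A sketch, again assuming the "RC"/"T1" descriptors and an arbitrary peak ID:

```
=Peak.ISTDConcentration("RC", "T1", Peak.Index(1, "RC", "T1"))
=Peak.Units("RC", "T1", Peak.Index(1, "RC", "T1"))
```

Placed in adjacent report cells, the first formula reports the number and the second its units.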
Peak.USPResolution
Returns the USP resolution for the requested peak(s).
Syntax
=Peak.USPResolution(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.USPTheoreticalPlates
Returns the USP theoretical plates for the requested peak(s).
Syntax
=Peak.USPTheoreticalPlates(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.USPTheoreticalPlatesPerMeter
Returns the USP theoretical plates per meter for the requested peak(s).
Syntax
=Peak.USPTheoreticalPlatesPerMeter(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.USPWidth
Returns the USP width for the requested peak(s).
Syntax
=Peak.USPWidth(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.Width
Returns the width for the requested peak(s).
Syntax
=Peak.Width(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.WidthFiftyPercent
Returns the width at 50% for the requested peak(s).
Syntax
=Peak.WidthFiftyPercent(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.WidthFivePercent
Returns the width at 5% for the requested peak(s).
Syntax
=Peak.WidthFivePercent(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
Peak.WidthTenPercent
Returns the width at 10% for the requested peak(s).
Syntax
=Peak.WidthTenPercent(<Run Info>, <Trace Info>, <Peak Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
<Trace Info> Describes the trace that is used to extract the value.
<Peak Info> Describes the peak(s) to use for the value.
Return Type
Number
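The three fractional-height widths are often reported side by side to characterize peak shape, since tailing broadens the 10% and 5% widths relative to the 50% width. A sketch using the same assumed "RC"/"T1" descriptors and an arbitrary peak ID:

```
=Peak.WidthFiftyPercent("RC", "T1", Peak.Index(1, "RC", "T1"))
=Peak.WidthTenPercent("RC", "T1", Peak.Index(1, "RC", "T1"))
=Peak.WidthFivePercent("RC", "T1", Peak.Index(1, "RC", "T1"))
```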
Project Functions
These functions return information about the current project.
Project.DataPath
Returns the default path used to store data files in the current project.
Syntax
=Project.DataPath()
Parameters
None
Return Type
String
Project.Description
Returns the description for the current project.
Syntax
=Project.Description()
Parameters
None
Return Type
String
Project.MethodPath
Returns the default path used to store method files in the current project.
Syntax
=Project.MethodPath()
Parameters
None
Return Type
String
Project.Name
Returns the name of the current project.
Syntax
=Project.Name()
Parameters
None
Return Type
String
Project.RootPath
Returns the default root path for the current project.
Syntax
=Project.RootPath()
Parameters
None
Return Type
String
Project.SequencePath
Returns the default path used to store sequence files in the current project.
Syntax
=Project.SequencePath()
Parameters
None
Return Type
String
Project.TemplatePath
Returns the default path used to store report template files in the current project.
Syntax
=Project.TemplatePath()
Parameters
None
Return Type
String
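The project functions take no parameters, so they are normally dropped into a report header as-is, for example:

```
=Project.Name()
=Project.DataPath()
```

Together these identify which project produced the report and where its data files are stored by default.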
Sequence Functions
These functions return information about the sequence file that is used for reporting
purposes.
Sequence.Filename
Returns the file name of the sequence file that is used for reporting. Only the file name is
returned; the path information is not included.
Syntax
=Sequence.Filename()
Parameters
None
Return Type
String
Sequence.FullFilename
Returns the full file name of the sequence file that is used for reporting. Both the file name
and the path information are returned.
Syntax
=Sequence.FullFilename()
Parameters
None
Return Type
String
Sequence.RunNumber
Returns the run number of the specified sequence run. This function can be used in
conjunction with the EX.R() formula to generate the run numbers of the runs in a sequence.
For example, the following formula generates run numbers for all runs of a sequence,
going down: =EX.R(SEQUENCE.RUNNUMBER(),"RA;1;0")
Syntax
=Sequence.RunNumber(<Run Info>)
Parameters
<Run Info> Describes the data file(s) that are used to extract the value.
Summary
The following describes the parameters that may be passed to template functions to describe
the requested data file, peak, and group information.
"P<x>; <peak type>" The peak with an index of <x> having the given peak type.
"G<x>; <group type>" The group with an index of <x> having the given group type.
The <group type> value controls which groups are reported:
0 Report calibrated range groups that calculate concentrations for unnamed peaks in
this group.
1 Report calibrated range groups that do not calculate concentrations for unnamed
peaks in this group.
2 Report named peak groups.
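For example (the peak-type code is shown as a placeholder; substitute the code used by your method), the third peak of a given type, or the second group of type 2 (named peak groups), would be addressed as:

```
"P3; <peak type>"
"G2; 2"
```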
Index

A
Actions Button 135
Add to Table 2
adjusting the group range 31
adjusting the retention time window 29
AIA
  export of standard files 73
ANDI
  file format 73
Area/Amount Response Factor Definition 37
ASCII Export Options 74
ASCII Sequence File Format 93

B
Back Tangent Skim 15
Backward Horizontal Baseline 13
Baseline Codes 33

C
Calculations
  area% report 55
  external standard report 56
  internal standard report 56
  normalization report 56
  system performance 57
Calibration
  Automatic Averaging 52
  Calib Weight 53
  calibration averages 51
  Matrix operations 49
  modified least squares 48
  modified least squares calculation 47
  replicates 52
  Response Factor Definition 36
  Scaling 54
  Weighting and Scaling 51
  Weighting Method 54
  Wt Average 51
Calibration Curve
  Average RF 50
  calculations 42
  cubic fit 46
  linear fit 43
  point-to-point fit 42
  quadratic fit 44
Calibration Curves 36
  External Standard 38
  Internal Standard 40
Capacity Factor 58
Concentration Calculations 36
Custom Parameter Programs 69

D
Data Export 71
  File extensions used for data export 77
  Peak and group export files 75
  Standard report export files 76, 78

E
Equations and Calculations 35
Export 73
Export Files 75
Exporting Chromatograms 73
Exporting Groups 73
Exporting Peaks 73
Exporting Standard Reports 73
ODBC 74

F
Force Peak Start/Stop 24

H
Horizontal Baseline 12

T
Tangent Skim 15, 16
Theoretical Plates 59, 62, 63, 65
Threshold 7

U
Uncalibrated peak concentrations 41
Uncalibrated Range group 41
User Programs 69

V
Valley to Valley 11

W
Width 7