LabVIEW Graphical Programming
Fourth Edition
Gary W. Johnson
Richard Jennings
McGraw-Hill
New York Chicago San Francisco Lisbon London
Madrid Mexico City Milan New Delhi San Juan
Seoul Singapore Sydney Toronto
Copyright © 2006 by The McGraw-Hill Companies. All rights reserved. Except as permitted under the United States Copyright
Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database
or retrieval system, without the prior written permission of the publisher.
ISBN: 978-0-07-150153-8
MHID: 0-07-150153-3
The material in this eBook also appears in the print version of this title: ISBN: 978-0-07-145146-8, MHID: 0-07-145146-3.
All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a
trademarked name, we use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of
infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps.
McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in
corporate training programs. To contact a representative please visit the Contact Us page at www.mhprofessional.com.
Information has been obtained by McGraw-Hill from sources believed to be reliable. However, because of the possibility of
human or mechanical error by our sources, McGraw-Hill, or others, McGraw-Hill does not guarantee the accuracy, adequacy, or
completeness of any information and is not responsible for any errors or omissions or the results obtained from the use of such
information.
TERMS OF USE
This is a copyrighted work and The McGraw-Hill Companies, Inc. (“McGraw-Hill”) and its licensors reserve all rights in and to
the work. Use of this work is subject to these terms. Except as permitted under the Copyright Act of 1976 and the right to store
and retrieve one copy of the work, you may not decompile, disassemble, reverse engineer, reproduce, modify, create derivative
works based upon, transmit, distribute, disseminate, sell, publish or sublicense the work or any part of it without McGraw-Hill’s
prior consent. You may use the work for your own noncommercial and personal use; any other use of the work is strictly
prohibited. Your right to use the work may be terminated if you fail to comply with these terms.
THE WORK IS PROVIDED “AS IS.” McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR WAR-
RANTIES AS TO THE ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM
USING THE WORK, INCLUDING ANY INFORMATION THAT CAN BE ACCESSED THROUGH THE WORK VIA
HYPERLINK OR OTHERWISE, AND EXPRESSLY DISCLAIM ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR
PURPOSE. McGraw-Hill and its licensors do not warrant or guarantee that the functions contained in the work will meet your
requirements or that its operation will be uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you
or anyone else for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting therefrom.
McGraw-Hill has no responsibility for the content of any information accessed through the work. Under no circumstances shall
McGraw-Hill and/or its licensors be liable for any indirect, incidental, special, punitive, consequential or similar damages that
result from the use of or inability to use the work, even if any of them has been advised of the possibility of such damages. This
limitation of liability shall apply to any claim or cause whatsoever whether such claim or cause arises in contract, tort or
otherwise.
Contents
Preface
Acknowledgments

Chapter 1  Roots
  LabVIEW and Automation
    Virtual instruments: LabVIEW's foundation
    Why use LabVIEW?
  The Origin of LabVIEW
    Introduction
    A vision emerges
    All the world's an instrument
    A hard-core UNIX guy won over by the Macintosh
    Putting it all together with pictures
    Favoring the underdog platform for system design
    Ramping up development
    Stretching the limits of tools and machine
    Facing reality on estimated development times
    Shipping the first version
    Apple catches up with the potential offered by LabVIEW
    LabVIEW 2: A first-rate instrument control product becomes a world-class programming system
    The port to Windows and Sun
    LabVIEW 3
    LabVIEW 4
    LabVIEW branches to BridgeVIEW
    LabVIEW 5
    The LabVIEW RT branch
    LabVIEW 6
    LabVIEW 7
    LabVIEW 8
  Crystal Ball Department
    LabVIEW influences other software products
  LabVIEW Handles Big Jobs
Preface
Twenty years have passed since the release of LabVIEW. During this
period, it has become the dominant programming language in the world
of instrumentation, data acquisition, and control. A product of National
Instruments Corporation (Austin, Texas), it is built upon a purely
graphical, general-purpose programming language, G, with extensive
libraries of functions, an integral compiler and debugger, and an appli-
cation builder for stand-alone applications. The LabVIEW development
environment runs on Apple Macintosh computers and IBM PC compat-
ibles with Linux or Microsoft Windows. Programs are portable among
the various development platforms. The concept of virtual instruments
(VIs), pioneered by LabVIEW, permits you to transform a real instru-
ment (such as a voltmeter) into another, software-based instrument
(such as a chart recorder), thus increasing the versatility of available
hardware. Control panels mimic real panels, right down to the switches
and lights. All programming is done via a block diagram, consisting of
icons and wires, that is directly compiled to executable code; there is no
underlying procedural language or menu-driven system.
Working with research instrumentation, we find LabVIEW
indispensable—a flexible, time-saving package without all the frustrat-
ing aspects of ordinary programming languages. The one thing LabVIEW
had been missing all these years was a useful application-oriented book.
The manuals are fine, once you know what you want to accomplish, and
the classes offered by National Instruments are highly recommended
if you are just starting out. But how do you get past that first blank
window? What are the methods for designing an efficient LabVIEW
application? What about interface hardware and real-world signal-
conditioning problems? In this book, we describe practical problem-
solving techniques that aren’t in the manual or in the introductory
classes—methods you learn only by experience. The principles and
techniques discussed in these pages are fundamental to the work of
a LabVIEW programmer. This is by no means a rewrite of the manu-
als or other introductory books, nor is it a substitute for a course in
simple: more than half the “LabVIEW” questions that coworkers ask us
turn out to be hardware- and signal-related. Information in this chap-
ter is vital and will be useful no matter what software you may use for
measurement and control. Chapter 14, “Writing a Data Acquisition Pro-
gram,” contains a practical view of data acquisition (DAQ) applications.
Some topics may seem at first to be presented backward—but for good
reasons. The first topic is data analysis. Why not talk about sampling
rates and throughput first? Because the only reason for doing data
acquisition is to collect data for analysis. If you are out of touch with the
data analysis needs, you will probably write the wrong data acquisition
program. Other topics in this chapter are sampling speed, throughput
optimization, and configuration management. We finish with some of
the real applications that you can use right out of the box.
LabVIEW RT brings the ease of graphical programming to the arcane
world of real-time system programming. In Chapter 15, “LabVIEW RT,”
we show you how LabVIEW RT works and how to achieve top per-
formance by paying attention to code optimization, scheduling, and
communications.
When software-timed real-time applications won’t fit the bill,
LabVIEW FPGA is the way to go. LabVIEW FPGA applications are not
constrained by processor or operating system overhead. With LabVIEW
FPGA you can write massively parallel hardware-timed digital con-
trol applications with closed loop rates in the tens of megahertz. Chap-
ter 16, “LabVIEW FPGA,” gives a solid introduction to programming
FPGAs with LabVIEW.
Embedded computer systems are all around us—in our cars, VCRs,
appliances, test equipment, and a thousand other applications. But until
now, LabVIEW has not been a viable development system for those min-
iaturized computers. Chapter 17, “LabVIEW Embedded,” introduces a
new version of LabVIEW capable of targeting any 32-bit microprocessor.
Chapter 18, “Process Control Applications,” covers industrial con-
trol and all types of measurement and control situations. We’ll look at
human-machine interfaces, sequential and continuous control, trend-
ing, alarm handling, and interfacing to industrial controllers, particu-
larly programmable logic controllers (PLCs). We frequently mention a
very useful add-on toolkit that you install on top of LabVIEW, called
the Datalogging and Supervisory Control Module (formerly available
as BridgeVIEW), which adds many important features for industrial
automation.
LabVIEW has a large following in physics research, so we wrote
Chapter 19, “Physics Applications.” Particular situations and solutions
in this chapter are electromagnetic field and plasma diagnostics, mea-
suring fast pulses with transient recorders, and handling very large
data sets. This last topic, in particular, is of interest to almost all users
Gary W. Johnson:
To my wife, Katharine
Richard Jennings:
To my Lord and Savior, Jesus Christ
Chapter 1
Roots
LabVIEW has certainly made life easier for Gary, the engineer. I remem-
ber how much work it was in the early 1980s, writing hideously long pro-
grams to do what appeared to be simple measurement and control tasks.
Scientists and engineers automated their operations only when it was
absolutely necessary, and the casual users and operators wouldn’t dare
to tamper with the software because of its complexity. This computer-
based instrumentation business was definitely more work than fun. But
everything changed when, in mid-1987, I went to a National Instru-
ments (NI) product demonstration. The company was showing off a new
program that ran on a Macintosh. It was supposed to do something with
instrumentation, and that sounded interesting. When I saw what those
programmers had done—and what LabVIEW could do—I was stunned!
Wiring up icons to write a program? Graphical controls? Amazing! I had
to get hold of this thing and try it out for myself.
By the end of the year, I had taken the LabVIEW class and started
on my first project, a simple data acquisition system. It was like watch-
ing the sun rise. There were so many possibilities now with this easy
and fun-to-use programming language. I actually started looking for
things to do with it around the lab (and believe me, I found them).
It was such a complete turnaround from the old days. Within a year,
LabVIEW became an indispensable tool for my work in instrumenta-
tion and control. Now my laboratories are not just computerized, but
also automated. A computerized experiment or process relies heavily
on the human operators—the computer makes things easier by tak-
ing some measurements and doing simple things like that, but it’s far
from being a hands-off process. In an automated experiment, on the
other hand, you set up the equipment, press the Start button on the
LabVIEW screen, and watch while the computer orchestrates a sequence
this concept if you work in an area that has a need for versatile data
acquisition. Use whatever spare equipment you may have, recycle some
tried-and-true LabVIEW programs, and pile them on a cart. It doesn’t
even matter what kind of computer you have since LabVIEW runs on
all versions of Windows, Power Macintosh, Sun SPARCstations, Linux,
and HP workstations. The crash cart concept is simple and marvel-
ously effective.
Automation is expensive: The costs of sensors, computers, software,
and the programmer’s effort quickly add up. But in the end, a marvelous
new capability can arise. The researcher is suddenly freed from the labor
of logging and interpreting data. The operator no longer has to orches-
trate so many critical adjustments. And data quality and product quality
rise. If your situation fits the basic requirements for which automation is
appropriate, then by all means consider LabVIEW as a solution.
Like any other tool, LabVIEW is useful only if you know how to use
it. And the more skilled you are in the use of that tool, the more you will
use it. After 19 years of practice, I’m really comfortable with LabVIEW.
It’s gotten to the point where it is at least as important as a word pro-
cessor, a multimeter, or a desktop calculator in my daily work as an
engineer.
In the beginning, there was only machine language, and all was dark-
ness. But soon, assembly language was invented, and there was a glim-
mer of light in the Programming World. Then came Fortran, and the light
went out.
*© 1990 IEEE. Reprinted, with permission, from IEEE Spectrum, vol. 27, no. 8,
pp. 36–39, August 1990.
Introduction
Prior to the introduction of personal computers in the early 1980s, nearly
all laboratories using programmable instrumentation controlled their
test systems using dedicated instrument controllers. These expensive,
single-purpose controllers had integral communication ports for control-
ling instrumentation using the IEEE-488 bus, also known as the General
Purpose Interface Bus (GPIB). With the arrival of personal computers,
however, engineers and scientists began looking for a way to use these
cost-effective, general-purpose computers to control benchtop instru-
ments. This development fueled the growth of National Instruments,
which by 1983 was the dominant supplier of GPIB hardware interfaces
for personal computers (as well as for minicomputers and other machines
not dedicated solely to controlling instruments).
So, by 1983, GPIB was firmly established as the practical mechanism for
electrically connecting instruments to computers. Except for dealing with
some differing interpretations of the IEEE-488 specification by instru-
ment manufacturers, users had few problems physically configuring their
systems. The software to control the instruments, however, was not in
such a good state. Almost 100 percent of all instrument control programs
developed at this time were written in the BASIC programming language
because BASIC was the dominant language used on the large installed
base of dedicated instrument controllers. Although BASIC had advan-
tages (including a simple and readable command set and interactive
capabilities), it had one fundamental problem: like any other text-based
programming language, it required engineers, scientists, and technicians
who used the instruments to become programmers. These users had to
translate their knowledge of applications and instruments into the lines
of text required to produce a test program. This process, more often than
not, proved to be a cumbersome and tedious chore, especially for those
with little or no prior programming experience.
A vision emerges
National Instruments, which had its own team of programmers struggling
to develop BASIC programs to control instrumentation, was sensitive to
the burden that instrumentation programming placed on engineers and
scientists. A new tool for developing instrumentation software programs
was clearly needed. But what form would it take? Dr. Jim Truchard and
Jeff Kodosky, two of the founders of National Instruments, along with Jack
MacCrisken, who was then a consultant, began the task of inventing this
tool. (See Figure 1.2.) Truchard was in search of a software tool that would
markedly change the way engineers and scientists approached their test
development needs. A model software product that came to mind was
the electronic spreadsheet. The spreadsheet addressed the same general
problem Truchard, Kodosky, and MacCrisken faced—making the com-
puter accessible to nonprogrammer computer users. Whereas the spread-
sheet addressed the needs of financial planners, this entrepreneurial trio
wanted to help engineers and scientists. They had their rallying cry—they
would invent a software tool that had the same impact for scientists and
engineers that the spreadsheet had on the financial community.
Over several years, Kodosky refined the concept of this test system to
the notion of instrumentation software as a hierarchy of virtual instru-
ments. A virtual instrument (VI) would be composed of lower level virtual
instruments, much like a real instrument was composed of printed cir-
cuit boards and boards composed of integrated circuits (ICs). The bottom-
level VIs represented the most fundamental software building blocks:
computational and input/output (I/O) operations. Kodosky gave particular
emphasis to the interconnection and nesting of multiple software layers.
Specifically, he envisioned VIs as having the same type of construction at
all levels. In the hardware domain, the techniques for assembling ICs into
boards are dramatically different from those for assembling boards into a chassis.
In the software domain, assembling statements into subroutines differs
from assembling subroutines into programs, and these activities differ
greatly from assembling concurrent programs into systems. The VI model
of homogeneous structure and interface, at all levels, greatly simplifies the
construction of software—a necessary achievement for improving design
productivity. From a practical point of view, it was essential that VIs have
a superset of the properties of the analogous software components they
were replacing. Thus, LabVIEW had to have the computational ability of
a programming language and the parallelism of concurrent programs.
In addition, because the user interface was an integral part of every VI, it
was always available for troubleshooting a system when a fault occurred.
(The virtual instrument concept was so central to LabVIEW’s incarnation
that it eventually became embodied in the name of the product. Although
Kodosky’s initial concerns did not extend to the naming of the product,
much thought would ultimately go into the name LabVIEW, which is an
acronym for Laboratory Virtual Instrument Engineering Workbench.)
program, however, requires a great deal of skill. What Kodosky wanted was
a software-diagramming technique that would be easy to use for concep-
tualizing a system, yet flexible and powerful enough to actually serve as a
programming language for developing instrumentation software.
Two visual tools Kodosky considered were flowcharts and state diagrams.
It was obvious that flowcharts could not help. These charts offered a visu-
alization of a process, but to really understand them you have to read the
fine print in the boxes on the chart. Thus, the chart occupies too much
space relative to the fine print yet adds very little information to a well-
formatted program. The other option, a state diagram, is flexible and pow-
erful but the perspective is very different from that of a block diagram.
Representing a system as a collection of state diagrams requires a great
amount of skill. Even after completion, the diagrams must be augmented
with textual descriptions of the transitions and actions before they can be
understood.
By the end of 1984, Kodosky had experimented with most of the diagram-
ming techniques, but they were all lacking in some way. Dataflow diagrams
were the easiest to work with up until the point where loops were needed.
Considering a typical test scenario, however, such as “take 10 measure-
ments and average them,” it’s obvious that loops and iteration are at the
heart of most instrumentation applications. In desperation, Kodosky began
to make ad hoc sketches to depict loops specifically for these types of opera-
tions. Loops are basic building blocks of modern structured programming
languages, but it was not clear how or if they could be drawn in a dataflow
concept. The answer that emerged was a box; a box in a dataflow diagram
could represent a loop. From the outside, the box would behave as any
other node in the diagram, but inside it would contain another diagram,
a subdiagram, representing the contents of the loop. All the semantics of
the loop behavior could be encapsulated in the border of the box. In fact,
all the common structures of structured programming languages could be
represented by different types of boxes. His structured dataflow diagrams
were inherently parallel because they were based on dataflow. In 1990, the
first two U.S. patents were issued, covering structured dataflow diagrams
and virtual instrument panels. (See Figure 1.3.)
Figure 1.3 LabVIEW For Loop and While Loop programming structures with Shift Registers to recirculate data from previous iterations. (The diagram labels a control input n, a constant 1, an n! indicator, a RUN toggling boolean, the iteration terminal, and the conditional terminal.)
the front panel. Of equal importance, the reviewers expressed great con-
fidence that they would be able to easily construct such diagrams to do
other applications. Now, the only remaining task was to write the soft-
ware, facetiously known as SMOP: small matter of programming.
Kodosky’s affinity for the Macintosh was not for aesthetic reasons alone.
The Macintosh system ROM contains high-performance graphics routines
collectively known as QuickDraw functions. The Macintosh’s most signifi-
cant graphics capability is its ability to manipulate arbitrarily shaped
regions quickly. This capability makes animated, interactive graphics
possible. The graphics region algebra performed by the Macintosh is fast
because of the unique coordinate system built into QuickDraw: the pixels
are between, not on, the gridlines. In addition, the graphics display of the
Macintosh uses square pixels, which simplifies drawing in general and
rotations of bitmaps in particular. This latter capability proves especially
useful for displaying rotating knobs and indicators on a VI front panel.
Ramping up development
Kodosky hired several people just out of school (and some part-time people
still in school) to staff the development team. Without much experience,
none of the team members was daunted by the size and complexity of the
software project they were undertaking and instead they jumped into it
with enthusiasm. The team bought 10 Macintoshes equipped with 512
kilobytes of memory and internal hard-disk drives called HyperDrives.
They connected all the computers to a large temperamental disk server.
The team took up residence in the same office near campus used by
Kodosky for his brainstorming session. The choice of location resulted in
11 people crammed into 800 square feet. As it turned out, the working con-
ditions were almost ideal for the project. There were occasional distrac-
tions with that many people in one room but the level of communication
was tremendous. When a discussion erupted between two team members,
it would invariably have some impact on another aspect of the system
they were inventing. The other members working on aspects of the project
affected by the proposed change would enter the discussion and quickly
resolve the issue. The lack of windows and a clock also helped the team
stay focused. (As it turned out, the developers were so impressed with
the productivity of the one-room team concept that the LabVIEW group
is still located in one large room, although it now has windows with a
great view.)
They worked for long hours and couldn’t afford to worry about the time.
All-nighters were the rule rather than the exception and lunch break often
didn’t happen until 3 P.M. There was a refrigerator and a microwave in the
room so the team could eat and work at the same time. The main nutri-
tional staples during development were double-stuff Oreo cookies and pop-
corn, and an occasional mass exodus to Armin’s for Middle Eastern food.
the problem. Each time it occurred, the vendor expanded the tables. These
fixes would last for a couple months until the project grew to overflow
them again. The project continued to challenge the capabilities of the
development tools for the duration of the project.
The next obstacle encountered was the Macintosh jump table. The project
made heavy use of object-oriented techniques, which resulted in lots of
functions, causing the jump tables to overflow. The only solution was to
compromise on design principles and work within the limits imposed by
the platform. As it turned out, such compromises would become more com-
monplace in the pursuit of acceptable performance.
The last major obstacle was memory. The project was already getting too
large for the 512-kilobyte capacity of the Macintosh, and the team still hadn't
implemented all the required functions, let alone the desirable ones they
had been hoping to include. The prospects looked dim for implementing the
complete system on a DOS-based PC, even with extensive use of overlay-
ing techniques. This situation almost proved fatal to the project. The team
was at a dead end and morale was at an all-time low. It was at this oppor-
tune time that Apple came to the rescue by introducing the Macintosh
Plus in January 1986. The Macintosh Plus was essentially identical to the
existing Macintosh except that it had a memory capacity of one megabyte.
Suddenly, there was enough memory to implement and run the product
with most of the features the team wanted.
Once again, the issue of the marketability of the Macintosh arose. A quick
perusal of the DOS-based PC market showed that the software and hard-
ware technology had not advanced very much. Kodosky decided (with
approval by Dr. Truchard after some persuasion) that, having come this
far on the Macintosh, they would go ahead and build the first version of
LabVIEW on the Macintosh. By the time the first version of LabVIEW
was complete, there would surely, they thought, be a new PC that could
run large programs.
It was at this point that the company became over-anxious and tried to
force the issue by prematurely starting beta testing. The testing was a
fiasco. The software was far from complete. There were many bugs encoun-
tered in doing even the most simple and common operations. Development
nearly ground to a halt as the developers spent their time listening to beta
testers calling in the same problems. (See Figure 1.6.)
As the overall design neared completion, the team began focusing more
on details, especially performance. One of the original design goals was to
match the performance of interpreted BASIC. It was not at all clear how
much invention or redesign it would require to achieve this performance tar-
get, making it impossible to predict when the team would achieve this goal.
On most computational benchmarks, the software was competitive with
BASIC. There was one particular benchmark, the Sieve of Eratosthenes,
that posed, by nature of its algorithm and design, particular problems for
dataflow implementations. The performance numbers the team measured
for the sieve benchmark were particularly horrendous and discouraging—a
fraction of a second for a compiled C program, two minutes for interpreted
BASIC, and over eight hours for LabVIEW.
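For readers who haven't met it, the sieve is a short algorithm; the sketch below (our own Python rendering, not the benchmark code the team actually ran) suggests why it stresses a pure dataflow model: the inner loop repeatedly updates an array in place, and under strict dataflow semantics each update conceptually produces a new copy of the array flowing along a wire.

```python
# Sieve of Eratosthenes, sketched in Python for reference only (not the original benchmark).
# The repeated in-place array updates are what made this workload so painful for an early
# dataflow implementation, where modifying an array can imply copying the whole thing.
def sieve(limit):
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False   # in-place update, executed thousands of times
    return [n for n in range(limit + 1) if is_prime[n]]

print(len(sieve(8191)))   # number of primes up to 8191
```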
Kodosky did his best to predict when the software would be complete,
based on the number of bugs, but was not sure how the major bugs would
be found, much less fixed. Efficiently testing such a complex interactive
program was a vexing and complex problem. The team finally settled on
the bug day approach. They picked a day when the entire team would
stop working on the source code and simply use LabVIEW. They would
try all types of editing operations, build as many and varied VIs as they
could, and write down all the problems they encountered until the white
boards on every wall were full. The first bug day lasted only three hours.
The team sorted the list and for the next five weeks fixed all the fatal
flaws and as many minor flaws as possible. Then they had another bug
day. They repeated this process until they couldn’t generate even one fatal
flaw during an entire day. The product wasn’t perfect, but at least it would
not be an embarrassment.
box products that converted these ports to IEEE-488 control ports, but
performance suffered greatly.
The Macintosh II’s open architecture made it possible to add not only
IEEE-488 support but also other much-needed I/O capabilities, such as
analog-to-digital conversion and digital I/O. The Macintosh II used the
NuBus architecture, an IEEE standard bus that gave the new machine
high-performance 32-bit capabilities for instrumentation and data acqui-
sition that were unmatched by any computer short of a minicomputer (the
PC’s bus was 16 bits). With the flexibility and performance afforded by the
new Macintosh, users now had access to the hardware capabilities needed
to take full advantage of LabVIEW’s virtual instrumentation capabilities.
Audrey Harvey (now a system architect at National Instruments) led the
hardware development team that produced the first Macintosh II NuBus
interface boards, and Lynda Gruggett (now a LabVIEW consultant) wrote
the original LabDriver VIs that supported this new high-performance I/O.
With such impressive new capabilities and little news from the PC world,
National Instruments found itself embarking on another iteration of
LabVIEW development, still on the Macintosh.
While the development team was working with the new Macintosh, they
were also scrambling to meet some of the demands made by customers.
They were simultaneously making incremental improvements in per-
formance, fixing flaws that came to light after shipment began, and try-
ing to plan future developments. As a result of this process, LabVIEW
progressed from version 1.0 to version 1.2 (and the sieve progressed to
23 seconds). LabVIEW 1.2 was a very reliable and robust product. I,
for one, wrote a lot of useful programs in version 1.2 and can’t recall
crashing. (See Figures 1.7 and 1.8.)
LabVIEW 2’s compiler is especially notable not only for its performance but
for its integration into the development system. Developing in a standard
programming language normally requires separate compilation and link-
ing steps to produce an executable program. The LabVIEW 2 compiler is
Figure 1.9 The team that delivered LabVIEW 2 into the hands of
engineers and scientists (clockwise from upper left): Jack Barber,
Karen Austin, Henry Velick, Jeff Kodosky, Tom Chamberlain, Debo-
rah Batto, Paul Austin, Wei Tian, Steve Chall, Meg Fletcher, Rob
Dye, Steve Rogers, and Brian Powell. Not shown: Jeff Parker, Jack
MacCrisken, and Monnie Anderson.
LabVIEW 3
The LabVIEW 2.5 development effort established a new and flexible
architecture that made the unification of all three versions in LabVIEW
3 relatively easy. LabVIEW 3, which shipped in July of 1993, included a
number of new features beyond those introduced in version 2.5. Many
of these important features were the suggestions of users accumulated
over several years. Kodosky and his team, after the long and painful port,
finally had the time to do some really creative programming. For instance,
there had long been requests for a method by which the characteristics of
controls and indicators could be changed programmatically. The Attribute
Node addressed this need. Similarly, Local Variables made it possible to
both read from and write to controls and indicators. This is an extension
of strict dataflow programming, but it is a convenient way to solve many
tricky problems. Many subtle compiler improvements were also made that
enhance performance, robustness, and extensibility. Additional U.S. pat-
ents were issued in 1994 covering these extensions to structured dataflow
diagrams such as globals and locals, occurrences, attribute nodes, execu-
tion highlighting, and so forth.
Figure 1.10 “Some new features are so brilliant that eye protection is recommended when view-
ing.” The team that delivered LabVIEW 2.5 and 3.0 includes (left to right, rear) Steve Rogers, Thad
Engeling, Duncan Hudson, Keving Woram, Greg Richardson, Greg McKaskle; (middle) Dean Luick,
Meg Kay, Deborah Batto-Bryant, Paul Austin, Darshan Shah; (seated) Brian Powell, Bruce Mihura.
Not pictured: Gregg Fowler, Apostolos Karmirantzos, Ron Stuart, Rob Dye, Jeff Kodosky, Jack
MacCrisken, Stepan Riha.
LabVIEW 4
Like many sophisticated software packages, LabVIEW has both ben-
efited and suffered from feature creep: the designers respond to every
user request, the package bulks up, and pretty soon the beginner is over-
whelmed. April 1996 brought LabVIEW 4 to the masses, and, with it,
some solutions to perceived ease-of-use issues. Controls, functions, and
tools were moved into customizable floating palettes, menus were reor-
ganized, and elaborate online help was added. Tip strips appear when-
ever you point to icons, buttons, or other objects. Debugging became much
more powerful. And even the manuals received an infusion of valuable
new information. I heard significant, positive feedback from new users on
many of these features.
LabVIEW 5
For LabVIEW 5, Jeff Kodosky’s team adopted completely new development
practices based on a formalized, milestone-driven scheme that explicitly
measured software quality. (See Figure 1.11.) Bugs were tracked quan-
titatively during all phases of the development cycle, from code design
to user beta testing. The result, in February 1998, was the most reliable
LabVIEW release ever, in spite of its increased complexity.
Undo was positively the most-requested feature among users, all the way
back to the dawn of the product. But it was also considered one of the
most daunting. Behind the scenes, LabVIEW is an extremely complex col-
lection of objects—orders of magnitude more complicated than any word
processor or drawing program. The simpleminded approach to implement-
ing undo is simply to duplicate the entire VI status at each editing step.
However, this quickly leads to excessive memory usage, making only a
single-level undo feasible. Instead, the team devised an incremental undo
strategy that is fast, memory efficient, and supports multiple levels. The approach is novel enough that it's patented. Of course, it took a while to develop, and it
involved a complete rewrite of the LabVIEW editor. That was step 1.
As any programmer can tell you, the simpler a feature appears, the more
complex is its underlying code—and that certainly describes the conver-
sion of LabVIEW to multithreading. The major task was to evaluate every
one of the thousands of functions in LabVIEW to determine whether it was
thread-safe, reentrant, or needed some form of protection. This evaluation
process was carried out by first writing scripts in PERL (a string manipu-
lation language) that analyzed every line of code in the execution system,
and then having the programmers perform a second (human) evaluation.
As it turned out, about 80 percent of the code required no changes, but
every bit had to be checked. Kodosky said the process was “. . . like doing
a heart and intestine transplant.”
The third and final step in the Great Rewrite involved the compiler. As you
know by now, LabVIEW is unique in the graphical programming world in
that it directly generates executable object code for the target machine.
Indeed, that compiler technology is one of the crown jewels at National
Instruments. At the time, one of the latent shortcomings in the compiler
was the difficulty involved in supporting multiple platforms. Every time a
new one was added, a great deal of duplicated effort was needed to write
another platform-specific compiler process. The solution was to create a
new platform-independent layer that consolidated and unified the code
prior to object code generation.
Now adding a new platform involves only the lowest level of object code
generation, saving time and reducing the chances for new bugs. It also
reduced the code size of the compiler by several thousand lines. It’s
another of those underlying technology improvements that remains invis-
ible to the user except for increased product reliability.
80486 processor, which was chosen for its code compatibility and adequate
performance at low cost. It was a reasonable engineering decision for an
initial product release.
LabVIEW 6
June 2000 brought us LabVIEW 6, with a host of new user-interface
features (rendered in 3D, no less), extensive performance and memory
optimization at all levels, and perhaps most important, a very powerful
VI Server. Introduced in LabVIEW 5, the VI Server gives the user exter-
nal hooks into VIs. For instance, you can write programs that load and
run VIs, access VI documentation, and change VI setup information, all
without directly connecting anything to the target VI. Furthermore, most
of these actions can be performed by applications other than LabVIEW,
including applications running elsewhere on a network. This new para-
digm effectively publishes LabVIEW, making it a true member of an
application system rather than exclusively running the show as it had in
the past. You can even export shared libraries (DLLs) in LabVIEW. From
the perspective of professional LabVIEW programmers, we believe that
LabVIEW 6 has truly fulfilled the promise of graphical programming.
LabVIEW 7
LabVIEW 7 Express was released in April of 2003 with new features and
wizards designed to make programming and getting started easier for
Figure 1.12 My, how we have grown! This is most of the LabVIEW 6 team.
LabVIEW 7 also brings LabVIEW to new platforms far from the desktop.
Targets include a variety of compact computers and operating systems,
including Windows CE, Palm OS, and the ARM processor family. New in
LabVIEW is a compiler for field-programmable gate arrays (FPGAs).
LabVIEW for FPGA continues National Instruments’ push of LabVIEW
Everywhere. This is more than just marketing hype. With LabVIEW
FPGA, engineers are now able to convert LabVIEW VIs into hardware
implementations of G code.
LabVIEW 8
Released in October 2005, LabVIEW 8 introduced new tools to make
LabVIEW developers more productive and application development and
integration across a wide range of platforms easier. The LabVIEW proj-
ect provides a cohesive environment for application developers. Finally, at
long last, we can simultaneously develop applications for multiple targets.
Developers using LabVIEW FPGA and RT can develop applications on the
host and the target without having to close one environment and switch to
another. The project interface provides a relatively painless way to manage
development and deployment of large applications by many developers.
LabVIEW is the number one programming language in test and measurement, and it's time we had some big-time tools to manage our projects. Look for more object-oriented programming as LabVIEW includes built-in OOLV tools. National Instruments continues to take LabVIEW deeper
into education and simulation with the educational version of LabVIEW
for DSPs and the new LabVIEW MathScript. MathScript is an integral
part of LabVIEW that combines dataflow with text-based mathematical
programming. And as shown in Figure 1.13, LabVIEW keeps growing.
Figure 1.13 LabVIEW keeps growing and so does the development team. This is the team circa
2003. Photo courtesy of National Instruments Corporation.
In the past, LabVIEW was typically the tool for testing the widgets being
produced, but now it is also helping design the widget and may even be
part of the widget.
Chapter 2
Getting Started
Make no mistake about it: LabVIEW is a programming language, and a
sophisticated one at that. It takes time to learn how to use it effectively.
After you buy the program, you should go through the tutorial man-
ual, and then hopefully you’ll take the basic LabVIEW class, and later
the advanced LabVIEW class; even though it costs money and takes
time, it’s worth it. Ask any of the MBAs in the front office; they know
the value of training in terms of dollars and cents as well as worker
productivity. Remember to keep working with LabVIEW; training goes
stale without practice. There is a great deal of information to absorb.
It’s good to read, experiment, and then read some more. For the pur-
poses of this book, we assume that you’ve been working with LabVIEW
and know the basic concepts.
Another thing you should do is to spend plenty of time looking at the
example VIs that come with the LabVIEW package. Collectively, the
examples contain about 80 percent of the basic concepts that you really
need to do an effective job, plus a lot of ready-to-use drivers and special
functions. The rest you can get right here. Remember that this book is
not a replacement for the user’s manual. Yes, we know, you don’t read
user’s manuals either, but it might be a good idea to crack the manu-
als the next time you get stuck. Most folks think that the manuals are
pretty well written. And the online help fills in details for every pos-
sible function. What we cover in this book are important concepts that
should make your programming more effective.
Figure 2.1 A standard for printing Sequence and Case structures (frames labeled 0 [0..2] and 1 [0..2] are shown side by side). This is but one compromise we must make when displaying LabVIEW programs on paper.
[Figure: "How a procedural language tells the computer what to do" versus "How your computer thinks (if you call this thinking) when programmed with control flow." The control-flow sequence shown: get input A, get input B, process A and B, display result.]
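As a rough textual analogy (our own Python sketch, not LabVIEW code and not from the book), the same get-A, get-B, process, display job can be expressed in dataflow terms: a node fires as soon as all of its inputs have arrived, so the two input nodes are free to run in either order, or at the same time.

```python
# A tiny dataflow-flavored sketch of the figure's example.  The dependency structure,
# not a fixed statement order, decides when each "node" may run.
import concurrent.futures as cf

def get_input_a():
    return 2.0            # stand-in for reading one instrument

def get_input_b():
    return 3.0            # stand-in for reading another instrument

def process(a, b):
    return a + b

with cf.ThreadPoolExecutor() as pool:
    fa = pool.submit(get_input_a)     # no wire between these two nodes,
    fb = pool.submit(get_input_b)     # so they may execute concurrently
    result = process(fa.result(), fb.result())   # fires only when both inputs exist
print("display:", result)
```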
The parts of a VI
We see VIs as front panels and block diagrams, but there is a lot that
you don’t see. Each VI is a self-contained piece of LabVIEW software
with four key components (Figure 2.3):
■ Front panel. The front panel code contains all the resources for
everything on the front panel. This includes text objects, decorations,
controls, and indicators. When a VI is used as a subVI, the front panel
code is not loaded into memory unless the front panel is open or the
VI contains a Property node.
■ Block diagram. The block diagram is the dataflow diagram for the
VI. The block diagram is not loaded into memory unless you open the
block diagram or the VI needs to be recompiled.
■ Data. The VI data space includes the default front panel control and
indicator values, block diagram constants, and required memory buf-
fers. The data space for a VI is always loaded into memory. Be careful
when you make current values default on front panel controls or use
large arrays as block diagram constants; all that data is stored on
disk in your VI.
■ Code. This is the compiled machine code for your subVI. This is
always loaded into memory. If you change platforms or versions of
LabVIEW, the block diagram will be reloaded and the code will be
recompiled.
Each VI is stored as a single file with the four components above plus
the linker information about the subVIs, controls, DLLs, and external
code resources that the VI needs. When a VI is loaded into memory, it
also loads all its subVIs, and each subVI loads its subVIs, and so on,
until the entire hierarchy is loaded.
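Purely as a mental model (this is our own Python sketch with invented names, not LabVIEW's actual internal representation), the four components and their loading rules might be pictured like this:

```python
# A conceptual sketch of what a VI carries and what stays resident in memory.
from dataclasses import dataclass, field

@dataclass
class VI:
    name: str
    data_space: dict = field(default_factory=dict)  # default values, diagram constants: always loaded
    code: bytes = b""                                # compiled machine code: always loaded
    front_panel: object = None                       # loaded only if the panel is open or a Property node needs it
    block_diagram: object = None                     # loaded only for editing or recompilation
    subvis: list = field(default_factory=list)       # linker info: the rest of the hierarchy

    def load(self, panel_open=False, has_property_node=False):
        """Approximate what comes into memory when this VI is called as a subVI."""
        resident = {"data": self.data_space, "code": self.code}
        if panel_open or has_property_node:
            resident["front panel"] = self.front_panel
        # the block diagram stays on disk unless a recompile is required
        for sub in self.subvis:          # loading a VI pulls in its entire subVI hierarchy
            sub.load()
        return resident
```

The point is the one made above: data and code are always resident, panels and diagrams load on demand, and loading a top-level VI loads everything beneath it.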
successfully used by LabVIEW since its birth, and it is still the process
being used today. Beginning with LabVIEW 5, we’ve seen its effective-
ness multiplied through multithreading. A thread is an independent
program within a program that is managed by the operating system.
Modern operating systems are designed to run threads efficiently,
and LabVIEW receives a noticeable improvement from multithread-
ing. Each LabVIEW thread contains a copy of the LabVIEW execution
engine and the clumps of code that the engine is responsible for run-
ning. Within each thread the execution engine still multitasks clumps
of code, but now the operating system schedules where and when each
thread runs. On a multiprocessor system, the operating system can
even run threads on different processors in parallel. Today LabVIEW
on all major platforms is multithreaded and multitasking and able to
take advantage of multiprocessor systems. Figure 2.5 illustrates the
multiple threads within LabVIEW’s process. The threads communicate
via a carefully controlled messaging system. Note that one thread is
dedicated to the user interface. Putting the user interface into a sepa-
rate thread has several advantages:
■ Decoupling the user interface from the rest of your program allows
faster loop rates within your code.
■ The user interface doesn’t need to run any faster than the refresh
rate on most monitors—less than 100 Hz.
■ Decoupling the user interface from the rest of the program makes
it possible to run the user interface on a separate computer. This is
what happens in LabVIEW RT.
What does all this mean? It means the two For Loops in Figure 2.4 can
run in parallel and you don’t have to do anything other than draw them
that way. The LabVIEW compiler generates code that runs in parallel,
manages all the resources required by your program, spreads the execu-
tion around smoothly, and does it all transparently. Imagine if you had
to build all this into a test system yourself. National Instruments has
spent more than 20 years making sure LabVIEW is the best program-
ming environment in test and measurement. You don’t need to know
the details of the execution engine or multithreading to program in
LabVIEW, but we think it will make you a better programmer. Stream-
line your block diagram to take advantage of dataflow and parallelism,
and avoid littering your block diagram with user-interface code.
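As a rough comparison (our own Python, not anything generated by LabVIEW), here is approximately what you would have to write by hand in a text-based language to get the concurrency that two independent block-diagram loops give you for free:

```python
# Two independent loops, run in parallel.  On a block diagram you just draw them;
# in a text language you must create and manage the threads yourself.
import threading

def loop_a():
    for i in range(5):
        print("loop A, iteration", i)   # imagine an acquisition task here

def loop_b():
    for i in range(5):
        print("loop B, iteration", i)   # imagine a logging or display task here

t1 = threading.Thread(target=loop_a)
t2 = threading.Thread(target=loop_b)
t1.start(); t2.start()
t1.join(); t2.join()
```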
Front panel
The front panel is the user’s window into your application. Take the
time to do it right. Making a front panel easy to understand while
satisfying complex requirements is one of the keys to successful vir-
tual instrument development. LabVIEW, like other modern graphical
presentation programs, has nearly unlimited flexibility in graphical
user-interface design. But to quote the manager of a large graphic
arts department, “We don’t have a shortage of technology, we have
a shortage of talent!” After years of waiting, we can finally refer you
to a concise and appropriate design guide for user-interface design,
Dave Ritter’s book LabVIEW GUI (2001). Some industries, such as the
nuclear power industry, have formal user-interface guidelines that are
extremely rigid. If you are involved with a project in such an area, be
sure to consult the required documents. Otherwise, you are very much
on your own.
Take the time to look at high-quality computer applications and see
how they manage objects on the screen. Decide which features you
would prefer to emulate or avoid. Try working with someone else’s
LabVIEW application without getting any instructions. Is it easy to
understand and use, or are you bewildered by dozens of illogical, unla-
beled controls, 173 different colors, and lots of blinking lights? Observe
your customer as he or she tries to use your VI. Don’t butt in; just
watch. You’ll learn a lot from the experience, and fast! If the user gets
stuck, you must fix the user interface and/or the programming prob-
lems. Form a picture in your own mind of good and bad user-interface
design. Chapter 8, “Building an Application,” has some more tips on
designing a user interface.
One surefire way to create a bad graphical user interface (GUI) is to
get carried away with colors, fonts, and pictures. When overused, they
quickly become distracting to the operator. (Please don’t emulate the
glitzy screens you see in advertising; they are exactly what you don’t
want.) Instead, stick to some common themes. Pick a few text styles
and assign them to certain purposes. Similarly, use a standard back-
ground color (such as gray or a really light pastel), a standard highlight
color, and a couple of status colors such as bright green and red. Human
factors specialists tell us that consistency and simplicity really
are the keys to designing quality man-machine interfaces (MMIs).
Controls
LabVIEW’s controls have a lot of built-in functionality that you should
use for best effect in your application. And of course this built-in func-
tionality runs in the user-interface thread. Quite a bit of display work,
computational overhead, and user-interface functionality has already
been done for you. All LabVIEW’s graphs support multiple scales on
both the X and Y axes, and all charts support multiple Y scales. Adjust
the chart update modes if you want to see the data go from left to right,
in sweep mode, update all at once in scope mode, or use the default
strip chart mode going from right to left. Control properties are all
easily updateable from the properties page. Figure 2.6 shows the prop-
erty page for an OK button. Changing the button behavior (also called
mechanical action) can simplify how you handle the button in your
application. The default behavior for the OK button is “Latch when
released,” allowing users to press the buttons, change their minds, and
move the mouse off the button before releasing. Your block diagram is
never notified about the button press. On the other hand, latch when
pressed notifies the block diagram immediately. Experiment with the
mechanical behavior; there’s a 1-in-6 chance the default behavior is
right for your application.
Figure 2.6 Control properties are modified through the Property pages. Boolean buttons have six different mechanical actions that affect your block diagram.
Property nodes
Property nodes are an extremely flexible way to manipulate the appear-
ance and behavior of the user interface. You create a Property node by
popping up on a control or indicator and choosing Create . . . Prop-
erty Node. This creates a Property node that is statically linked to
the control. You do not need to wire a reference to it for it to access the
control. The disadvantage is that statically linked Property nodes can
only reference front panel objects on their VI. Figure 2.7 shows both
a statically linked and a dynamically linked Property node. With a
control reference you can access properties of controls that aren’t on
the local front panel. This is a handy way to clean up a block diagram
and generate some reusable user-interface code within a subVI. Every
control and every indicator has a long list of properties that you can
read or write. There can be multiple Property nodes for any given front
panel item. A given Property node can be resized by dragging at a cor-
ner to permit access to multiple attributes at one time. Each item in
the list can be either a read or a write, and execution order is sequen-
tial from top to bottom. We’re not going to spend a great deal of time
on Property nodes because their usage is highly dependent upon the
particular control and its application. But you will see them through-
out this book in various examples. Suffice it to say that you can change
almost any visible characteristic of a panel item, and it’s worth your
time to explore the possibilities.
Figure 2.7 Statically linked Property nodes can only reference front panel objects within the scope of their VIs. Dynamically linked Property nodes use a control reference to access properties of controls that may be in other VIs.
There are some performance issues to be aware of with Property
nodes. Property nodes execute in the user-interface thread. Remember,
in our discussion of dataflow and LabVIEW’s execution system, how
each node depends on having all its data before it can proceed. Indis-
criminately scattering Property nodes throughout your program will
seriously degrade performance as portions of your block diagram wait
while properties execute in the user interface. A general rule of thumb
is to never place Property nodes within the main processing loop(s).
Use a separate UI loop to interact with the front panel. Look at Chap-
ter 3, “Controlling Program Flow,” to see how you can use Property
nodes within the event structure.
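The "separate UI loop" rule of thumb can be sketched in a text language as a simple producer and consumer (our own Python illustration; the queue, rates, and names are invented): the processing loop hands values off and never waits on the slower display loop.

```python
# Decoupling a fast processing loop from a slow user-interface loop with a queue.
import queue, threading, time

updates = queue.Queue()

def processing_loop():
    for i in range(100):
        updates.put(i * 0.1)      # hand the value off; no UI work in this loop
        time.sleep(0.001)         # the fast loop keeps running at full rate
    updates.put(None)             # sentinel: tell the UI loop we're done

def ui_loop():
    while True:
        value = updates.get()
        if value is None:
            break
        print(f"display: {value:.1f}")   # a real front panel update would go here
        time.sleep(0.05)                 # the display runs at its own, slower pace

threading.Thread(target=processing_loop).start()
ui_loop()
```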
Property nodes can increase the memory requirements of your appli-
cation. Property nodes within a subVI will cause the subVI’s front panel
to be loaded into memory. This may or may not be an issue, depending
on what you have on your front panel, or how many subVIs have Prop-
erty nodes on them. Try to limit the number of Property nodes and
keep them contained with only a few VIs. You could even build a reus-
able suite of user-interface VIs by using Property nodes and references
to front panel items.
One property that you want to avoid is the Value property. This
is such a convenient property to use, but it is a performance hog. In
LabVIEW 6 writing a control’s value with the Value property was
benchmarked at almost 200 times slower than using a local variable.
And a local variable is slower than using the actual control. LabVIEW
is optimized for dataflow. Read and write to the actual controls and
indicators whenever possible. You can do incredible things with Prop-
erty nodes; just have the wisdom to show restraint.
Block diagram
The block diagram is where you draw your program’s dataflow diagram.
For simple projects you might need only a few Express VIs. These are
the “Learn LabVIEW in Three Minutes” VIs that salespeople are so
fond of showing. They are great to experiment with, and you can solve
many simple projects with just a few Express VIs. It is really easy to
take measurements from a DAQ card and log them to disk, compared to LabVIEW 2. However, when the time comes to build a complete applica-
tion to control your experiment, you’ll probably find yourself scratching
your head and wondering what to do. Read on. In Chapter 3, “Control-
ling Program Flow,” we’ll talk about design patterns that you can apply
to common problems and build on.
A large part of computer programming consists of breaking a com-
plex problem into smaller and smaller pieces. This divide-and-con-
quer technique works in every imaginable situation from designing
a computer program to planning a party. In LabVIEW the pieces are
subVIs, and we have an advantage that other programming environ-
ments don’t—each subVI is capable of being run and tested on its own.
You don’t have to wait until the application is complete and run against
massive test suites. Each subVI can, and should, be debugged as it is
developed. LabVIEW makes it easy to iteratively develop, debug, and
make all the pieces work together.
SubVIs
Approach each problem as a series of smaller problems. As you break
it down into modular pieces, think of a one-sentence statement that
clearly summarizes the purpose. This one sentence is your subVI. For
instance, one might be “This VI loads data from a series of transient
recorders and places the data in an output array.” Write this down
with the labeling tool on the block diagram before you put down a
single function; then build to that statement. If you can’t write a sim-
ple statement like that, you may be creating a catchall subVI. Also
write the statement in the VI description to help with documenta-
tion. Comments in the VI description will also show up in the context
help window. Refer to Chapter 9, “Documentation,” for more tips on
documenting your VIs. Consider the reusability of the subVIs you cre-
ate. Can the function be used in several locations in your program?
If so, you definitely have a reusable module, saving disk space and
memory. If the subVI requirements are almost identical in several
locations, it’s probably worth writing it in such a way that it becomes
a universal solution—perhaps it just needs a mode control. Refer
to Chapter 8, “Building an Application,” for more help on general
application design.
Icons
Each subVI needs an icon. Don’t rest content with the default Lab-
VIEW icon; put some time and effort into capturing the essence of
what the VI does. You can even designate terminals as being manda-
tory on the connector pane. They will show up in bold on the context
help window. Also create an easily recognizable theme for driver VIs.
Figure 2.8 shows the VI hierarchy for the Agilent 34401 Multimeter.
Figure 2.8 VI hierarchy for Agilent 34401 Multimeter. The banner at the top of each icon gives
the VIs a common theme. Each graphic succinctly captures VI function.
Polymorphic VIs
Polymorphic VIs, introduced in LabVIEW 6, allow you to handle mul-
tiple data types with a single VI, thus going beyond the limitations
of LabVIEW’s normal polymorphism. A polymorphic VI is like a con-
tainer for a set of subVIs, each of which handles a particular data type.
All those subVIs must have identical connector panes, but otherwise
you’re free to build them any way you like, including totally differ-
ent functionality. To create a new polymorphic VI, visit the File menu,
choose New . . . , and in the dialog box, pick Polymorphic VI. Up pops
a dialog box in which you can select each of your subVIs. You can also
give it an icon, but the connector pane is derived from the subVI. After
you save it, your new polymorphic VI behaves just as any other VI,
except that its inputs and outputs can adapt to the data types that you
wire to them. This is called compile-time polymorphism because
it’s accomplished before you run the VI. LabVIEW doesn’t really sup-
port polymorphism at run time (something that C++ does), although
crafty users sometimes come up with strange type casting solutions
that behave that way. Almost.
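For readers who think in C, here is a rough analogy of ours (not a LabVIEW mechanism): C11's _Generic selection picks a type-specific implementation when the code is compiled, much as a polymorphic VI selects a type-specific subVI based on the data type you wire to it. The function names below are invented for the sketch.

    #include <stdio.h>

    /* Type-specific implementations, analogous to the subVIs inside a polymorphic VI. */
    static double twice_dbl(double x) { return 2.0 * x; }
    static int    twice_int(int x)    { return 2 * x; }

    /* Compile-time dispatch on the argument's type; the caller just writes twice(x). */
    #define twice(x) _Generic((x), double: twice_dbl, int: twice_int)(x)

    int main(void)
    {
        printf("%g\n", twice(2.5));  /* the compiler selects twice_dbl */
        printf("%d\n", twice(2));    /* the compiler selects twice_int */
        return 0;
    }

The choice is made entirely at compile time, which is the sense in which polymorphic VIs are compile-time rather than run-time polymorphism.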
Data
As you write your one-sentence description of each VI, consider the
information that has to be passed from one part of the program to
another. Make a list of the inputs and outputs. How do these inputs
and outputs relate to other VIs in the hierarchy that have to access
these items? Do the best you can to think of everything ahead of time.
At least, try not to miss the obvious such as a channel number or error
cluster that needs to be passed just about everywhere.
Think about the number of terminals available on the connector pane
and how you would like the connections to be laid out. Is there enough
room for all your items? If not, use clusters to group related items.
Always leave a few uncommitted terminals on the connector pane in
case you need to add an item later. That way, you don’t have to rewire
everything.
Clusters
A cluster is conceptually the same as a record in Pascal or a struct
in C. Clusters are normally used to group related data elements
that are used in multiple places on a diagram. This reduces wiring clutter and keeps the diagram readable.
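To make the struct analogy concrete, here is a small C sketch of our own (the field names are invented for illustration); a cluster bundles related values into one wire the way a struct bundles them into one variable:

    #include <stdio.h>

    /* One "cluster" of related channel data travels as a single unit. */
    struct Channel {
        char   name[32];
        double value;
        int    active;    /* boolean flag */
    };

    int main(void)
    {
        struct Channel ch = { "Channel 0", 0.26, 1 };
        printf("%s = %f (active = %d)\n", ch.name, ch.value, ch.active);
        return 0;
    }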
Figure 2.9 The Modules data structure at the upper left implies the algorithm at the bottom because the way that the cluster arrays are nested is related to the way the For Loops are nested.
Typedefs
An important feature of LabVIEW that can help you manage compli-
cated data structures is the Type Definition, which performs the
same function as a typedef in C and a Type statement in Pascal. You
edit a control by using the LabVIEW Control Editor, in which you
can make detailed changes to the built-in controls. Add all the items
that you need in the data structure. Then you can save your new con-
trol with a custom name. An option in the Control Editor window is
Type Definition. If you select this item, the control can’t be recon-
figured from the panel of any VI in which it appears; it can only be
modified through the Control Editor. The beauty of this scheme is that
the control itself now defines a data type, and changing it in one place
(the Control Editor) automatically changes it in every location where
it is used. If you don’t use a Type Definition and you decide to change
one item in the control, then you must manually edit every VI in which
that control appears. Otherwise, broken wires would result through-
out the hierarchy—a major source of suffering. As you can see, Type
Definitions can save you much work and raise the quality of the final
product.
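Continuing the C analogy the text draws, a Type Definition behaves like a typedef'd struct kept in a single header: edit it in one place, and everything that includes it picks up the change when rebuilt. A hypothetical header sketch:

    /* channel.h -- hypothetical header; edit the data structure here and every
       file that includes it updates, just as every VI that uses a Type
       Definition updates when the control is edited in the Control Editor. */
    typedef struct {
        char   name[32];
        double value;
        int    active;
    } Channel;

The difference is that C makes you recompile by hand, while LabVIEW propagates the edit through the hierarchy for you instead of leaving broken wires.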
Arrays
Any time you have a series of numbers or any other data type that
needs to be handled as a unit, the numbers probably belong in an
array. Most arrays are one-dimensional (1D, a column or vector), a few
are 2D (a matrix), and some specialized data sets require 3D or greater.
LabVIEW permits you to create arrays of numerics, strings, clusters,
and any other data type (except for arrays of arrays). The one require-
ment is that all the elements be of the same type. Arrays are often cre-
ated by loops, as shown in Figure 2.10. For Loops are the best because
they preallocate the required memory when they start. While Loops
can’t; LabVIEW has no way of knowing how many times a While Loop
will cycle, so the memory manager will have to be called occasionally,
slowing execution somewhat.
Figure 2.10 Creating arrays by using a For Loop. This is an efficient way to build arrays with many elements. A While Loop would do the same thing, but without preallocating memory.
For Loops can also autoindex (execute sequentially for each element)
through arrays. This is a powerful technique in test systems in which
you want to increment through a list of test parameters. Pass the list
as an array, and let your For Loop execute once for each value, or set
of values. Refer to Chapter 4, “Data Types,” for more information on
arrays and clusters.
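In text-language terms (our sketch, not LabVIEW code), auto-indexing simply means the loop count and the element access are handled for you:

    #include <stdio.h>

    int main(void)
    {
        /* A list of test parameters, analogous to an array wired to a For Loop. */
        double test_levels[] = { 0.5, 1.0, 2.0, 5.0 };
        int n = (int)(sizeof test_levels / sizeof test_levels[0]);

        /* One pass per element, with the count taken from the array itself. */
        for (int i = 0; i < n; i++)
            printf("iteration %d: apply level %g\n", i, test_levels[i]);

        return 0;
    }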
Debugging
Debugging is a part of life for all programmers (hopefully not too big
a part). As you develop your application, squash each bug as it comes
up. Never let them slip by, because it’s too easy to forget about them
later, or the situation that caused the bug may not come up again. Be
sure to check each subVI with valid and invalid inputs. One thing you
can always count on is that users will enter wrong values. LabVIEW
has some tools and techniques that can speed up the debugging pro-
cess. Read over the chapter of the LabVIEW manual on executing and
debugging VIs. It has a whole list of debugging techniques. Here are a
few of the important ones.
Peeking at data
It is often helpful to look inside a VI at intermediate values that are
not otherwise displayed. Even if no problems are evident, this can be
valuable—one of those lessons that you can only learn the hard way, by
having a VI that appears to be working, but only for the particular test
cases that you have been using. During the prototyping stage, connect
extra, temporary indicators at various points around the diagram as a
sanity check. Look at such things as the iteration counter in a loop or
the intermediate results of a complicated string search. If the values
you see don’t make sense, find out why. Otherwise, the problem will
come back to haunt you.
While a VI is executing, you can click on any wire with the probe
tool, or pop up on any wire and select Probe from the pop-up menu.
A windoid (little window) appears that displays the value flowing
through that wire. It works for all data types—even clusters and
arrays—and you can even pop up on data in the probe window to
prowl through arrays. You can have as many probe windows as you
like, floating all over the screen, and the probes can be custom con-
trols to display the data just the way you’d like. If the VI executes too
fast, the values may become a blur; you may need to single-step the
VI in that case (see the following section). Probes are great because
you don’t have to create, wire, and then later delete all kinds of tem-
porary indicators. Use probes instead of execution trace when you
want to view data in real time. A Custom Probe is a special subVI
(Figure 2.11) you create that you can dynamically insert into a wire
to see what is going on. Because a custom probe is a subVI you can
put in any functionality you want. Right-click on the wire you want
to probe, and select Custom Probe >> New . . . to create a new
custom probe.
Figure 2.11 Custom probe and the probe’s block diagram. Custom
probes are special VIs you can use to look at live data in a wire
while the VI is executing. This custom probe is for a DBL data
type and lets you conditionally pause the diagram. You can create
custom probes with any functionality you want.
Single-stepping
When you single-step through a VI by using the pause and step buttons on the toolbar, execution is paused with some part of the diagram flashing. You then control execution with the following buttons:
Step Into allows the next node to execute. If the node is a subVI,
the subVI opens in pause mode, and its diagram is displayed with
part of its diagram flashing. This is a recursive type of debugging,
allowing you to burrow down through the hierarchy.
Step Over lets you execute subVIs without opening them. For
built-in functions, Step Over and Step Into are equivalent. This is our most-used button for stepping through a big diagram.
Step Out Of lets you finish quickly. You can use this button to
complete a loop (even if it has 10,000 iterations to go) or a Sequence
structure without taking the VI out of pause mode. If you hold down
the mouse button, you can control how many levels it steps out of.
Execution highlighting
Another way to see exactly what is happening in a diagram is to enable
execution highlighting. Click on the execution highlighting button (the light bulb on the toolbar) and notice that it changes to a lit bulb. Run the VI while viewing the diagram.
When execution starts, you can watch the flow of data. In the Debug-
ging Preferences, you can enable data bubbles to animate the flow. Each
time the output of a node generates a value, that value is displayed
in a little box if you select the autoprobe feature in the Debugging
Preferences. Execution highlighting is most useful in conjunction with
single-stepping: You can carefully compare your expectations of how
the VI works against what actually occurs. A favorite use for execution
highlighting is to find an infinite loop or a loop that doesn’t terminate
after the expected number of iterations.
While Loops are especially prone to execute forever. All you have to
do is to accidentally invert the boolean condition that terminates the
loop, and you’re stuck. If this happens in a subVI, the calling VI may
be in a nonresponsive state. When you turn on execution highlighting
or single-stepping, any subVI that is running will have a green arrow
embedded in its icon.
Sometimes a VI tries to run forever even though it contains no loop
structures. One cause may be a subVI calling a Code Interface node
(CIN) that contains an infinite loop. Reading from a serial port using
the older Serial Port Read VI is another way to get stuck because that
operation has no time-out feature. If no characters are present at the
port, the VI will wait forever. (The solution to that problem is to call
Bytes at Serial Port to see how many characters to read, then read
that many. If nothing shows up after a period of time, you can quit with-
out attempting to read. See Chapter 10, “Instrument Driver Basics,” for
more information.) That is why you must always design low-level I/O
routines and drivers with time-outs. In any case, execution highlight-
ing will let you know which node is “hung.”
Setting breakpoints
A breakpoint is a marker you set in a computer program to pause
execution when the program reaches it. Breakpoints can be set on
any subVI, node, or wire in the hierarchy by clicking on them with the Breakpoint tool from the Tools palette.
Programming by Plagiarizing
A secret weapon for LabVIEW programming is the use of available
example VIs as starting points. Many of your problems have already
been solved by someone else, but the trick is to get hold of that code.
The examples and driver libraries that come with LabVIEW are a
gold mine, especially for the beginner. Start prowling these directories,
looking for tidbits that may be useful. Install LabVIEW on your home
computer. Open up every example VI and figure out how it works. If
you have access to a local LabVIEW user group, be sure to make some
contacts and discuss your more difficult problems. The online world is
a similarly valuable resource. We simply could not do our jobs any more
without these resources.
The simplest VIs don’t require any fancy structures or programming
tricks. Instead, you just plop down some built-in functions, wire them
up, and that’s it. This book contains many programming constructs
that you will see over and over. The examples, utilities, and drivers
can often be linked to form a usable application in no time at all. Data
acquisition using plug-in multifunction boards is one area where you
rarely have to write your own code. Using the examples and utilities
pays off for file I/O as well. We mention this because everyone writes
her or his own file handlers for the first application. It’s rarely neces-
sary, because 90 percent of your needs are probably met by the file
utilities that are already on your hard disk: Write to Text File and
Read From Text File. The other 10 percent you can get by modifying
an example or utility to meet your requirements. Little effort on your
part and lots of bang for the buck for the customer. That’s the way it’s
supposed to work.
Bibliography
AN 114, Using LabVIEW to Create Multithreaded VIs for Maximum Performance and
Reliability, www.ni.com, National Instruments Corporation, 11500 N. Mopac Express-
way, Austin, Tex., 2000.
AN 199, LabVIEW and Hyper-Threading, www.ni.com, National Instruments Corpora-
tion, 11500 N. Mopac Expressway, Austin, Tex., 2004.
Brunzie, Ted J.: “Aging Gracefully: Writing Software that Takes Changes in Stride,”
LabVIEW Technical Resource, vol. 3, no. 4, Fall 1995.
Fowler, Gregg: “Interactive Architectures Revisited,” LabVIEW Technical Resource, vol. 4,
no. 2, Spring 1996.
Gruggett, Lynda: “Getting Your Priorities Straight,” LabVIEW Technical Resource, vol. 1,
no. 2, Summer 1993. (Back issues are available from LTR Publishing.)
“Inside LabVIEW,” NIWeek 2000, Advanced Track 1C. National Instruments Corpora-
tion, 11500 N. Mopac Expressway, Austin, Tex., 2000.
Johnson, Gary W. (Ed.): LabVIEW Power Programming, McGraw-Hill, New York, 1998.
Ritter, David: LabVIEW GUI, McGraw-Hill, New York, 2001.
Chapter 3
Controlling Program Flow
Sequences
The simplest way to force the order of execution is to use a Sequence
structure as in the upper example of Figure 3.1. Data from one frame
is passed to succeeding frames through Sequence local variables,
which you create through a pop-up menu on the border of the Sequence
structure. In a sense, this method avoids the use of (and advantages of)
dataflow programming. You should try to avoid the overuse of Sequence
structures. LabVIEW has a great deal of inherent parallelism, where
GET A and GET B could be processed simultaneously. Using a sequence
guarantees the order of execution but prohibits parallel operations. For
instance, asynchronous tasks that use I/O devices (such as GPIB and serial instruments) lose their chance to run in parallel.
Figure 3.1 In the upper example, Sequence structures force LabVIEW to not use dataflow. Do this only when you have a good reason. The lower example shows the preferred method.
Data Dependency
A fundamental concept of dataflow programming is data dependency,
which says that a given node can’t execute until all its inputs are avail-
able. In fact, you can usually enforce the order of execution through dataflow alone, without any Sequence structure.
Let’s look at an example that is very easy to do with a sequence, but
is better done without. Consider Figure 3.2, which shows four possible
solutions to the problem where you need to open a file, read from it,
and then close it.
Solution A uses a stacked sequence. Almost everybody does it this way
the first time. Its main disadvantage is that you have to flip through
the subdiagrams to see what’s going on. Now, let’s do it with dataflow,
where all the functions are linked by wires in some logical order. A
problem arises in solution B that may not be obvious, especially in a
more complicated program: Will the file be read before it is closed?
Which function executes first? Don’t assume top-to-bottom or left-to-
right execution when no data dependency exists! Make sure that the
sequence of events is explicitly defined when necessary. A solution is to
create artificial data dependency between the Read File and the
Close File functions, as shown in example C. We just connected one of
the outputs of the Read File function (any output will do) to the border
of the Sequence structure enclosing the Close File. That single-frame
sequence could also have been a Case structure or a While Loop—no
matter: It’s just a container for the next event. The advantage of this
style of programming is clarity: The entire program is visible at first
glance. Once you get good at it, you can write many of your programs
in this way.
Looping
Most of your VIs will contain one or more of the two available loop
structures, the For Loop and the While Loop. Besides the obvious
use—doing an operation many times—there are many nonobvious
ways to use a loop (particularly the While Loop) that are helpful to
know about. We'll start with some details about looping that are often
overlooked.
Figure 3.3 A cluster containing error information is passed through all the
important subVIs in this program that use the DAQ library. The While Loop
stops if an error is detected, and the user ultimately sees the source of the
error displayed by the Simple Error Handler VI. Note the clean appearance of
this style of programming.
While Loops
The While Loop is one of the most versatile structures in LabVIEW.
With it, you can iterate an unlimited number of times and then sud-
denly quit when the boolean conditional terminal becomes True. You
can also pop up on the conditional terminal and select Stop If False. If
the condition for stopping is always satisfied (for instance, Stop If False
wired to a False boolean constant), the loop executes exactly one time. If
you put uninitialized shift registers on one of these one-trip loops and
then construct your entire subVI inside it, the shift registers become
a memory element between calls to the VI. We call this type of VI a
functional global. You can retain all sorts of status information in
this way, such as knowing how long it’s been since this subVI was last
called (see the section on shift registers that follows).
Pretest While Loop. What if you need a While Loop that does not exe-
cute at all if some condition is False? We call this a pretest While Loop,
and it’s really easy to do (Figure 3.4). Just use a Case structure to con-
tain the code that you might not want to execute.
Figure 3.4 A pretest While Loop. If the pretest condition
is False, the False frame of the Case structure is executed,
skipping the actual work to be done that resides in the True
case.
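If you think in C, a LabVIEW While Loop is a post-test loop like C's do/while: the body always runs at least once. The Case-structure trick above gives you the equivalent of a pretest while. A sketch of ours, with a made-up condition:

    #include <stdbool.h>
    #include <stdio.h>

    int main(void)
    {
        bool keep_going = false;   /* the pretest condition; False in this example */

        /* LabVIEW's While Loop: post-test, so the body executes at least once. */
        do {
            printf("post-test body ran once\n");
        } while (keep_going);

        /* Pretest behavior: the body may not execute at all, which is what the
           Case structure inside the loop accomplishes in Figure 3.4. */
        while (keep_going) {
            printf("this never prints\n");
        }
        return 0;
    }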
For Loops
Use the For Loop when you definitely know how many times a subdi-
agram needs to be executed. Examples are the building and processing
of arrays and the repetition of an operation a fixed number of times.
Most of the time, you process arrays with a For Loop because LabVIEW
already knows how many elements there are, and the auto-indexing
feature takes care of the iteration count for you automatically: All you
have to do is to wire the array to the loop, and the number of itera-
tions (count) will be equal to the number of elements in the array.
Figure 3.5 The left example may run an extra cycle after the RUN
switch is turned off. Forcing the switch to be evaluated as the last item
(right) guarantees that the loop will exit promptly.
Figure 3.6 The rule of For Loop limits: The smallest count always wins,
whether it's the N terminal or one of several arrays.
But what happens when you hook up more than one array to the For
Loop, each with a different number of elements? What if the count
terminal is also wired, but to yet a different number? Figure 3.6
should help clear up some of these questions. Rule: The smaller count
always wins. If an empty array is hooked up to a For Loop, that loop will
never execute.
Also, note that there is no way to abort a For Loop. In most program-
ming languages, there is a GOTO or an EXIT command that can force
the program to jump out of the loop. Such a mechanism was never
included in LabVIEW because it destroys the dataflow continuity. Sup-
pose you just bailed out of a For Loop from inside a nested Case struc-
ture. To what values should the outputs of the Case structure, let alone
the loop, be set? Every situation would have a different answer; it is
wiser to simply enforce good programming habits. If you need to escape
from a For Loop, use a While Loop instead! By the way, try popping up
(see your tutorial manual for instructions on popping up on your plat-
form) on the border of a For Loop, and use the Replace operation; you
can instantly change it to a While Loop, with no rewiring. The reverse
operation also works. You can even swap between Sequence and Case
structures, and LabVIEW preserves the frame numbers. Loops and
other structures can be removed entirely, leaving their contents wired
in place so far as possible, by the same process (for example, try the
Remove For Loop operation). See the LabVIEW user’s manual for these
and other cool editing tricks that can save you time.
Shift registers
Shift registers are special local variables or memory elements avail-
able in For Loops and While Loops that transfer values from the com-
pletion of one iteration to the beginning of the next. You create them by
popping up on the border of a loop. When an iteration of the loop com-
pletes, a value is written to the shift register—call this the nth value.
Figure 3.8 Weeding out empty strings with a shift register. A similar
program might be used to select special numeric values. Notice that we
had to use Build Array inside a loop, a relatively slow construct, but also
unavoidable.
(Figure 3.9: the previous setpoint is kept in a shift register and compared with the current value; a Case structure writes the setpoint to the controller only when it has changed, or upon loading.)
When a value changes or an elapsed time exceeds a desired value, you could use this to trigger a periodic function such
as data logging or watchdog timing. This technique is also used in the
PID (proportional integral derivative) Toolkit control blocks whose
algorithms are time-dependent.
Global and local variables
Globals
Another, very important use for uninitialized shift registers occurs
when you create global variable VIs, also called Functional
Globals. In LabVIEW, a variable is a wire connecting two objects on a
diagram. Since it exists only on one diagram, it is by definition a local
variable. By using an uninitialized shift register in a subVI, two or
more calling VIs can share information by reading from and writing
to the shift register in the subVI. This is a kind of global variable, and
it’s a very powerful notion. Figure 3.11 shows what the basic model
looks like.
If Set Value is True (write mode), the input value is loaded into
the shift register and copied to the output. If Set Value is False (read
mode), the old value is read out and recycled in the shift register. The
Valid indicator tells you that something has been written; it would be
a bad idea to read from this global and get nothing when you expect a
number. Important note: For each global variable VI, you must create a
distinct VI with a unique name.
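A C static local variable captures the flavor of this construct. The sketch below is ours, not from the book; it mimics the Set Value, Valid, and read/write behavior just described.

    #include <stdbool.h>
    #include <stdio.h>

    /* The statics persist between calls, like an uninitialized shift register. */
    static double numeric_global(bool set_value, double input, bool *valid)
    {
        static bool   written = false;
        static double stored  = 0.0;

        if (set_value) {          /* write mode: load the input and pass it through */
            stored  = input;
            written = true;
        }                         /* read mode: just return the old value */
        if (valid != NULL)
            *valid = written;
        return stored;
    }

    int main(void)
    {
        bool valid = false;
        numeric_global(true, 42.0, &valid);             /* write */
        double x = numeric_global(false, 0.0, &valid);  /* read it back */
        printf("%f (valid = %d)\n", x, (int)valid);
        return 0;
    }

As with a VI-based global, each distinct global in this C analogy needs its own function with its own name; the statics belong to that one function.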
There are several really nice features of global variables. First, they
can be read or written any time, any place, and all callers will access
the same copy of the data. There is no data duplication. This includes
multiple copies on the same diagram and, of course, communications
between top-level VIs. This permits asynchronous tasks to share infor-
mation. For instance, you could have one top-level VI that just scans
your input hardware to collect readings at some rate. It writes the data
to a global array. Independently, you could have another top-level VI
that displays those readings from the global at another rate, and yet
another VI that stores the values. This is like having a global database
or a client-server relationship among VIs.
Figure 3.11 A global variable VI that stores a single numeric value. You could
easily change the input and output data type to an array, a cluster, or any-
thing else.
Figure 3.12 A time interval generator based on uninitialized shift registers. Call this
subVI with one of the Event booleans connected to a Case structure that contains a func-
tion you wish to perform periodically. For instance, you could place a strip chart or file
storage function in the True frame of the Case structure.
Because these global variables are VIs, they are more than just mem-
ory locations. They can perform processing on the data, such as filtering
or selection, and they can keep track of time as the Interval Timer
VI does in Figure 3.12. Each channel of this interval timer compares
the present time with the time stored in the shift register the last time
when the channel’s Event output was True. When the Iteration input
is equal to zero, all the timers are synchronized and are forced to trigger.
This technique of initialization is another common trick. It’s convenient
because you can wire the Iteration input to the iteration terminal, in
the calling VI, which then initializes the subVI on the first iteration. We
use this timer in many applications throughout this book.
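The per-channel logic of the Interval Timer boils down to "trigger when enough time has elapsed since the last trigger, and resynchronize on iteration zero." A rough single-channel C sketch of ours:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    /* Returns true when 'interval' seconds have elapsed since the last trigger.
       An iteration of 0 forces a trigger, resynchronizing the timer. */
    static bool interval_event(double interval, long iteration)
    {
        static double last = 0.0;            /* plays the role of the shift register */
        double now = (double)time(NULL);     /* one-second resolution is fine here */

        if (iteration == 0 || now - last >= interval) {
            last = now;
            return true;
        }
        return false;
    }

    int main(void)
    {
        for (long i = 0; i < 5; i++)
            if (interval_event(2.0, i))
                printf("iteration %ld: do the periodic work here\n", i);
        return 0;
    }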
As you can see, functional globals can store multiple elements of dif-
ferent types. Just add more shift registers and input/output terminals.
By using clusters or arrays, the storage capacity is virtually unlimited.
LabVIEW has built-in global variables as well (discussed in the next
section). They are faster and more efficient than functional global vari-
ables and should be used preferentially. However, you can’t embed any
intelligence in them since they have no diagram; they are merely data
storage devices.
Figure 3.13 Race conditions are a hazard associated with all global
variables. The global gets written in two places. Which value will it
contain when it's read?
If you read a large array from a global variable in several places, you end up making many copies of the data. This wastes memory and
adds execution overhead. A solution is to create a subVI that encapsu-
lates the global, providing whatever access the rest of your program
requires. It might have single-element inputs and outputs, addressing
the array by element number. In this way, the subVI has the only direct
access to the global, guaranteeing that there is only one copy of the
data. This implementation is realized automatically when you create
functional globals built from shift registers.
Global variables, while handy, can quickly become a programmer’s
nightmare because they hide the flow of data. For instance, you could
write a LabVIEW program where there are several subVIs sitting on a
diagram with no wires interconnecting them and no flow control struc-
tures. Global variables make this possible: The subVIs all run until
a global boolean is set to False, and all the data is passed among the
subVIs in global variables. The problem is that nobody can understand
what is happening since all data transfers are hidden. Similar things
happen in regular programming languages where most of the data is
passed in global variables rather than being part of the subroutine
calls. This data hiding is not only confusing, but also dangerous. All you
have to do is access the wrong item in a global at the wrong time, and
things will go nuts. How do you troubleshoot your program when you
can’t even figure out where the data is coming from or going to?
The answer is another rule. Rule: Use global variables only where
there is no other dataflow alternative. Using global variables to syn-
chronize or exchange information between parallel loops or top-level
VIs is perfectly reasonable if done in moderation. Using globals to
carry data from one side of a diagram to the other “because the wires
would be too long” is asking for trouble. You are in effect making your
data accessible to the entire LabVIEW hierarchy. One more helpful tip
with any global variable: Use the Visible Items . . . Label pop-up item
(whether it’s a built-in or VI-based global) to display its name. When
there are many global variables in a program, it’s difficult to keep track
of which is which, so labeling helps. Finally, if you need to locate all
instances of a global variable, you can use the Find command from the
Edit menu, or you can pop up on the global variable and choose Find:
Global References.
Local variables
Another way to manipulate data within the scope of a LabVIEW dia-
gram is with local variables. A local variable allows you to read data
from or write data to controls and indicators without directly wiring to
the usual control or indicator terminal. This means you have unlimited
read-write access from multiple locations on the diagram. To create a
local variable, select a local variable from the Structures palette and
drop it on the diagram. Pop up on the local variable node and select the
item you wish to access. The list contains the names of every control
and indicator on the front panel. Then choose whether you want to read
or write data. The local variable behaves exactly the same as the con-
trol or indicator’s terminal, other than the fact that you are free to read
or write. Here are some important facts about local variables.
■ Local variables act only on the controls and indicators that reside on
the same diagram. You can’t use a local variable to access a control
that resides in another VI. Use global variables, or better, regular
wired connections to subVIs to transfer data outside of the current
diagram.
■ You can have as many local variables as you want for each control
or indicator. Note how confusing this can become: Imagine your con-
trols changing state mysteriously because you accidentally selected
the wrong item in one or more local variables. Danger!
■ As with global variables, you should use local variables only when
there is no other reasonable dataflow alternative. They bypass
the explicit flow of data, obscuring the relationships between data
sources (controls) and data sinks (indicators).
■ Each instance of a local variable requires an additional copy of the
associated data. This can be significant for arrays and other data
types that contain large amounts of data. This is another reason that
it’s better to use wires than local variables whenever possible.
There are three basic uses for local variables: control initialization,
control adjustment or interaction, and temporary storage.
Figure 3.14 demonstrates control initialization—a simple, safe, and
common use for a local variable. When you start up a top-level VI, it
is important that the controls be preset to their required states. In
this example, a boolean control opens and closes a valve via a digital output.
(Figures 3.14 and 3.15: the Valve initialization example, and a master/slave loop pair in which a local WRITE variable turns the shared On/Off Switch back off after the loops stop.)
Managing controls that interact is another cool use for local variables.
But a word of caution is in order regarding race conditions. It is very
easy to write a program that acts in an unpredictable or undesirable
manner because there is more than one source for the data displayed
in a control or an indicator. What you must do is to explicitly define
the order of execution in such a way that the action of a local variable
cannot interfere with other data sources, whether they are user inputs,
control terminals, or other local variables on the same diagram. It’s dif-
ficult to give you a more precise description of the potential problems
because there are so many situations. Just think carefully before using
local variables, and always test your program thoroughly.
To show you how easy it is to create unnerving activity with local
variables, another apparently simple example of interactive controls is
shown in Figure 3.16. We have two numeric slider controls, for Poten-
tial (volts) and Current (amperes), and we want to limit the product
of the two (Power, in watts) to a value that our power supply can han-
dle. In this solution, the product is compared with a maximum power
limit, and if it’s too high, a local variable forces the Potential control
to an appropriate value. (We could have limited the Current control
as easily.)
This program works, but it has one quirk: If the user sets the Current
way up and then drags and holds the Potential slider too high, the local
variable keeps resetting the Potential slider to a valid value. However,
since the user keeps holding it at the illegal value, the slider oscillates
between the legal and illegal values as fast as the loop can run. In the
section on events we’ll show you an event-driven approach.
Figure 3.16 Limiting the product of two numeric controls by using local variables. But it acts funny
when the user drags and holds the Potential slider to a high value.
When you rely on global variables and state memory, it is easy for subtle bugs to crop up. Here's a brief list of common mistakes associated with global variables and state memory:
■ Accidentally writing data to the wrong global variable. It’s easy to do;
just select the wrong item on one of LabVIEW’s built-in globals, and
you’re in trouble.
■ Forgetting to initialize shift-register-based memory. Most of the
examples in this book that use uninitialized shift registers have a
Case structure inside the loop that writes appropriate data into the
shift register at initialization time. If you expect a shift register to be
empty each time the program starts, you must initialize it as such. It
will automatically be empty when the VI is loaded, but will no longer
be empty after the first run.
■ Calling a subVI with state memory from multiple locations without
making the subVI reentrant when required. You need to use reen-
trancy whenever you want the independent calls not to share data,
for instance, when computing a running average. Reentrant execu-
tion is selected from the VI Properties menu.
■ Conversely, setting up a subVI with state memory as reentrant when
you do want multiple calls to share data. Remember that calls to
reentrant VIs from different locations don’t share data. For instance,
a file management subVI that creates a data file in one location and
then writes data to the file in another location might keep the file
path in an uninitialized shift register. Such a subVI won’t work if it
is made reentrant.
■ Creating race conditions. You must be absolutely certain that every
global variable is accessed in the proper order by its various call-
ing VIs. The most common case occurs when you attempt to write
and read a global variable on the same diagram without forcing the
execution sequence. Race conditions are absolutely the most difficult
bugs to find and are the main reason you should avoid using globals
haphazardly.
■ Try to avoid situations where a global variable is written in more
than one location in the hierarchy. It may be hard to figure out which
location supplied the latest value.
How do you debug these global variables? The first thing you must
do is to locate all the global variable’s callers. Open the global variable’s
panel and select This VI’s Callers from the Browse menu. Note every
location where the global is accessed, then audit the list and see if it
makes sense. Alternatively, you can use the Find command in the Proj-
ect menu. Keep the panel of the global open while you run the main VI.
Observe changes in the displayed data if you can. You may want to
single-step one or more calling VIs to more carefully observe the data.
One trick is to add a new string control called info to the global vari-
able. At every location where the global is called to read or write data,
add another call that writes a message to the info string. It might
say, “SubVI abc writing xyz array from inner Case structure.” You can
also wire the Call Chain function (in the Advanced function palette) to
the info string. (Actually, it has to be a string array in this case.) Call
Chain returns a string array containing an ordered list of the VI call-
ing chain, from top-level VI on down. Either of these methods makes it
abundantly clear who is accessing what.
Events
Asynchronous event-driven applications are fundamentally different
from data-flow-driven applications. This conflict between data-flow-
driven and event-driven architecture has made writing user-interface
event-driven applications in LabVIEW difficult without deviating from
dataflow. In the past you had to monitor front panel controls using poll-
ing. Figure 3.9 is an example of how polling is used to monitor changes
in a front panel control. The value of the numeric control is saved in a
shift register and compared with the current value on each iteration of
the loop. This was the typical way to poll the user interface. There were
other methods, some more complex than others, to save and compare
control values; but when you used polling, you always risked missing
events. Now with the introduction of the Event structure in Lab-
VIEW 6.1, we finally have a mechanism that allows us to respond to
asynchronous user-interface events.
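The polling idea of Figure 3.9 looks like this in a text language. This is our sketch; read_setpoint and handle_change are stand-ins for reading the control and acting on a change.

    #include <stdio.h>

    /* Stand-in for reading the front panel control; here it just drifts upward. */
    static double read_setpoint(void) { static double sp = 0.0; return sp += 0.5; }

    /* Stand-in for whatever should happen when the value changes. */
    static void handle_change(double sp) { printf("setpoint changed to %g\n", sp); }

    int main(void)
    {
        double previous = read_setpoint();      /* the value held in the shift register */

        for (int i = 0; i < 10; i++) {          /* the polling While Loop */
            double current = read_setpoint();
            if (current != previous) {          /* did it change since the last iteration? */
                handle_change(current);
                previous = current;
            }
            /* sleep for the poll interval here; changes between polls can be missed */
        }
        return 0;
    }

The Event structure does away with both the busy polling and the risk of missing a change.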
Figure 3.18 Register for the Value Change event from the Slider
control. This event case will fire every time the slider changes
value.
By default, the Event structure locks the front panel until the event case finishes executing. This locking action gives you a chance to swap out the user interface
or whatever else you need to do to handle the event. The user will not
notice anything as long as you process the event and quickly exit the
event case. However, if you place code in your event case that takes a
while to execute, or if your Event structure is buried in a state machine,
the front panel of your VI will be totally unresponsive. Some practical
guidelines are as follows:
■ Use only one Event structure per block diagram. Let a single Event
structure provide a focal point for all user-interface code.
■ Do only minimal processing inside an Event structure. Lengthy pro-
cessing inside the event case will lock up the user interface.
■ Use a separate While Loop containing only the Event structure. In
the section on design patterns we’ll show you a good parallel archi-
tecture for user-interface-driven applications.
You can combine events, Property nodes, and local variables to create
event-driven applications that are extremely responsive. Because the
Event structure consumes no CPU cycles while waiting for an event,
it makes for a very low-overhead way to enhance your user interface.
Figure 3.20 shows the application from Figure 3.16 rewritten using an
Figure 3.20 Limiting the product of two numeric controls using local variables inside
an Event structure. Interaction is smooth and easy compared to that in Figure 3.16.
Mechanical actions
The cleanest, easiest, and recommended way to use events and stay
within the dataflow concept is to put controls inside their event case.
This is especially important for boolean controls with a latching
mechanical action. To return to its default state, a boolean latch depends
on being read by the block diagram. If you place the latching boolean
inside its event case, the mechanical action will function properly. But
there’s a funny thing about Value Change events and latched booleans.
What happens when the user selects Edit >> Undo Data Change?
The event fires again, but the control value is not what you expected.
Figure 3.21 illustrates how to handle this unexpected event by using
a Case structure inside the event for any latched boolean. The True
case handles the normal value change, and the False case handles the
unexpected Undo.
Dynamic events
Dynamic events allow you to register and unregister for events while
your application is running. The Filter and Notify events shown so far
were statically linked at edit time to a specific control. As long as the VI
is running, changes to the control will fire a Value Change event. Usu-
ally this is the behavior you want. But there are some situations where
dynamic events are a better solution. In Figure 3.22 we used control
references to dynamically register controls for a Value Change event.
The Register For Events node receives an array of references
of all the controls on every page of a tab control (Figure 3.23). Dynami-
cally registering for this event can save a lot of development time in
high-channel-count applications. Note the data from the control is
returned as a variant. In our case all the controls were clusters with
the same data type, so it is simple to extract the data from the variant.
For clarity the actual controls were not placed in the Value Change
Figure 3.23 A reference to Tab Control is used to register every control on every
page of Tab Control for the Value Change event. This could easily be turned into a
reusable user-interface subVI.
event case, but good programming practice suggests you should put
them there.
Dynamically registering controls for a Value Change event only
touches the tip of the iceberg in what you can do with dynamic events.
You can create your own user-defined events and use the Event struc-
ture as a sophisticated notification and messaging system. You can also
use the Event structure for .NET events. Whatever you use the Event
structure for, try not to stray too far from dataflow programming. In
the section on design patterns we’ll go over some proven ways to use
the Event structure in your applications.
Design Patterns
Your initial objective is to decide on an overall architecture, or pro-
gramming strategy, which determines how you want your application
to work in the broadest sense. Here are some models, also called Lab-
VIEW Design Patterns, that represent the fundamental structure
of common LabVIEW applications. The benefits of using design patterns are that the architecture has already been proved to work and that, when you see someone else's application built on a pattern you've used before, you immediately recognize and understand it. The list of design
patterns continues to grow as the years go by, and they are all reliable
approaches. Choosing an appropriate architecture is a mix of analysis,
intuition, and experience. Study these architectures, memorize their
basic attributes, and see if your next application doesn’t go together a
little more smoothly.
The classic data acquisition application does four things, in order:
1. Acquire.
2. Analyze.
3. Display.
4. Store.
Figure 3.24 A Sequence containing frames for starting up, doing the work, and shutting down.
It is a classic for data acquisition programs. The While Loop runs until the operation is finished.
The Sequence inside contains the real work to be accomplished.
(The figure's three stages: initialization code—open files, initialize I/O, manage configuration info; a main loop that does the real work; and shutdown code—close files, turn off or clear I/O.)
Figure 3.25 Dataflow instead of Sequence structure enhances readability. It has the same functionality as Figure 3.24. Note that the connections between tasks (subVIs) are not optional: they force the order of execution.
The net result is a clearer program, explainable with just one page. This
concept is used by most advanced LabVIEW users, like you. The example
shown here is in every way a real, working LabVIEW program.
Figure 3.26 This example uses two independent loops: the upper one for the actual control and the
lower one to handle manual operation (shown simplified here). Thus the main control loop always
runs at a constant interval regardless of the state of the manual loop.
Keeping the global variable accesses out on the top-level diagram means that they are easily spotted. In this example, the Manual Control subVI reads set-
tings information from a global, changes that information, and then
writes it back to the global. We could have hidden the global operations
inside the subVI, but then you wouldn’t have a clue as to how that subVI
acts on the settings information, which is also used by the Ramp/Soak
Controller subVI. It also helps to show the name of the global variable
so that the reader knows where the data is being stored. We also want
to point out what an incredibly powerful, yet simple technique this is.
Go ahead: Implement parallel tasks in C or Pascal.
Client-server
The term client-server comes from the world of distributed comput-
ing where a central host has all the disk drives and shared resources
(the server), and a number of users out on the network access
those resources as required (the clients). Here is how it works in a
LabVIEW application: You write a server VI that is solely responsible
for, say, acquiring data from the hardware. Acquired data is prepared
and then written to one or more global variables that act as a data-
base. Then you design one or more client VIs that read the data in the
global variable(s), as shown in Figure 3.27. This is a very powerful con-
cept, one that you will see throughout this book and in many advanced
examples.
(In the figure, a server VI continuously writes data to a data global; Client VI #1 reads the data and saves it in a file, while Client VI #2 reads the data from the global variable and displays it.)
Figure 3.27 Client-server systems. Client VIs receive data from a server VI
through the use of a global variable. This permits time independence between
several VIs or loops.
A good use for this might be in process control (Chapter 18, “Process
Control Applications”) where the client tasks are such things as alarm
generation, real-time trending, historical trending to disk, and several
different operator-interface VIs. The beauty of the client-server concept
is that the clients can be added, modified, or run independently without disturbing the server or one another.
If you want one button to shut down everything, add a RUN boolean
control to a global variable and wire that into all the clients. You can
also set the priority of execution for each VI by two different means.
First, you can put a timer in each loop and make the high-priority
loops run more often. This is simple and foolproof and is generally the
recommended technique. Second, you can adjust the execution prior-
ity through the VI Properties dialog. Typically, the server would have
slightly higher priority to guarantee fresh data and good response to
the I/O systems while the clients would run at lower priority. Use exe-
cution priority with caution and a good dose of understanding about
how LabVIEW schedules VI execution. In a nutshell, high-priority VIs
cut in line ahead of low-priority VIs. If a high-priority VI needs service
too often, it may starve all the other VIs, and your whole hierarchy will
grind to a halt. You can read more about this topic in the LabVIEW
User’s Manual, in the chapter on performance issues. Other references
on this topic are listed in the Bibliography.
Race conditions are a risk with the client-server architecture
because global variables may be accessed without explicit ordering.
The most hazardous situation occurs when you have two or more loca-
tions where data is written to a global variable. Which location wrote
the latest data? Do you care? If so, you must enforce execution order by
some means. Another kind of race condition surrounds the issue of data
aging. For example, in Figure 3.27, the first client VI periodically stores
data in a file. How does that client know that fresh data is available?
Can it store the same data more than once, or can it miss new data?
This may be a problem. The best way to synchronize clients and serv-
ers is by using a global queue. A queue is like people waiting in line.
The server puts new data in the back of the queue (data is enqueued),
and the clients dequeue the data from the front at a later time. If the
queue is empty, the client must wait until there is new data. For a good
example of the queue technique, see the LabVIEW examples in the
directory examples/general/synchexm.llb/QueueExample.VI.
(In the figure, a master VI reads data from the data global and saves it in a file, while an autonomous Read Data server VI continuously writes data to the global; a Run global is used to stop the server.)
If an error occurs in the autonomous VI, you could have it generate a dialog box, but what happens after that? A ver-
satile solution is to have the autonomous VI set the Run global boolean
to False and wire the master VI in such a way that it is forced to stop
when the Run global becomes False. That way, any VI in the hierarchy
could potentially stop everything. Be careful. Don’t have the application
come to a screeching halt without telling the user why it just halted.
There is only a slight advantage to the autonomous VI solution com-
pared to placing multiple While Loops on the main diagram. In fact,
it’s an obscure performance benefit that you’ll notice only if you’re try-
ing to get the last ounce of performance out of your system. Here’s the
background: LabVIEW runs sections of code along with a copy of the
execution system in threads. This is called multithreading. The oper-
ating system manages when and where the threads run. Normally a
subVI uses the execution system of its calling VI, but in the VI Proper-
ties dialog (Figure 3.29), you can select a different execution system
and therefore ensure a different thread from the calling VI. Running
a subVI in a separate execution system increases the chances that it
will run in parallel with code in other execution systems on a multi-
processor system. Normally the disadvantage to placing a subVI in a
different execution system is that each call to the subVI will cause a
context switch as the operating system changes from one execution
system (thread) to another. That’s the background; now for the case of
the autonomous VI. If the autonomous VI is placed in a separate execu-
tion system, there will only be one context switch when it is originally
called, and then the autonomous VI will be merrily on its way in its
own execution system. As we said, the advantage is slight, but it’s there
if you need it.
State machines
There is a very powerful and versatile alternative to the Sequence
structure, called a state machine, as described in the advanced
LabVIEW training course. The general concept of a state machine orig-
inates in the world of digital (boolean) logic design where it is a formal
method of system design. In LabVIEW, a state machine uses a Case
structure wired to a counter that’s maintained in a shift register in a
While Loop. This technique allows you to jump around in the sequence
by manipulating the counter. For instance, any frame can jump directly
to an error-handling frame. This technique is widely used in drivers,
sequencers, and complex user interfaces and is also applicable to situa-
tions that require extensive error checking. Any time you have a chain
of events where one operation depends on the status of a previous oper-
ation, or where there are many modes of operation, a state machine is
a good way to do the job.
Figure 3.30 is an example of the basic structure of a state machine.
The state number is maintained as a numeric value in a shift register,
so any frame can jump to any other frame. Shift registers or local vari-
ables must also be used for any data that needs to be passed between
frames, such as the results of the subVI in this example. One of the con-
figuration tricks you will want to remember is to use frame 0 for errors.
That way, if you add a frame later on, the error frame number doesn’t
change; otherwise, you would have to edit every frame to update the
error frame’s new address.
Each frame that contains an activity may also check for errors
(or some other condition) to see where the program should go next.
In driver VIs, you commonly have to respond to a variety of error conditions.
Figure 3.30 A basic state machine. Any frame of the case can jump to
any other by manipulating the state selector shift register. Results are
passed from one frame to another in another shift register. Normal
execution proceeds from frame 1 through frame 2. Frame 0 or frame 2
can stop execution. This is an exceptionally versatile concept, well worth
studying.
(Figure: the front panel of a Ramp & Soak Controller VI, with settings—recipe, manual, run, pause, on line—and status displays such as the current step and minutes into the step.)
Great confusion can arise when you are developing a state machine.
Imagine how quickly the number of frames can accumulate in a
situation with many logical paths. Then imagine what happens when
one of those numeric constants that points to the next frame has the
wrong number. Or even worse, what if you have about 17 frames and
you need to add one in the middle? You may have to edit dozens of little
numeric constants. And just wait until you have to explain to a novice
user how it all works!
Thankfully, creative LabVIEW developers have come up with an easy
solution to these complex situations: Use enumerated constants
(also called enums) instead of numbers. Take a look at Figure 3.32,
showing an improved version of our previous state machine example.
You can see the enumerated constant, with three possible values (error,
get data, calculate), has replaced the more cryptic numeric constants.
The Case structure now tells you in English which frame you’re on, and
the enums more clearly state where the program will go next. Strings
are also useful as a self-documenting case selector.
Controlling Program Flow 97
A "get data"
B
Intermediate
results
Bad data "get data"
Initial state error string
Get Data.vi next
get data get data
state
calculate Keep running
Good data
Figure 3.32 (A) A recommended way to write state machines, using enumerated constants.
(B) Strings are also useful as self-documenting selectors.
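For readers who think in text-based code, here is a minimal C sketch of the
same pattern, assuming hypothetical get_data() and calculate() routines in
place of the subVIs in Figure 3.32; the enumerated state plays the role of
the shift register, and the switch plays the role of the Case structure.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical states, mirroring the enum constants in Figure 3.32 */
typedef enum { ST_ERROR, ST_GET_DATA, ST_CALCULATE } state_t;

static bool get_data(double *out)  { *out = 42.0; return true; } /* stand-in for Get Data.vi */
static double calculate(double in) { return in * 2.0; }          /* stand-in for the analysis step */

int main(void)
{
    state_t state = ST_GET_DATA;   /* initial state (the state shift register) */
    double data = 0.0;             /* intermediate results (second shift register) */
    bool running = true;

    while (running) {              /* the While Loop */
        switch (state) {           /* the Case structure */
        case ST_GET_DATA:
            state = get_data(&data) ? ST_CALCULATE : ST_ERROR;
            break;
        case ST_CALCULATE:
            printf("result = %g\n", calculate(data));
            running = false;       /* normal completion */
            break;
        case ST_ERROR:
        default:
            fprintf(stderr, "bad data\n");
            running = false;       /* the error frame also stops execution */
            break;
        }
    }
    return 0;
}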
Figure 3.34 LabVIEW state machine created with the State Diagram Editor Tool-
kit. It is easy to design and prototype, but you cannot modify the state machine
without the editor.
Each case of a queued message handler can jump to any other case,
but unlike for a state machine, the order of the execution is defined
by the list of commands (message queue or array) that you pass in.
In Figure 3.35 the message queue is held in a shift register. With
each iteration of the While Loop, a message at the front of the queue
is pulled off and the appropriate case executes. The loop terminates
when the queue is empty or an Exit command is reached. Note that
we’re also using a type-defined cluster as a Data structure. The Data
structure provides operating parameters for subVIs and a way to get
data in and out of the queued message
handler. The VI in Figure 3.35 would be used as a subVI in a larger
application.
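As a rough text-language analogy (not LabVIEW code), a queued message
handler amounts to pulling commands off a list and dispatching on each one.
The message names and the Data structure below are invented for illustration.

#include <stdio.h>
#include <string.h>

typedef struct {            /* stand-in for the type-defined Data cluster */
    double setpoint;
    int    initialized;
} Data;

int main(void)
{
    /* The message queue passed in by the caller, processed front to back */
    const char *queue[] = { "Init", "Set", "Measure", "Exit" };
    size_t n = sizeof queue / sizeof queue[0];
    Data data = { 0 };

    for (size_t i = 0; i < n; i++) {
        const char *msg = queue[i];
        if (strcmp(msg, "Init") == 0) {
            data.initialized = 1;
        } else if (strcmp(msg, "Set") == 0) {
            data.setpoint = 25.0;
        } else if (strcmp(msg, "Measure") == 0) {
            printf("measuring around %g\n", data.setpoint);
        } else if (strcmp(msg, "Exit") == 0) {
            break;              /* the loop also ends when the queue is empty */
        } else {
            fprintf(stderr, "unknown message: %s\n", msg);
        }
    }
    return 0;
}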
The queued message handler architecture works best in combination
with other design patterns. It’s tempting to create a queued message
handler with many atomic states, giving you fine-grained control over
your equipment, but this creates problems when one state depends on
another state which depends on another state, etc. Then the calling VI
has to know too much about the inner workings of the queued message
handler. A better solution is to use the queued message handler to call
intelligent state machines. Each intelligent state machine manages all
the specifics of a piece of equipment or of a process. Now the queued
message handler specifies actions, not procedures. Once you master
this, you are well on your way to building robust, reusable applications
that are driven by input parameters, not constrained by predefined
behavior.
Event-driven applications
Event-driven applications are almost trivially simple with the Event
structure. Our guidelines for using the Event structure are as follows:
■ Use only one Event structure per block diagram. Let a single Event
structure provide a focal point for all user-interface code.
■ Do only minimal processing inside an Event structure. Lengthy pro-
cessing inside the event case will lock up the user interface.
■ Use a separate While Loop containing only the Event structure.
The locking nature of the Event structure and the need for immedi-
ate attention to events dictate that you use two parallel loops. You can
use a single loop for both the Event structure and the rest of your code,
but eventually you’ll run up against a timing issue requiring you to
parallelize your application. The Producer/Consumer Design Pattern
template in LabVIEW provides a great place to start.
Figure 3.36 is a modified version of LabVIEW’s Producer/Consumer
Design Pattern. The consumer loop is very similar to the queued mes-
sage handler. Instead of passing in an array we’re using the queue
functions to pass in each item as it happens. A queue is a FIFO (first
in, first out). Messages are enqueued at one end and dequeued at the
other. For more on queues and other synchronization functions, look
at Chapter 6, “Synchronization.” Note that we’re giving our queue a
unique name based on the time of creation. Queues are referenced by
name, and all you need is the name of the queue to put items in it. If
we left the name empty, there’s a real good chance another nameless
queue would put items in ours with disastrous results. We’re also using
a string as the queue element, because they’re easy to use as case selec-
tors. After creation the first thing we do is to enqueue “Init,” telling
our consumer loop to run its initialization case. In the earlier design
patterns, initialization routines were outside the main loop. We still
initialize first, but keep our block diagram clean and simple. We can
also reinitialize without leaving our consumer loop.
The event structure is the heart of the producer loop. The event
case in Figure 3.36 is registered for user menu events. Select “Edit >>
Run-Time Menu…” to create a custom menu for your application
(Figure 3.37). As soon as the user makes a menu selection from your
running application, the item path is enqueued in the event case and
dequeued in the consumer loop. The full item path is a string with a
colon separator between items. Passing menu items as the full item
path eliminates errors we might have if two menus had an item with
the same name. When the user selects New from the File menu, we get
the string “File:New.” The consumer loop uses Match Pattern to split
Figure 3.36 Producer/Consumer Design Pattern modified for event-driven application. User menu
selections are placed in the queue and handled in the consumer loop. Match Pattern splits the
menu item path at the “:” (colon) separator. Nested cases handle the event actions. (Error handling
is omitted for clarity.)
Figure 3.37 Create a custom menu for your application and use the event-
driven Producer/Consumer Design Pattern to handle user selections.
the string at the first colon. We get the two strings File and New. If there
is not a colon in the string, Match Pattern returns the whole string
at its “before substring” terminal. Passing the full item path between
loops also allows us to modularize the consumer loop with a case hier-
archy based partly on menu structure. This event-driven design pat-
tern is powerful and very easy to design.
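The colon-splitting step has a direct C analogy using strchr(): everything
before the first colon is the menu, everything after it is the item, and the
whole path is returned when no colon is present. A minimal sketch, with
hypothetical buffer sizes:

#include <stdio.h>
#include <string.h>

/* Split "File:New" into menu = "File", item = "New".
 * If there is no colon, the whole path becomes the menu and item is empty,
 * mirroring Match Pattern's "before substring" behavior. */
static void split_item_path(const char *path, char *menu, char *item, size_t len)
{
    const char *colon = strchr(path, ':');
    if (colon) {
        snprintf(menu, len, "%.*s", (int)(colon - path), path);
        snprintf(item, len, "%s", colon + 1);
    } else {
        snprintf(menu, len, "%s", path);
        item[0] = '\0';
    }
}

int main(void)
{
    char menu[64], item[64];
    split_item_path("File:New", menu, item, sizeof menu);
    printf("menu=%s item=%s\n", menu, item);   /* menu=File item=New */
    return 0;
}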
Design patterns provide a starting point for successful program-
ming. There is no one and only design pattern; otherwise, we would
have shown you that first and omitted the rest. Pick the design pattern
that best fits your skill level and your application. There is no sense in
going for something complicated if all you need is to grab a few mea-
surements and write them to file.
Bibliography
AN 114, Using LabVIEW to Create Multithreaded VIs for Maximum Performance and
Reliability, www.ni.com, National Instruments Corporation, 11500 N. Mopac Express-
way, Austin, Tex., 2000.
AN 199, LabVIEW and Hyper-Threading, www.ni.com, National Instruments Corpora-
tion, 11500 N. Mopac Expressway, Austin, Tex., 2004.
Brunzie, Ted J.: “Aging Gracefully: Writing Software That Takes Changes in Stride,”
LabVIEW Technical Resource, vol. 3, no. 4, Fall 1995.
“Inside LabVIEW,” NIWeek 2000, Advanced Track 1C, National Instruments Corpora-
tion, 11500 N. Mopac Expressway, Austin, Tex., 2000.
Chapter 4
LabVIEW Data Types
Our next project is to consider all the major data types that LabVIEW
supports. The list consists of scalar (single-element) types such as
numerics and booleans and structured types (containing more than one
element) such as strings, arrays, and clusters. The LabVIEW Controls
palette is roughly organized around data types, gathering similar types
into subpalettes for easy access. There are, in fact, many ways to present
a given data type, and that’s the main reason that there are apparently
so many items in those palettes. For instance, a given numeric type can
be displayed as a simple number, a bar graph, a slider, or an analog
meter, or in a chart. But underneath is a well-defined representation
of the data that you, the programmer, and the machine must mutually
understand and agree upon as part of the programming process.
One thing that demonstrates that LabVIEW is a complete program-
ming language is its support for essentially all data types. Numbers can
be floating point or integer, with various degrees of precision. Booleans,
bytes, strings, and numerics can be combined freely into various struc-
tures, giving you total freedom to make the data type suit the problem.
Polymorphism is an important feature of LabVIEW that simplifies
this potentially complicated world of data types into something that
even the novice can manage without much study. Polymorphism is the
ability to adjust to input data of different types. Most built-in LabVIEW
functions are polymorphic. Ordinary virtual instruments (VIs) that
you write are not truly polymorphic—they can adapt between numeric
types, but not between unrelated types such as strings and numerics.
Most of the time, you can just wire from source to destination without
much worry since the functions adapt to the kind of data that you sup-
ply. How does LabVIEW know what to do? The key is object-oriented
programming, where polymorphism is but one of the novel concepts.
Numeric Types
Perhaps the most fundamental data type is the numeric, a scalar value
that may generally contain an integer or a real (floating-point) value.
LabVIEW explicitly handles all the possible integer and real repre-
sentations that are available on current 32-bit processors. LabVIEW 8
adds support for 64-bit integers. Figure 4.1 displays the terminals for
each representation, along with the meaning of each and the number
of bytes of memory occupied. LabVIEW floating-point types follow the
IEEE-754 standard, which has thankfully been adopted by all major
CPU and compiler designers.
The keys to choosing an appropriate numeric representation in
most situations are the required range and precision. In general,
the more bytes of memory occupied by a data type, the greater the
possible range of values. This factor is most important with integers;
among floating-point types, even single precision can handle values
up to 10^38, and it’s not too often that you need such a large number. An
unsigned integer has an upper range of 2^N − 1, where N is the number
of bits. Therefore, an unsigned byte ranges up to 255, an unsigned
integer (2 bytes) up to 65,535, and an unsigned long integer up to
4,294,967,295. Signed integers range up to 2^(N−1) − 1, or about one-half
of their unsigned counterparts.
If you attempt to enter a value that is too large into an integer con-
trol, it will be coerced to the maximum value. But if an integer math-
ematical process is going on, overflow or underflow can produce
erroneous results. For instance, if you do an unsigned byte addition of
255 + 1, you get 0, not 256—that’s overflow.
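The same wraparound happens in any language with fixed-width integers.
A quick C illustration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t a = 255;
    uint8_t b = (uint8_t)(a + 1);   /* overflow: wraps to 0, not 256 */
    uint8_t c = (uint8_t)(0 - 1);   /* underflow: wraps to 255 */
    printf("255 + 1 = %d, 0 - 1 = %d (as unsigned bytes)\n", b, c);
    return 0;
}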
Figure 4.1 Scalar numerics in LabVIEW cover the gamut of integer and real types.
The number of bytes of memory required for each type is shown.
Strings
Every programmer spends a lot of time putting together strings of
characters and taking them apart. Strings are useful for indicators
where you need to say something to the operator, and for communications
with instruments.
Building strings
Instrument drivers are the classic case study for string building. The
problem is to assemble a command for an instrument (usually GPIB)
based on several control settings. Figure 4.2 shows how string building
in a typical driver works.
Figure 4.2 String building in a driver VI. This one uses most of the major string-building
functions in a classic diagonal layout common to many drivers.
(see the LabVIEW online help for details) to use this function. The
percent sign tells it that the next few characters are a formatting
instruction. What is nice about this function is that not only does
it format the number in a predictable way, but also, it allows you
to tack on other characters before or after the value. This saves
space and gets rid of a lot of Concatenate Strings functions. In this
example, the format string %e translates a floating-point value in
exponential notation and adds a trailing comma.
4. A more powerful function is available to format multiple values:
Format Into String. You can pop up on this function and select
Edit Format String to obtain an interactive dialog box where you
build the otherwise cryptic C-style formatting commands. In this
case, it’s shown formatting an integer, a float, and a string into one
concatenated output with a variety of interspersed characters. Note
that you can also use a control or other string to determine the
format. In that case, there could be run-time errors in the format
string, so the function includes error I/O clusters. It is very handy
and very compact, similar to its C counterpart sprintf() (see the
sketch following this list).
5. The Append True/False String function uses a boolean control to
pick one of two choices, such as ON or OFF, which is then appended
to the string. This string-building process may continue as needed to
build an elaborate instrument command.
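As a point of reference for item 4, here is roughly what the equivalent
sprintf()-style call looks like in C; the command fields and format are
invented for illustration.

#include <stdio.h>

int main(void)
{
    int count = 18;
    double level = -1.37;
    const char *mode = "SHORT";
    char cmd[80];

    /* One call interleaves literal text with an integer, a float,
     * and a string, much like Format Into String on the diagram. */
    snprintf(cmd, sizeof cmd, "NUMBER %d;OFFSET %.2e;PULSE %s;", count, level, mode);
    printf("%s\n", cmd);   /* NUMBER 18;OFFSET -1.37e+00;PULSE SHORT; */
    return 0;
}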
Parsing strings
The other half of the instrument driver world involves interpreting
response messages. The message may contain all sorts of headers,
delimiters, flags, and who knows what, plus a few numbers or important
letters that you actually want. Breaking down such a string is known
as parsing. It’s a classic exercise in computer science and linguistics
as well. Remember how challenging it was to study our own language
back in fifth grade, parsing sentences into nouns, verbs, and all that?
We thought that was tough; then we tackled the reply messages that
some instrument manufacturers come up with! Figure 4.3 comes from
one of the easier instruments, the Tektronix 370A driver. A typical
response message would look like this:
STPGEN NUMBER:18;PULSE:SHORT;OFFSET:-1.37;INVERT:OFF;MULT:ON;
Figure 4.3 Using string functions to parse the response message from a GPIB instrument.
This is typical of many instruments you may actually encounter.
many special characters you can type into the regular expression
input that controls Match Pattern. One other thing we did is to pass
the incoming string through the To Upper Case function. Other-
wise, the pattern keys would have to contain both upper- and lower-
case letters. The output from each pattern match is known as after
substring. It contains the remainder of the original string immedi-
ately following the pattern, assuming that the pattern was found. If
there is any chance that your pattern might not be found, test the
offset past match output; if it’s less than zero, there was no match,
and you can handle things at that point with a Case structure.
2. Each of the after substrings in this example is then passed to another
level of parsing. The first one uses the Decimal String To Number
function, one of several specialized number extractors; others handle
octal, hexadecimal, and fraction/scientific notation. These are very
robust functions. If the incoming string starts with a valid numeric
character, the expected value is returned. The only problem you can
run into occurs in cases where several values are run together such
as [123.E3-.567.89]. Then you need to use the Scan From String
function to break it down, provided that the format is fixed. That is,
you know exactly where to split it up. You could also use Match Pat-
tern again if there are any other known, embedded flags, even if the
flag is only a space character.
3. After locating the keyword PULSE, we expect one of three possible
strings: OFF, SHORT, or LONG. Match First String searches a
string array containing these words and returns the index of the
one that matches (0, 1, or 2). The index is wired to a ring indicator
named Pulse that displays the status. (A text-language sketch of these
parsing steps follows this list.)
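Here is a hedged C sketch of those same parsing steps applied to the
response quoted above, using strstr() in place of Match Pattern and
sscanf() and strncmp() in place of the number-extraction and Match First
String steps.

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *reply =
        "STPGEN NUMBER:18;PULSE:SHORT;OFFSET:-1.37;INVERT:OFF;MULT:ON;";
    int number = 0;
    double offset = 0.0;
    int pulse = -1;                           /* index into the choices, like a ring */
    const char *choices[] = { "OFF", "SHORT", "LONG" };

    const char *p = strstr(reply, "NUMBER:");
    if (p) sscanf(p + 7, "%d", &number);      /* "after substring" holds the value */

    p = strstr(reply, "OFFSET:");
    if (p) sscanf(p + 7, "%lf", &offset);

    p = strstr(reply, "PULSE:");
    if (p) {
        for (int i = 0; i < 3; i++)           /* Match First String analogy */
            if (strncmp(p + 6, choices[i], strlen(choices[i])) == 0)
                pulse = i;
    }
    printf("NUMBER=%d  OFFSET=%g  PULSE=%d (%s)\n",
           number, offset, pulse, pulse >= 0 ? choices[pulse] : "?");
    return 0;
}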
Figure 4.4 The Scan From String function is great for parsing simple strings.
Figure 4.5 Here are some of the ways you can convert to and from strings that contain unprintable characters.
All these functions are found in the String palette.
field width and decimal precision; plain “%e” or “%d” generally does the
job for floating-point or integer values, respectively.
In Figure 4.7, a two-dimensional (2D) array, also known as a matrix,
is easily converted to and from a spreadsheet string. In this case, you
get a tab between values in a row and a carriage return at the end of
each row. If the rows and columns appear swapped in your spread-
sheet program, insert the array function Transpose 2D Array before
you convert the array to text. Notice that we hooked up Spreadsheet
String To Array to a different type specifier, this time a long integer
(I32). Resolution was lost because integers don’t have a fractional part;
this demonstrates that you need to be careful when mixing data types.
We also obtained that type specifier by a different method—an array
constant, which you can find in the array function palette. You can
create constants of any data type on the diagram. Look through the
function palettes, and you’ll see constants everywhere. In this case,
choose an array constant, and then drag a numeric constant into it.
This produces a 1D numeric array. Then pop up on the array and select
Add Dimension to make it 2D.
Figure 4.7 Converting an array to a table and back, this time using a 2D array (matrix).
A more general solution to converting arrays to strings is to use a
For Loop with a shift register containing one of the string conversion
functions and Concatenate Strings. Sometimes this is needed for more
complicated situations where you need to intermingle data from several
arrays, you need many columns, or you need other information within
rows. Figure 4.8 uses these techniques in a situation where you have a
2D data array (several channels and many samples per channel), another
array with time stamps, and a string array with channel names.
The names are used to build a header in the upper For Loop, which is
then concatenated to a large string that contains the data. This business
can be a real memory burner (and slow as well) if your arrays are large.
The strings compound the speed and memory efficiency problem.
This is one of the few cases where it can be worth writing a Code
Interface node, which may be the only way to obtain acceptable per-
formance with large arrays and complex formatting requirements. If
you’re doing a data acquisition system, try writing out the data as you
collect it—processing and writing out one row at a time—rather than
saving it all until the end. On the other hand, file I/O operations are
pretty slow, too, so some optimization is in order.
Arrays
Any time you have a series of numbers or any other data type that needs
to be handled as a unit, it probably belongs in an array. Most arrays
are one-dimensional (1D, a column or vector), a few are 2D (a matrix),
and some specialized data sets require 3D or greater. LabVIEW permits
you to create arrays of numerics, strings, clusters, and any other data
type (except for arrays of arrays). Arrays are often created by loops, as
shown in Figure 4.9. For Loops are the best because they preallocate
the required memory when they start. While Loops can’t; LabVIEW
has no way of knowing how many times a While Loop will cycle, so the
memory manager will have to be called occasionally, slowing execution
somewhat.
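The preallocate-versus-grow trade-off exists in text languages too. A C
illustration, with arbitrary sizes and a made-up stopping condition for the
unknown-length case:

#include <stdlib.h>
#include <stdio.h>

/* For Loop analogy: the element count is known up front, so one allocation does it. */
static double *build_known_size(size_t n)
{
    double *a = malloc(n * sizeof *a);
    for (size_t i = 0; a && i < n; i++)
        a[i] = (double)i;
    return a;
}

/* While Loop analogy: the count isn't known, so the buffer must grow,
 * which means repeated calls to the memory manager (realloc). */
static double *build_unknown_size(size_t *count)
{
    double *a = NULL;
    size_t used = 0, cap = 0;
    double x = 0.0;
    while ((x = rand() / (double)RAND_MAX) < 0.999) {   /* stop condition not known in advance */
        if (used == cap) {
            cap = cap ? cap * 2 : 16;
            double *tmp = realloc(a, cap * sizeof *a);
            if (!tmp) { free(a); return NULL; }
            a = tmp;
        }
        a[used++] = x;
    }
    *count = used;
    return a;
}

int main(void)
{
    size_t n = 0;
    double *fixed = build_known_size(1000);
    double *grown = build_unknown_size(&n);
    printf("grown array ended up with %zu elements\n", n);
    free(fixed);
    free(grown);
    return 0;
}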
You can also create an array by using the Build Array function
(Figure 4.10). Notice the versatility of Build Array: It lets you concat-
enate entire arrays to other arrays, or just tack on single elements.
It’s smart enough to know that if one input is an array and the other
is a scalar, then you must be concatenating the scalar onto the array
Figure 4.9 Creating arrays using a For Loop. This is an efficient way to build arrays with many elements.
A While Loop would do the same thing, but without preallocating memory.
Figure 4.10 Using the Build Array function. Use the pop-up
menu on its input terminals to determine whether the input is
an element or an array. Note the various results.
and the output will be similar to the input array. If all inputs have the
same dimensional size (for instance, all are 1D arrays), you can pop
up on the function and select Concatenate Inputs to concatenate them
into a longer array of the same dimension (this is the default behavior).
Alternatively, you can turn off concatenation to build an array with
n + 1 dimensions. This is useful when you need to promote a 1D array
to 2D for compatibility with a VI that requires 2D data.
To handle more than one input, you can resize the function by dragging
at a corner. Note the coercion dots where the SGL (single-precision
floating point) and I16 (2-byte integer) numeric types are wired to the
top Build Array function. This indicates a change of data type because
an array cannot contain a mix of data types. LabVIEW must promote
all types to the one with the greatest numeric range, which in this
case is DBL (double-precision floating point). This also implies that you
can’t build an array of, say, numerics and strings. For such intermixing,
you must turn to clusters, which we’ll look at a bit later.
Figure 4.11 shows how to find out how many elements are in an
array by using the Array Size function. Note that an empty array has
zero elements. You can use Index Array to extract a single element.
Like most LabVIEW functions, Index Array is polymorphic and will
return a scalar of the same type as the array.
If you have a multidimensional array, these same functions still
work, but you have more dimensions to keep track of (Figure 4.12).
Array Size returns an array of values, one per dimension. You have
to supply an indexing value for each dimension. An exception occurs
when you wish to slice the array, extracting a column or row of data,
as in the bottom example of Figure 4.12. In that case, you resize Index
Array, but don’t wire to one of the index inputs. If you’re doing a lot of
work with multidimensional arrays, consider creating special subVIs
Figure 4.11 How to get the size of an array and fetch a single value. Remember
that all array indexing is based on 0, not 1.
to do the slicing, indexing, and sizing. That way, the inputs to the subVI
have names associated with each dimension, so you are not forced
to keep track of the purpose for each row or column. Another tip for
multidimensional arrays is this: It’s a good idea to place labels next to
each index on an array control to remind you which is which. Labels
such as row and column are useful.
Figure 4.12 Sizing, indexing, and slicing 2D arrays are a little different, since you
have two indices to manipulate at all steps.
Figure 4.13 Two of the more powerful array editing functions are Delete From Array and Replace Array Subset.
Like all the array functions, they work on arrays of any dimension.
Besides slicing and dicing, you can easily do some other useful array
editing tasks. In Figure 4.13, the Delete From Array function removes
a selected column from a 2D array. The function also works on arrays of
other dimensions, as do all the array functions. For instance, you can delete
a selected range of elements from a 1D array. In that case, you wire a
value to the Length input to limit the quantity of elements to delete.
The Replace Array Subset function is very powerful for array
surgery. In this example, it’s shown replacing two particular values in
a 3 × 3 matrix. You can replace individual values or arrays of values
in a single operation. One other handy function not shown here is
Insert Into Array. It does just what you’d expect: Given an input
array, it inserts an element or new subarray at a particular location.
When we didn’t have these functions, such actions required extensive
array indexing and building, which took up valuable diagram space
and added much overhead. You newcomers have it easy.
Initializing arrays
Sometimes you need an array that is initialized when your program
starts, say, for a lookup table. There are many ways to do this, as shown
in Figure 4.14.
■ If all the values are the same, use a For Loop with a constant inside.
Disadvantage: It takes a certain amount of time to create the array.
■ Use the Initialize Array function with the dimension size input
connected to a constant numeric set to the number of elements. This
is equivalent to the previous method, but more compact.
■ Similarly, if the values can be calculated in some straightforward
way, put the formula in a For Loop instead of a constant. For instance,
a special waveform or function could be created in this way.
■ Use the Initialize Array function with the dimension size input
unconnected. This is functionally equivalent to the For Loop with
N = 0.
■ Use a diagram array constant. Select Empty Array from its Data
Operations pop-up menu.
Note that you can’t use the Build Array function. Its output always
contains at least one element.
Functions that can reuse buffers: Multiply, Increment, Replace Array
Element, a For Loop indexing and reassembling an array, Bundle, and a
Code Interface Node with an input-output terminal.
Functions that cannot reuse buffers: Build Array, Concatenate Strings,
Array To Spreadsheet String, and Transpose 2D Array.
Figure 4.15 Here are some of the operations that you can count on for predictable reuse or non-reuse of
memory. If you deal with large arrays or strings, think about these differences.
Figure 4.16 Use Replace Array Element inside a For Loop instead of Build Array
to permit reuse of an existing data buffer. This is much faster. The bottom example
uses five memory buffers compared to two on the top.
So at least you can keep track of that much. By the way, it is consid-
ered good form to label your array controls and wires on the diagram
as we did in this example. When you access a multidimensional array
with nested loops as in the previous example, the outer loop accesses
the top index and the inner loop accesses the bottom index. Figure 4.17
summarizes this array index information. The Index Array function
even has pop-up tip strips that appear when you hold the wiring tool
over one of the index inputs. They call the index inputs column, row,
and page, just as we did in this example.
All this memory reuse business also adds overhead at execution time
because the memory manager has to be called. Talk about an over-
worked manager! The poor guy has to go searching around in RAM,
looking for whatever size chunk the program happens to need. If a space
can’t be found directly, the manager has to shuffle other blocks around
until a suitable hole opens up. This can take time, especially when mem-
ory is getting tight. This is also the reason your VIs sometimes execute
faster the second time you run them: Most of the allocation phase of
memory management is done on the first iteration or run.
Similarly, when an array is created in a For Loop, LabVIEW can usu-
ally predict how much space is needed and can call the memory manager
just once. This is not so in a While Loop, since there is no way to know in
advance how many times you’re going to loop. It’s also not so when you
are building arrays or concatenating strings inside a loop—two more
situations to avoid when performance is paramount. A new feature in
LabVIEW 7.1 Professional Version shows buffer allocations on the block
diagram. Select Tools >> Advanced >> Show Buffer Allocations to
bring up the Show Buffer Allocations window. Black squares will
appear on the block diagram, showing memory allocations. The effect is
subtle, like small flakes of black pepper; but if you turn them on and off,
the allocations will jump out at you. Figure 4.16 shows what a powerful
tool it can be. The best source of information on memory management
is Application Note 168, LabVIEW Performance and Memory Manage-
ment, found online at www.ni.com.
Clusters
You can gather several different data types into a single, more manage-
able unit, called a cluster. It is conceptually the same as a record in
Pascal or a struct in C. Clusters are normally used to group related data
elements that are used in multiple places on a diagram. This reduces
wiring clutter—many items are carried along in a single wire. Clusters
also reduce the number of terminals required on a subVI. When saved
as a custom control with the typedef or strict typedef option (use the
Customize Control feature, formerly known as the Control Editor,
to create typedefs), clusters serve as data type definitions, which can
simplify large LabVIEW applications. Saving your cluster as a typedef
propagates any changes in the data structure to any code using the
typedef cluster. Using clusters is good programming practice, but it does
require a little insight as to when and where clusters are best employed.
If you’re a novice programmer, look at the LabVIEW examples and the
figures in this book to see how clusters are used in real life.
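If the struct analogy helps, a typedef’d C struct plays the same role as a
typedef cluster: one named type, defined in one place and reused everywhere.
The field names below are invented for illustration.

#include <stdio.h>

/* A type definition: change it here and every user of the type follows along,
 * just as editing a typedef cluster updates every diagram that uses it. */
typedef struct {
    double setpoint;      /* like a numeric control in the cluster */
    int    channel;
    char   units[8];      /* like a string element */
} ChannelConfig;

static void print_config(const ChannelConfig *c)  /* fewer "terminals": one pointer carries it all */
{
    printf("ch %d: %.1f %s\n", c->channel, c->setpoint, c->units);
}

int main(void)
{
    ChannelConfig cfg = { 25.0, 3, "degC" };   /* Bundle */
    cfg.setpoint = 37.5;                       /* Bundle By Name, more or less */
    print_config(&cfg);
    return 0;
}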
An important fact about a cluster is that it can contain only con-
trols or indicators, but not a mixture of both. This precludes the use
of a cluster to group a set of controls and indicators on a panel. Use
graphical elements from the Decorations palette to group controls and
indicators. If you really need to read and write values in a cluster, local
variables can certainly do the job. We would not recommend using local
variables to continuously read and write a cluster because the chance
for a race condition is very high. It’s much safer to use a local variable
to initialize the cluster (just once), or perhaps to correct an errant input
or reflect a change of mode. Rule: For highly interactive panels, don’t
use a cluster as an input and output element.
Clusters are assembled on the diagram by using either the Bundle
function (Figure 4.18) or the Bundle By Name function (Figure 4.19).
The data types that you connect to these functions must match the data
types in the destination cluster (numeric types are polymorphic; for
instance, you can safely connect an integer type to a floating-point type).
The Bundle function has one further restriction: The elements must be
connected in the proper order. There is a pop-up menu available on the
cluster border called Reorder Controls in Cluster. . . that you use to
set the ordering of elements. You must carefully watch cluster ordering.
Two otherwise identical clusters with different element orderings can’t
be connected. An exception to this rule, one which causes bugs that are
difficult to trace, occurs when the misordered elements are of similar
data type (for instance, all are numeric). You can legally connect the
misordered clusters, but element A of one cluster may actually be
passed to element B of the other. This blunder is far too common and is
one of the reasons for using Bundle By Name.
To disassemble a cluster, you can use the Unbundle or the Unbun-
dle By Name function. When you create a cluster control, give each
element a reasonably short name. Then, when you use Bundle By Name
or Unbundle By Name, the name doesn’t take up too much space on the
diagram. There is a pop-up menu on each of these functions (Select
Item) with which you select the items to access. Named access also
makes the diagram self-documenting.
Figure 4.19 Use Bundle By Name and Unbundle By Name in preference to their no-name counterparts.
Figure 4.20 Building a cluster array (top) and building an array of clusters that contain an array
(bottom). This is distinctly different from a 2D array.
Waveforms
A handy data type, introduced in LabVIEW 6, is the Waveform. It’s a
sensible grouping of information that describes the very common situ-
ation of a one-dimensional time-varying waveform. The waveform type
is similar to a cluster containing a 1D numeric array (the data), tim-
ing information (start time and sample interval), and a variant part
containing user-definable items such as the channel’s name, units, and
error information. Native waveform controls and indicators are avail-
able from the I/O control palette, but other indicators, such as graphs,
will also adapt directly to waveforms. In the Programming palette,
there’s a subpalette called Waveform that includes a variety of basic
operations, such as building, scaling, mathematical operations, and so
forth, plus higher-level functions, such as waveform generation, mea-
surements, and file I/O.
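Conceptually (not LabVIEW’s actual internal layout), a waveform is what you
would write by hand as a structure holding the start time, the sample
interval, the samples, and a set of named attributes. A rough sketch:

#include <stdio.h>
#include <stddef.h>

/* Conceptual contents of the waveform data type, not its real representation. */
typedef struct {
    double  t0;        /* start time of the first sample */
    double  dt;        /* sample interval, in seconds */
    double *Y;         /* the 1D data array */
    size_t  n;         /* number of samples */
    /* plus variant "attributes": channel name, units, error info, ... */
} Waveform;

int main(void)
{
    double samples[4] = { 0.0, 0.5, 1.0, 0.5 };
    Waveform w = { 0.0, 0.001, samples, 4 };
    for (size_t i = 0; i < w.n; i++)           /* timestamp of each point: t0 + i*dt */
        printf("t=%.3f  y=%.2f\n", w.t0 + i * w.dt, w.Y[i]);
    return 0;
}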
Figure 4.21 shows some of the basic things you might do with a
waveform. Waveforms can come from many sources, among them the
Waveform Generation VIs, such as the Sine Waveform VI that
appears in the figure. The analog data acquisition (DAQ) VIs also
handle waveforms directly. Given a waveform, you can either process it
as a unit or disassemble it for other purposes. In this example, we see
one of the Waveform Measurement utilities, the Basic Averaged
DCRMS VI, which extracts the dc and rms ac values from a signal. You
can explicitly access waveform components with the Get Waveform
Components function, which looks a lot like Unbundle By Name.
Figure 4.21 Here are the basics of the waveform data type. You can access individual components
or use the waveform utility VIs to generate, analyze, or manipulate waveforms.
Figure 4.23 The stealth component of waveforms, called attributes, can be accessed when you need them. They
are variant data types, which require some care during access.
Figure 4.24 Polymorphism in action. Think how complicated this would be if the functions didn’t adapt
automatically. The bottom example shows the limits of polymorphism. How in the world could you add a
boolean or a string? If there were a way, it would be included in LabVIEW.
First, the value has to be interpreted in some way, requiring that a spe-
cial conversion program be run. Second, the new data type may require
more or less memory, so the system’s memory manager may need to be
called. By now you should be getting the idea that conversion is some-
thing you may want to avoid, if only for performance reasons.
Conversion is explicitly performed by using one of the functions from
the Conversion menu. They are polymorphic, so you can feed them sca-
lars (simple numbers or booleans), arrays, clusters, and so on, as long
as the input makes some sense. There is another place that conversions
occur, sometimes without you being aware. When you make a connec-
tion, sometimes a little gray dot appears at the destination’s
terminal. This is called a coercion dot, and it performs exactly the
same operation as an explicit conversion function. One other warning
about conversion and coercion: Be wary of lost precision.
A DBL floating point can take on values up to about 10^308, and an EXT
even more.
If you converted such a big number to an unsigned byte (U8), with
a range of only 0 to 255, then clearly the original value could be lost.
It is generally good practice to modify numeric data types to eliminate
coercion because it reduces memory usage and increases speed. Use
the Representation pop-up menu item on controls, indicators, and
diagram constants to adjust the representation.
Figure 4.25 A cluster of booleans makes a nice user-interface item, but it’s hard to interpret. Here are two
solutions that find the true bit. The bottom solution turned out to be the faster.
only controls of the same type (you can’t arbitrarily mix strings, numer-
ics, etc.). Once you have an array of booleans, the figure shows two ways
to find the element that is True. In the upper solution, the Boolean
Array To Number function returns a value in the range of 0 to 2^32 − 1
based on the bit pattern in the boolean array. Since the bit that is set
must correspond to a power of 2, you then take the log2 of that number,
which returns a number between 0 and 32 for each button and −1 for
no buttons pressed. Pretty crafty, eh? But the bottom solution turns out
to be faster. Starting with the boolean array, use Search 1D Array to
find the first element that is true. Search 1D Array returns the element
number, which is again a number between 0 and 32, or −1 for no buttons.
This number could then be passed to the selection terminal in a Case
structure to take some action according to which switch was pressed.
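In C terms, the two approaches amount to taking the log2 of the packed bit
pattern versus scanning for the first true element. Both are shown below
with a made-up button array.

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Simulate a cluster of 8 buttons where only button 5 is pressed. */
    int buttons[8] = { 0, 0, 0, 0, 0, 1, 0, 0 };

    /* Approach 1: Boolean Array To Number, then log2 of the single set bit. */
    unsigned pattern = 0;
    for (int i = 0; i < 8; i++)
        if (buttons[i]) pattern |= 1u << i;
    int which_log = pattern ? (int)log2((double)pattern) : -1;

    /* Approach 2: Search 1D Array, i.e., walk the array for the first TRUE. */
    int which_scan = -1;
    for (int i = 0; i < 8; i++)
        if (buttons[i]) { which_scan = i; break; }

    printf("log2 method: %d, scan method: %d\n", which_log, which_scan);
    return 0;
}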
Figure 4.26 shows a way to use a nice-looking set of Ring Indica-
tors in a cluster as a status indicator. An array of I32 numerics is con-
verted to a cluster by using the Array To Cluster function. A funny
thing happens with this function. How does LabVIEW know how many
elements belong in the output cluster (the array can have any number
of elements)? For this reason, a pop-up item on Array To Cluster called
Cluster Size was added. You have to set the number to match the
indicator (5, in this case) or you’ll get a broken wire.
One of the most powerful ways to change one data type to another is
type casting. As opposed to conversions, type casting changes only the
type descriptor. The data component is unchanged. The data is in no
way rescaled or rearranged; it is merely interpreted in a different way.
The good news is that this process is very fast, although a new copy of
Figure 4.26 A numeric array is converted to a cluster of Ring Indicators (which are of type I32) by using Array
To Cluster. Remember to use the pop-up item on the conversion function called Cluster Size for the number of
cluster elements. In this case, the size is 5.
Figure 4.27 Lifted from the HP 54510 driver, this code segment strips the header off
a data string, removes a trailing linefeed character, then type casts the data to a U16
integer array, which is finally converted to an EXT array.
the incoming data has to be made, requiring a call to the memory man-
ager. The bad news is that you have to know what you’re doing! Type
casting is a specialized operation that you will very rarely need, but
if you do, you’ll find it on the Data Manipulation palette. LabVIEW
has enough polymorphism and conversion functions built in that you
rarely need the Type Cast function.
The most common use for the Type Cast function is shown in
Figure 4.27, where a binary data string returned from an oscilloscope
is type cast to an array of integers. Notice that some header informa-
tion and a trailing character had to be removed from the string before
casting. Failure to do so would leave extra garbage values in the resul-
tant array. Worse yet, what would happen if the incoming string were
off by 1 byte at the beginning? Byte pairs, used to make up I16 integers,
would then be incorrectly paired. Results would be very strange. Note
that the Type Cast function will accept most data types except for clus-
ters that contain arrays or strings.
Warning: The Type Cast function expects a certain byte ordering,
namely big-endian, or most significant byte first to guarantee porta-
bility between platforms running LabVIEW. But there are problems
interpreting this data outside of LabVIEW. Big-endian is the normal
ordering for the Macintosh and Sun, but not so on the PC! This is an
example where your code, or the data you save in a binary file, may not
be machine-independent.
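In C, the equivalent of the Type Cast step is reinterpreting raw bytes, and
the byte-ordering warning applies just as strongly. Here is a hedged sketch
that assembles big-endian byte pairs into host-order 16-bit integers; the
sample bytes are invented.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Pretend this is the instrument's binary payload after the header
     * and trailing linefeed have been stripped: two I16 values, big-endian. */
    const unsigned char payload[] = { 0x01, 0x02, 0xFF, 0xFE };
    size_t n = sizeof payload / 2;
    int16_t values[2];

    for (size_t i = 0; i < n; i++) {
        /* Assemble most-significant byte first (big-endian), regardless of host order. */
        uint16_t u = (uint16_t)((payload[2 * i] << 8) | payload[2 * i + 1]);
        values[i] = (int16_t)u;
    }
    printf("%d %d\n", values[0], values[1]);   /* 258 and -2 */
    return 0;
}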
Indeed, there is much trouble in type casting land, and you should try
to use polymorphism and conversion functions whenever possible. Con-
sider Figure 4.28. The first example uses the Type Cast function, while
the second example accomplishes exactly the same operation—writing
a binary image of an array to disk—in a much clearer, more concise
way. Keep looking through the function palettes if the particular data
type compatibility that you need is not apparent. The bottom example
in Figure 4.28 shows how flexible the LabVIEW primitives are. In this
case, the Write File function accommodates any imaginable data type
without putting you through type casting obfuscation.
Figure 4.28 “Eschew obfuscation,” the English teacher said. Look for ways to avoid type casting and simplify
your diagrams. Higher-level functions and primitives are extremely flexible.
If you get into serious data conversion and type casting exercises, be
careful, be patient, and prepare to explore the other data conversion
functions in the Data Manipulation palette. Sometimes the bytes are
out of order, as in the PC versus Mac situation. In that case, try Swap
Bytes or Swap Words, and display the data in a numeric indicator
with its format set to hexadecimal, octal, or binary (use the pop-up
menu Format and Precision). Some instruments go so far as to send
the data backward; that is, the first value arrives last. You can use
Reverse Array or Reverse String to cure that nasty situation. Split
Number and Join Number are two other functions that allow you to
directly manipulate the ordering of bytes in a machine-independent
manner. Such are the adventures of writing instrument drivers, a topic
covered in detail in Chapter 10, “Instrument Driver Basics.”
Figure 4.29 Use flattened data when you need to transmit complicated data types
over a communications link or store them in a binary file. LabVIEW 8 lets you set
byte order and optionally prepend the data string’s size.
The data string also contains embedded header information for nons-
calar items (strings and arrays), which are useful when you are trying
to reconstruct flattened data. Figure 4.29 shows Flatten To String in
action, along with its counterpart Unflatten From String. The data
string could be transmitted over a network, or stored in a file, and then
read and reconstructed by Unflatten From String. As with all binary
formats, you must describe the underlying data format to the read-
ing program. Unflatten From String requires a data type to properly
reconstruct the original data. In LabVIEW 7.x and previous versions,
the data type descriptor was available as an I16 array. LabVIEW 8’s
data type descriptors are 32-bit, and the type descriptor terminal is no
longer available on Flatten to String. If you still need the type descrip-
tor, select Convert 7.x Data from Flatten to String’s pop-up menu.
The most common uses for flattened data are transmission over a
network or storage to a binary file. A utility VI could use this technique
to store and retrieve clusters on disk as a means of maintaining front-
panel setup information.
Figure 4.30 Enumerated types (enums) are integers with associated text. Here are some useful tips for
comparison and conversion.
Figure 4.30 shows some precautions and tricks for enums. One sur-
prise comes when you attempt to do math with enums. They’re just
integers, right? Yes, but their range is limited to the number of enu-
merated items. In this example, there are three values {Red, Grn, Blu},
so when we attempt to add one to the maximum value (Blu, or 2), the
answer is Red—the integer has rolled over to 0.
You can also do some nonobvious conversions with enums, as shown
in the bottom of Figure 4.30. Given an enum, you can extract the string
value by using Format Into String with a format specifier of %s.
Similarly, you can match an incoming string value with the possible
string equivalents in an enum by using Scan From String. There is
a risk of error here, however: The incoming string has to be an exact
case-sensitive match for one of the possible enum values. Otherwise
Scan From String will return an error, and your output enum will be
set to the value of the input enum. To distinguish this result from a
valid conversion, you could add a value to your enum called Invalid
and handle that value appropriately.
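C programmers face the same two issues, arithmetic past the last value and
mapping between names and numbers, although C will not roll the value over
for you. A small sketch using hypothetical color names:

#include <stdio.h>
#include <string.h>

typedef enum { RED, GRN, BLU, N_COLORS } color_t;
static const char *color_names[N_COLORS] = { "Red", "Grn", "Blu" };

/* String-to-enum, case-sensitive like Scan From String; returns -1 if invalid. */
static int color_from_string(const char *s)
{
    for (int i = 0; i < N_COLORS; i++)
        if (strcmp(s, color_names[i]) == 0) return i;
    return -1;
}

int main(void)
{
    color_t c = BLU;
    /* LabVIEW wraps Blu + 1 around to Red; in C you must wrap explicitly. */
    c = (color_t)((c + 1) % N_COLORS);
    printf("Blu + 1 -> %s\n", color_names[c]);                 /* Red */
    printf("\"Grn\" -> %d\n", color_from_string("Grn"));       /* 1 */
    printf("\"green\" -> %d\n", color_from_string("green"));   /* no match */
    return 0;
}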
Figure 4.31 Making a fixed-length, null-terminated string for compatibility with the C language. The key is the
Array To Cluster conversion that produces a fixed number of elements. This is a really obscure example.
Chapter 5
Timing
Figure 5.1 Write a simple timer test VI to check the resolution of your system’s LabVIEW ticker.
Intervals
Traditionally if you want a loop to run at a nice, regular interval, the
function to use is Wait Until Next ms Multiple. Just place it inside the
loop structure, and wire it to a number that’s scaled in milliseconds. It
waits until the tick count in milliseconds becomes an exact multiple
of the value that you supply. If several VIs need to be synchronized,
this function will help there as well. For instance, two independent
VIs can be forced to run with harmonically related periods such as
100 ms and 200 ms, as shown in Figure 5.2. In this case, every 200 ms,
you would find that both VIs are in sync. The effect is exactly like the
action of a metronome and a group of musicians—it’s their heartbeat.
This is not the case if you use the simpler Wait (ms) function; it just
guarantees that a certain amount of time has passed, without regard
to absolute time. Note that both of these timers work by adding
activity to the loop. That is, the loop can’t go on to the next cycle
until everything in the loop has finished, and that includes the timer.
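The difference between the two timers is simple arithmetic: Wait Until Next
ms Multiple sleeps until the tick count reaches the next multiple of the
interval, while Wait (ms) just sleeps for a fixed span. A sketch of the
next-multiple calculation, using a made-up tick value:

#include <stdio.h>

/* Given the current tick count in ms, when would "Wait Until Next ms Multiple"
 * wake a loop that is set to a 100-ms multiple? */
static long next_multiple(long now_ms, long interval_ms)
{
    return ((now_ms / interval_ms) + 1) * interval_ms;
}

int main(void)
{
    long now = 12345;                       /* pretend tick count, in ms */
    long wake = next_multiple(now, 100);    /* 12400: aligned to the metronome */
    printf("now=%ld  sleep %ld ms  wake at %ld\n", now, wake - now, wake);

    /* Wait (ms), by contrast, would wake at now + 100 = 12445,
     * with no relationship to the 100-ms grid. */
    return 0;
}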
Timed structures
There are three programming structures in LabVIEW 8 with built-in
scheduling behavior: a Timed Loop, a Timed Sequence, and a com-
bination of the two called a Timed Loop with Frames. Timed Loops
are most useful when you have multiple parallel loops you need to exe-
cute in a deterministic order. LabVIEW’s default method of executing
code is multiple threads of time-sliced multitasking execution engines
with a dash of priorities. In other words, your code will appear to run
in parallel, but it is really sharing CPU time with the rest of your
application (and everything else on your computer). Deterministically
controlling the execution order of one loop over another is an incredibly
difficult task—until now.
Timed Loops are as easy to use as a While Loop and ms Timer combi-
nation. The Timed Loop in Figure 5.3 executes every 1 ms and calculates
the elapsed time. You configure the loop rate by adjusting the period, or
dt. You can also specify an offset (t0) for when the loop should start. With
these two settings you can configure multiple loops to run side by side
with the same period but offset in time from each other. How to configure
Figure 5.3 Timed Loops have a built-in scheduling mechanism. You can configure a Timed Loop’s behavior dynam-
ically at run time through terminals on the Input node, or statically at edit time through a configuration dialog.
the period and offset is all that most of us need to know about Timed
Loops, but there is much more. You can change everything about a Timed
structure’s execution, except the name and the timing source, on the fly
from within the structure. You can use the execution feedback from the
Left Data Node in a process control loop to tell whether the structure
started on time or finished late in the previous execution. Time-critical
processes can dynamically tune the Timed structure’s execution time
and priority according to how timely it executed last time. The left data
node even provides nanosecond-resolution timing information you can
use to benchmark performance.
Timing sources
Timed structures (loops and sequences) run off a configurable timing
source. Figure 5.4 shows a typical configuration dialog. The default
source is a 1-kHz clock (1-ms timer) on a PC. On an RT system with a
Pentium controller, a 1-MHz clock is available, providing a 1-µs timer.
Other configuration options are the Period and Priority. In our dialog
the loop is set to execute every 1000 ms by using the PC’s internal
1-kHz clock. Note that you can also use a timing source connected to
the structure’s terminal. The timing source can be based on your com-
puter’s internal timer or linked through DAQmx to a hardware-timed
DAQ event with DAQmx Create Timing Source.vi (Figure 5.5). You
can even chain multiple timing sources together by using Build Tim-
ing Source Hierarchy.vi. One example in which multiple timing
sources could come in handy is if you wanted a loop to execute based on
time or every time a trigger occurred.
Figure 5.4 Configuration dialog for a Timed Loop with a single frame.
Figure 5.5 Timed structures can be synchronized to external timing sources with DAQmx.
Figure 5.6 Frames in a multiframe Timed structure can have start times relative to the completion of the previous
frame. The effect is to put dead time between each frame. Frame 2 starts 200 ms after frame 1 completes, frame 3
starts 500 ms after frame 2, and frame 4 starts 200 ms after frame 3.
You can change a Timed Loop’s priority on the fly through the Right
Data node. The Timed Loop’s priority is a pos-
itive integer between 1 and 2,147,480,000. But don’t get too excited
since the maximum number of Timed structures you can have in mem-
ory, running or not, is 128.
Timed structures at the same priority do not multitask between
each other. The scheduler is preemptive, but not multitasking. If two
structures have the same priority, then dataflow determines which
one executes first. Whichever structure starts execution first will fin-
ish before the other structure can start. Each Timed Loop runs to
completion unless preempted by a higher-priority Timed Loop, a VI
running at Time Critical priority, or the operating system. It is this
last condition that kills any hope of deterministic operation on a desk-
top operating system.
On a real-time operating system, Timed structures are deterministic,
meaning the timing can be predicted. This is crucial on an RTOS; but
on a non-real-time operating system (such as Windows), Timed struc-
tures are not deterministic. This doesn’t mean that you can’t use Timed
structures on Windows; just be aware that the OS can preempt your
structure at any time to let a lower-priority task run. A real-time oper-
ating system runs code based on priority (higher-priority threads pre-
empt lower-priority threads) while a desktop OS executes code based
on “fairness”; each thread gets an opportunity to run. This is great
when you want to check e-mail in the background on your desktop, but
terrible for a process controller. If you need deterministic execution,
then use Timed structures on real-time systems such as LabVIEW RT.
They are powerful and easy to use.
Timing guidelines
All LabVIEW’s platforms have built-in multithreading and preemp-
tive multitasking that help to mitigate the effects of errant code con-
suming all the CPU cycles. Even if you forget to put in a timer and
start running a tight little loop, every desktop operating system will
eventually decide to put LabVIEW to sleep and to let other programs
run for a while. But rest assured that the overall responsiveness of
your computer will suffer, so please put timers in all your loops. For
more information on how the LabVIEW execution system works, see
Application Note 114, Using LabVIEW to Create Multithreaded VIs for
Maximum Performance and Reliability.
Here’s when to use each of the timers:
■ Highly regular loop timing—use Wait Until Next ms Multiple or a
Timed Loop.
■ Many parallel loops with regular timing—use Wait Until Next ms
Multiple or Timed Loops.
■ Arbitrary, asynchronous time delay to give other tasks some time to
execute—use Wait (ms).
■ Single-shot delays (as opposed to cyclic operations, such as loops)—
use Wait (ms) or a Timed Sequence.
Figure 5.8 How to switch between the various formats for system time, particularly getting in and out of the
date/time rec cluster.
handle epoch seconds if you give them a little help. In Microsoft Excel,
for instance, you must divide the epoch seconds value by 86,400, which
is the number of seconds in 1 day. The result is what Excel calls a
Serial Number, where the integer part is the number of days since the
zero year and the fractional part is a fraction of 1 day. Then you just
format the number as date and time. Again, watch out for the numeric
precision problem if you’re importing epoch seconds.
For long-term stability, you might get away with setting your com-
puter’s clock with one of the network time references; such options
are built into all modern operating systems. They guarantee NIST-
traceable long-term stability, but there is, of course, unknown latency
in the networking that creates uncertainty at the moment your clock
is updated.
Excellent time accuracy can be obtained by locking to signals received
from the Global Positioning System (GPS) satellites, which provide a
1-Hz clock with an uncertainty of less than 100 ns and essentially no
long-term drift. The key to using GPS timing sources is to obtain a
suitable interface to your computer and perhaps a LabVIEW driver.
One GPS board we like is the TSAT cPCI from KSI. It has a very com-
plete LabVIEW driver with some extra features such as the ability to
programmatically toggle a TTL line when the GPS time matches an
internal register. You can use this programmable Match output to hard-
ware-gate—a data acquisition task. Keep in mind the ultimate limita-
tion for precision timing applications: software latency. It takes a finite
amount of time to call the program that fetches the time measurement
or that triggers the timing hardware. If you can set up the hardware in
such a way that there is no software “in the loop,” very high precision
is feasible. We placed TSAT cPCI cards into multiple remote PXI data
acquisition systems and were able to synchronize data collection with
close to 100 ns of accuracy. This was essential for our application, but a
GPS board is probably overkill for most applications.
Perhaps the easiest way to gain timing resolution is to use a National
Instruments data acquisition (DAQ) board as a timekeeper. All DAQ
boards include a reasonably stable crystal oscillator time base with
microsecond (or better) resolution that is used to time A/D and D/A
conversions and to drive various countertimers. Through the DAQ VI
library, you can use these timers to regulate the cycle time of software
loops, or you can use them as high-resolution clocks for timing short-
term events. This DAQ solution is particularly good with the onboard
counter/timers for applications such as frequency counting and time
interval measurement.
But what if the crystal clock on your DAQ board is not good enough?
Looking in the National Instruments catalog at the specifications
for most of the DAQ boards, we find that the absolute accuracy is
± 0.01 percent, or ± 100 ppm, with no mention of the temperature
coefficient. Is that sufficient for your application? For high-quality
calibration work and precision measurements, probably not. It might
even be a problem for frequency determination using spectral estimation.
In the LabVIEW analysis library, there’s a VI called Extract Single Tone
Information that locates, with very high precision, the fundamental
frequency of a sampled waveform. If the noise level is low, this VI can
Bibliography
Application Note 200, Using the Timed Loop to Write Multirate Applications in LabVIEW,
www.ni.com, National Instruments Corporation, 11500 N. Mopac Expressway, Austin,
Tex., 2004.
Chapter 6
Synchronization
When you build a LabVIEW application, you’ll eventually find the need
to go beyond simple sequencing and looping to handle all the things
that are going on in the world outside your computer. A big step in
programming complexity appears when you start handling events. An
event is usually defined as something external to your program that
says, “Hey, I need service!” and demands that service in a timely fash-
ion. A simple example is a user clicking a button on the panel to start
or stop an activity. Other kinds of events come from hardware, such as
a GPIB service request (SRQ) or a message coming in from a network
connection. Clearly, these random events don’t fall within the simple
boundaries of sequential programming.
There are lots of ways to handle events in LabVIEW, and many of
them involve creating parallel paths of execution where each path is
responsible for a sequential or nonsequential (event-driven) activity.
Since G is intrinsically parallel, you can create an unlimited num-
ber of parallel loops or VIs and have confidence that they’ll run with
some kind of timesharing. But how do you handle those “simultaneous”
events in such a way as to avoid collisions and misordering of execution
when it matters? And what about passing information between loops
or VIs? The complexity and the list of questions just seem to grow and
grow.
LabVIEW has some powerful synchronization features, and there are
standard techniques that can help you deal with the challenges of par-
allel programming and event handling. By the way, this is an advanced
topic, so if some of the material here seems strange, it’s because you
might not need many of these features very often (we’ll start with the
easier ones, so even you beginners can keep reading). But once you see
how these features work, you’ll soon become an armed and dangerous
LabVIEW dude.*
Polling
Let’s start off with a straightforward event-handling technique that
does not use any esoteric functions or tricks: polling. When you are
polling, you periodically check the status of something (a flag), look-
ing for a particular value or change of value. Polling is very simple to
program (Figure 6.1), is easy to understand and debug, is very widely
used (even by highly experienced LabVIEW dudes!), and is acceptable
practice for thousands of applications in all programming languages.
The main drawback to polling is that it adds some overhead because
of the repeated testing and looping. (If you do it wrong, it can add lots
of overhead.)
In the left example in Figure 6.1, we’re doing a very simple
computation—the comparison of two values—and then deciding
whether to take action. There’s also a timer in the loop that guaran-
tees that the computation will be done only 10 times per second. Now
that’s pretty low overhead. But what if we left the timer out? Then
LabVIEW would attempt to run the loop as fast as possible—and
that could be millions of times per second—and it might hog the
CPU. Similarly, you might accidentally insert some complex calcula-
tion in the loop where it’s not needed, and each cycle of the polling
loop would then be more costly.
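Since a G diagram can't be printed as text, here's a rough Python sketch of the same polling idea. Everything in it (the simulated control, the 10-per-second rate, the 50-cycle bound) is an illustrative assumption, not part of the LabVIEW example:

    import random
    import time

    def read_control():
        # Stand-in for reading a front panel numeric control.
        return random.randint(0, 3)

    last_value = read_control()
    for _ in range(50):                  # bounded demo; a real loop runs until told to stop
        value = read_control()
        if value != last_value:          # the cheap comparison that detects the "event"
            print("Value changed:", last_value, "->", value)
            last_value = value
        time.sleep(0.1)                  # about 10 polls per second keeps overhead low

Drop the sleep and this loop spins flat out, which is exactly the CPU-hogging mistake described above.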
Figure 6.1 A simple polling loop handles a user-interface event. The left example
watches for a change in a numeric control. The right example awaits a True value
from the Boolean button.
*There really is an official LabVIEW Dude. On the registry system at Lawrence Liver-
more National Lab, Gary got tired of getting confused with the other two Gary Johnsons
at the Lab, so he made his official nickname “LabVIEW Dude.” It was peculiar, but effec-
tive. Just like Gary.
The other drawback to polling is that you only check the flag’s state
every so often. If you’re trying to handle many events in rapid suc-
cession, a polling scheme can be overrun and miss some events. An
example might be trying to handle individual bits of data coming in
through a high-speed data port. In such cases, there is often a hard-
ware solution (such as the hardware buffers on your serial port inter-
face) or a lower-level driver solution (such as the serial port handler in
your operating system). With these helpers in the background doing
the fast stuff, your LabVIEW program can often revert to polling for
larger packets of data that don’t come so often.
Events
Wouldn’t it be great if we had a way to handle user-interface events
without polling? We do and it’s called the Event structure. Remem-
ber, an event is a notification that something has happened. The Event
structure bundles handling of user-interface events (or notifications)
into one structure without the need for polling front panel controls, and
with a lot fewer wires! The Event structure sleeps without consuming
CPU cycles until an event occurs, and when it wakes up, it automati-
cally executes the correct Event case. The Event structure doesn’t miss
any events and handles all events in the order in which they happen.
The Event structure looks a lot like a Case structure. You can add,
edit, or delete events through the Event structure’s pop-up menu.
Events are broken down into two types: Notify events and Filter
events. A Notify event lets you know that something happened and
LabVIEW has handled it. Pressing a key on the keyboard can trig-
ger an event. With a Notify event, LabVIEW lets you know the key was pressed so you can use it in your block diagram, but all you can do is react to the event. Figure 6.2 shows a Key Down Notify event. A Filter
event (Figure 6.3) allows you to change the event’s data as it happens
or even discard the event. Filter events have the same name as Notify
events, but end with a question mark (“?”).
The Event structure can save a lot of time and energy, but it can
also drive you crazy if you aren’t alert. The default configuration of
all events is to lock the front panel until the Event case has finished
execution. If you put any time-consuming processing inside the Event
structure, your front panel will lock up until the event completes. Rule:
Handle all but the most basic processing external to the Event structure.
Later in this chapter we’ll show you a powerful design pattern you can
use to pass commands from an event loop to a parallel processing loop
with queues.
Use events to enhance dataflow, not bypass it. This is especially true
with the mechanical action of booleans. A latching boolean does not
reset until the block diagram has read its value. If your latching boolean is left outside the Event structure, floating on the block diagram, it will never reset. Be
sure to place any latching booleans inside their Event case so their
mechanical action will be correct. Changing a control’s value through
a local variable will not generate an event – use the property “Value
(Signaling)” to trigger an event.
The advanced topic of Dynamic Events allows you to define and
dynamically register your own unique events and pass them between
subdiagrams. Dynamic events can unnecessarily complicate your
application, so use them only when needed and make sure to docu-
ment what you are doing and why. Figure 6.4 illustrates configuring a
dynamic event to trigger a single Event case for multiple controls. This
eliminates manually configuring an Event case for each control—a
tedious task if you have tens, or even hundreds, of controls. Property
nodes are used to get the control reference to every control on every
Figure 6.4 Use control references for dynamic event registration. A single Event case handles multiple Value
Change events.
Occurrences
An occurrence is a synchronization mechanism that allows paral-
lel parts of a LabVIEW program to notify each other when an event
has occurred. It’s a kind of software trigger. The main reason for using
occurrences is to avoid polling and thus reduce overhead. An occurrence
does not use any CPU cycles while waiting. For general use, you begin
by calling the Generate Occurrence function to create an occurrence
refnum that must be passed to all other occurrence operations.
Then you can either wait for an occurrence to happen by calling the
Wait On Occurrence function or use Set Occurrence to make an
event happen. Any number of Wait On Occurrence nodes can exist
within your LabVIEW environment, and all will be triggered simulta-
neously when the associated Set Occurrence function is called.
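If it helps to see the pattern outside of G, Python's threading.Event plays a role roughly analogous to an occurrence refnum: one object is shared, any number of waiters can block on it without burning CPU, and a single set() wakes them all. This is only a conceptual sketch, not LabVIEW's implementation:

    import threading
    import time

    occurrence = threading.Event()       # roughly: Generate Occurrence

    def waiter(name):
        print(name, "waiting on occurrence")
        occurrence.wait()                # roughly: Wait On Occurrence (sleeps, no polling)
        print(name, "triggered, doing its dependent work")

    waiters = [threading.Thread(target=waiter, args=("loop %d" % i,)) for i in range(2)]
    for t in waiters:
        t.start()
    time.sleep(1.0)                      # something else happens for a while...
    occurrence.set()                     # roughly: Set Occurrence; all waiters wake at once
    for t in waiters:
        t.join()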
Figure 6.5 is a very simple demonstration of occurrences. First, we
generate a new occurrence refnum and pass that to the Wait and Set
functions that we wish to have related to one another. Next, there’s our
old friend the polling loop that waits for the user to click the Do it but-
ton. When that happens, the occurrence is set and, as if by magic, the
event triggers the Wait function, and a friendly dialog box pops up. The
magic here is the transmission of information (an event) through thin
air from one part of the diagram to another. In fact, the Wait function
didn’t even have to be on the same piece of diagram; it could be out
someplace in another VI. All it has to have is the appropriate occur-
rence refnum.
It appears that, as with global variables, we have a new way to vio-
late dataflow programming, hide relationships among elements in our
programs, and make things occur for no apparent reason in unrelated
parts of our application. Indeed we do, and that’s why you must always
use occurrences with caution and reservation. As soon as the Set and
Wait functions become dislocated, you need to type in some comments
explaining what is going on.
Well, things seem pretty reasonable so far, but there is much more to
this occurrence game. First, there is a ms time-out input on the Wait
On Occurrence function that determines how long it will wait before
it gets bored and gives up. The default time-out value, −1, will cause
it to wait forever until the occurrence is set. If it times out, its output
is set to True. This can be a safety mechanism of a sort, when you’re
not sure if you’ll ever get around to setting the occurrence and you
want a process to proceed after a while. A very useful application for
the time-out is an abortable wait, first introduced by Lynda Gruggett
(LabVIEW Technical Resource, vol. 3, no. 2). Imagine that you have a
While Loop that has to cycle every 100 s, as shown in the left side of
Figure 6.6. If you try to stop the loop by setting the conditional termi-
nal to False, you may have to wait a long time. Instead, try the abort-
able wait (Figure 6.6, right).
To implement the abortable wait, the loop you want to time and abort
is configured with a Wait On Occurrence wired to a time-out value. When-
ever the time-out period has expired, the Wait returns True, and the
While Loop cycles again, only to be suspended once again on the Wait.
When the occurrence is set (by the lower loop, in this example), the
Wait is instantly triggered and returns False for the time-out flag, and
its loop immediately stops.
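Here's the same abortable-wait idea sketched in Python, with the period and messages made up for illustration. The loop wakes up on a timeout every cycle, but the moment the "occurrence" is set it returns immediately and the loop stops:

    import threading
    import time

    abort = threading.Event()

    def abortable_loop():
        # wait() returns False on timeout (keep cycling) and True once abort is set.
        while not abort.wait(timeout=2.0):
            print("cycle done, still running")
        print("abort seen, stopping right away")

    t = threading.Thread(target=abortable_loop)
    t.start()
    time.sleep(5.0)                      # let a couple of cycles go by
    abort.set()                          # like Set Occurrence in the lower loop
    t.join()

Note that the sense of the flag is inverted from the LabVIEW diagram (a timeout here means keep looping), but the behavior is the same: no long, unbreakable waits.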
Now for the trickiest feature of the Wait On Occurrence function: the
ignore previous input. It’s basically there to allow you to determine
whether to discard occurrence events from a previous execution of the
VI that it’s in. Actually, it’s more complicated than that, so here is a
painfully detailed explanation in case you desperately want to know
all the gory details.
Each Wait On Occurrence function “remembers” what occurrence it
last waited on and at what time it continued (either because the occur-
rence triggered or because of a time-out). When a VI is loaded, each
Wait On Occurrence is initialized with a nonexisting occurrence.
When a Wait On Occurrence is called and ignore previous is False,
there are four potential outcomes:
1. The occurrence has never been set. In this case Wait On Occurrence
simply waits.
2. The occurrence has been set since this Wait On Occurrence last exe-
cuted. In this case Wait On Occurrence does not wait.
3. The occurrence was last set before this Wait last executed, and last
time this Wait was called it waited on the same occurrence. Wait On
Occurrence will then wait.
4. The occurrence was last set before this Wait last executed, but last
time this Wait was called it waited on a different occurrence. In this
case Wait will not wait!
The first three cases are pretty clear, but the last one may seem a bit
strange. It will arise only if you have a Wait On Occurrence inside a
Notifiers
As Stepan Riha indicated, adding functionality around occurrences can
make them more useful. And that’s exactly what a notifier does: It
gives you a way to send information along with the occurrence trigger.
Although notifiers still have some of the hazards of occurrences
(obscuring the flow of data), they do a good job of hiding the low-level
trickery.
To use a notifier, follow these general steps on your diagram. First,
use the Create Notifier VI to get a notifier refnum that you pass
to your other notifier VIs. Second, place a Wait On Notification VI
Figure 6.7 A notifier sends data from the upper loop to the lower one when an
alarm limit is exceeded.
If you actually create and run this VI, you’ll find one misbehavior.
When the notifier is destroyed and the bottom loop terminates, the
Wait On Notification VI returns an empty number, which is passed to
the dialog box to be displayed as a value of 0; it’s a false alarm. To elimi-
nate this bug, all nodes that use the data string should be enclosed in
a Case structure that tests the error cluster coming from the Wait On
Notification VI. In the remainder of this chapter, you’ll see that tech-
nique used as a standard practice.
Wait On Notification includes the time-out and ignore previous features of Wait On Occurrence, and they work the same way. However, most of
the time you don’t need to worry about them. The only trick you have
to remember is to include a way to terminate any loops that are wait-
ing. The error test scheme used here is pretty simple. Alternatively, you
could send a notification that contains a message explicitly telling the
recipient to shut down.
One important point to keep in mind about notifiers is that the data
buffer is only one element deep. Although you can use a single notifier
to pass data one way to multiple loops, any loop running slower than
the notification will miss data.
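A notifier is essentially a one-element, destructive-write mailbox plus a wake-up. Here's a hedged Python analog built from a Condition variable; the class name and alarm values are invented for the sketch, and a reader that falls behind will skip intermediate values, which is exactly the limitation just described:

    import threading
    import time

    class Notifier:
        # One-element slot; every send overwrites the previous value.
        def __init__(self):
            self._cond = threading.Condition()
            self._data = None
            self._serial = 0                        # bumps on every notification

        def send(self, data):
            with self._cond:
                self._data = data
                self._serial += 1
                self._cond.notify_all()

        def wait(self, last_seen):
            with self._cond:
                self._cond.wait_for(lambda: self._serial != last_seen)
                return self._data, self._serial

    alarms = Notifier()

    def display_loop():
        seen = 0
        while seen < 3:                             # stop after the last of three sends
            value, seen = alarms.wait(seen)
            print("alarm limit exceeded, value =", value)

    t = threading.Thread(target=display_loop)
    t.start()
    for v in (3.2, 7.7, 9.9):
        alarms.send(v)                              # like sending a notification
        time.sleep(0.2)
    t.join()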
Queues
Queues function a lot like notifiers but store data in a FIFO (first in,
first out) buffer. The important differences between queues and noti-
fiers are as follows:
1. A notifier can pass the same data to multiple loops, but a notifier’s
data buffer is only one element deep. All notifier writes are destructive.
2. A queue can have an infinitely deep data buffer (within the limits
of your machine), but it’s difficult to pass the data to more than one
loop. A queue’s reads are destructive.
Figure 6.8 Alarm notification using queues. No alarms will be dropped because of user inactivity.
Use a queue when you can’t afford to miss any data passed between
parallel loops.
The asynchronous nature of queues makes for a great way to pass
messages between two loops running at different rates. Figure 6.9
shows an implementation of this using an Event structure in one loop
and a message-driven state machine in the other. The Queued Mes-
sage Handler is one of the most powerful design patterns in LabVIEW
for handling user-interface-driven applications. Notice we’re giving the
queue a unique name, to avoid namespace conflicts. All we need to
insert and remove data from a queue is the queue refnum or its name.
If another queue with the same name and a string data type already exists in memory, the two queues will use the same data buffer. This can be hard to troubleshoot!
Rule: Give each queue a unique name at creation.
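The shape of the Queued Message Handler is easy to show in a few lines of Python, with queue.Queue standing in for a named LabVIEW queue. The message strings and the "exit" convention are just assumptions for the sketch:

    import queue
    import threading

    messages = queue.Queue()                  # FIFO buffer shared by both loops

    def consumer():
        # Message-driven state machine: handle commands in the order received.
        while True:
            msg = messages.get()              # sleeps until something is enqueued
            if msg == "exit":
                break
            print("handling:", msg)

    worker = threading.Thread(target=consumer)
    worker.start()

    # The "event loop" only enqueues commands; the heavy work happens elsewhere,
    # so the user interface never locks up.
    for command in ("acquire", "analyze", "save", "exit"):
        messages.put(command)
    worker.join()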
Semaphores
With parallel execution, sometimes you find a shared resource called
by several VIs, and a whole new crop of problems pops up. A shared
resource might be a global variable, a file, or a hardware driver that
could suffer some kind of harm if it’s accessed simultaneously or in an
undesirable sequence by several VIs. These critical sections of your
program can be guarded in several ways.
Figure 6.9 Queued Message Handler. Each event is placed in the queue and handled in order by the con-
sumer loop.
task can access the critical section. If you’re trying to avoid a race con-
dition, use 1 as the size.
A task indicates that it wants to use a semaphore by calling the Acquire
Semaphore VI. When the size of the semaphore is greater than 0, the
VI immediately returns and the task proceeds. If the semaphore size is
0, the task waits until the semaphore becomes available. There's a time-
out available in case you want to proceed even if the semaphore never
becomes available. When a task successfully acquires a semaphore and
is finished with its critical section, it releases the semaphore by calling
the Release Semaphore VI. When a semaphore is no longer needed,
call the Destroy Semaphore VI. If there are any Acquire Semaphore
VIs waiting, they immediately time out and return an error.
Figure 6.11 is an example that uses a semaphore to protect access
to a global variable. Only one loop is shown accessing the critical read-
modify-write section, but there could be any number of them. Or the
critical section could reside in another VI, with the semaphore refnum
passed along in a global variable. This example follows the standard
sequence of events: Create the semaphore, wait for it to become avail-
able, run the critical section, and then release the semaphore. Destroy-
ing the semaphore can tell all other users that it’s time to quit.
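For reference, here's the same create/acquire/critical-section/release sequence in Python, protecting a read-modify-write on a shared variable. The variable and the increment are invented for the sketch; without the semaphore, the updates could interleave and lose counts:

    import threading

    sem = threading.Semaphore(1)          # size 1: one task at a time in the critical section
    shared_total = 0                      # stands in for the global variable

    def add_one():
        global shared_total
        with sem:                         # acquire; blocks if another task holds it
            value = shared_total          # read
            value = value + 1             # modify
            shared_total = value          # write
        # the semaphore is released automatically when the "with" block exits

    tasks = [threading.Thread(target=add_one) for _ in range(10)]
    for t in tasks:
        t.start()
    for t in tasks:
        t.join()
    print("total:", shared_total)         # always 10; no race condition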
Semaphores are widely used in multithreaded operating systems
where many tasks potentially require simultaneous access to a common
resource. Think about your desktop computer’s disk system and what
would happen if there were no coordination between the applications
that needed to read and write data there. Since LabVIEW supports
multithreaded execution of VIs, its execution system uses semaphores
internally to avoid clashes when VIs need to use shared code. Where
will you use semaphores?
Figure 6.11 Semaphores provide a way to protect critical sections of code that can’t tolerate simultaneous
access from parallel calling VIs.
sources and then analyze it. In this solution, there are four parallel
While Loops that could just as well reside in separate VIs. (Leaving
everything on one diagram indicates a much simpler way to solve this
problem: Just put the two Read Data VIs in a single loop, and wire
them up to the analysis section. Problem solved. Always look for the
easiest solution!)
Starting at the top of the diagram, While Loop A is responsible for
stopping the program. When that loop terminates, it destroys the
rendezvous, which then stops the other parallel loops. Loops B and
C asynchronously acquire data and store their respective measure-
ments in global variables. Loop D reads the two global values and then
does the analysis and display. The Wait On Rendezvous VIs suspend
execution of all three loops until all are suspended. Note that the Cre-
ate Rendezvous VI has a size parameter of 3 to arrange this condi-
tion. When the rendezvous condition is finally satisfied, loops B and C
begin acquisition while loop D computes the result from the previous
acquisition.
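Python's threading.Barrier captures the same idea as a rendezvous with a size of 3: nobody proceeds until all three parties have arrived. This sketch simplifies the example above (no pipelining of the previous acquisition), and the instrument readings are faked:

    import random
    import threading
    import time

    rendezvous = threading.Barrier(3)          # like Create Rendezvous with size = 3

    results = {}

    def acquire(name):
        results[name] = random.random()        # pretend to read an instrument
        time.sleep(random.random())            # the two loops finish at different times
        rendezvous.wait()                      # like Wait On Rendezvous

    loops = [threading.Thread(target=acquire, args=(n,)) for n in ("B", "C")]
    for t in loops:
        t.start()
    rendezvous.wait()                          # the analysis loop waits here too
    print("analyze:", results)                 # runs only after B and C have stored data
    for t in loops:
        t.join()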
There are always other solutions to a given problem. Perhaps you
could have the analysis loop trigger the acquisition VIs with occur-
rences, and then have the analysis loop wait for the results via a pair
of notifiers. The acquisition loops would fire their respective notifiers
when acquisition was complete, and the analysis loop could be arranged
so that it waited for both notifiers before doing the computation. And
there is no doubt a solution based on queues. Which way you solve
your problem is dependent upon your skill and comfort level with each
technique and the abilities of anyone else who needs to understand and
perhaps modify your program.
Chapter 7
Files
Sooner, not later, you’re going to be saving data in disk files for future
analysis. Perhaps the files will be read by LabVIEW or another applica-
tion on your computer or on another machine of different manufacture.
In any case, that data is (hopefully) important to someone, so you need
to study the techniques available in LabVIEW for getting the data on
disk reliably and without too much grief.
Before you start shoveling data into files, make sure that you under-
stand the requirements of the application(s) that will be reading your files.
Every application has preferred formats that are described in the appro-
priate manuals. If all else fails, it’s usually a safe bet to write out numbers
as ASCII text files, but even that simple format can cause problems at
import time—things including strange header information, incorrect com-
binations of carriage returns and/or line feeds, unequal column lengths, too
many columns, or wrong numeric formats. Binary files are even worse,
requiring tight specifications for both the writer and the reader. They are
much faster to write or read and more compact than text files though, so
learn to handle them as well. This chapter includes discussions of some
common formats and techniques that you can use to handle them.
Study and understand your computer’s file system. The online
LabVIEW help discusses some important details such as path names
and file reference numbers (refnums). If things really get gritty, you
can also refer to the operating system reference manuals, or one of the
many programming guides you can pick up at the bookstore.
Accessing Files
File operations are a three-step process. First, you create or open a
file. Second, you write data to the file and/or read data from the file.
Third, you close the file. When creating or opening a file, you must
Macintosh: HD80:My Data Folder:Data 123
Windows: C:\JOE\PROGS\DATA\DATA123.DAT
UNIX: /usr/johnny/labview/examples/data_123.dat
LabVIEW’s Path control (from the String and Path palette) automat-
ically checks the format of the path name that you enter and attempts
to coerce it into something valid for your operating system.
Paths can be built and parsed, just as strings can. In fact, you can
convert strings to paths and back by using the conversion functions
String To Path and Path To String. There are also several functions
in the File I/O function palette to assist you in this. Figure 7.1 shows
how the Build Path function can append a string to an existing path.
You might use this if you had a predefined directory where a file should
be created and a file name determined by your program. Given a valid
path, you can also parse off the last item in the path by using the Strip
Figure 7.1 LabVIEW has functions to build, parse, and compare path names. These are useful when you are programmatically manipulating file names.
Path function. In Figure 7.1, the path is parsed until you get an empty
path. Constants, such as Empty Path and Not A Path, are useful for
evaluating the contents of a path name. The Default Directory con-
stant leads you to a location in the file system specified through the
LabVIEW preferences. You can also obtain a path name from the user
by calling the File Dialog function. This function allows you to prompt
the user and determine whether the chosen file exists.
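The Build Path and Strip Path operations map naturally onto ordinary path handling in other languages. Here's a small Python sketch using pathlib; the directory and file name are just examples:

    from pathlib import PurePosixPath

    base = PurePosixPath("/usr/johnny/labview/examples")   # a predefined data directory
    data_file = base / "data_123.dat"                      # like Build Path: append a name
    print(data_file)

    # Like calling Strip Path repeatedly until the path is used up.
    p = data_file
    while p != p.parent:
        print("last item:", p.name)
        p = p.parent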
Once you have selected a valid path name, you can use the Open/
Create/Replace File function either to create a file or to gain access
to an existing one. This function returns a file refnum, a magic num-
ber that LabVIEW uses internally to keep track of the file’s status. This
refnum, rather than the path name, is then passed to the other file I/O
functions. When the file is finally closed, the refnum no longer has any
meaning, and any further attempt at using it will result in an error.
You can navigate and directly manipulate your computer’s file system
from within LabVIEW. In the Advanced File palette are several useful
functions, such as File/Directory Info, List Folder, Copy, Move,
and Delete. This is a pretty comprehensive set of tools, although the
methods by which you combine them and manipulate the data (which
is mostly strings) can be quite complex.
File Types
LabVIEW’s file I/O functions can read and write virtually any file for-
mat. The three most common formats are
ASCII text files are readable in almost any application and are the
closest thing to a universal interchange format available at this time.
Your data must be formatted into strings before writing. The resulting
file can be viewed with a word processor and printed, so it makes sense
for report generation problems as well. A parsing process, as described
in the strings section, must be used to recover data values after read-
ing a text file. The disadvantages of text files are that all this conver-
sion takes extra time and the files tend to be somewhat bulky when
used to store numeric data.
Binary-format byte stream files typically contain a bit-for-bit
image of the data that resides in your computer’s memory. They cannot
be viewed by word processors, nor can they be read by any program with-
out detailed knowledge of the files’ format. The advantage of a binary
file is that little or no data conversion is required during read and write
Figure 7.2 uses several of the built-in file I/O functions to perform
these steps. First, the user receives a dialog box requesting the name
and location of a new file. The Open/Create/Replace File function
has an operation mode input that restricts the possible selections to
existing files, nonexisting files, or both. In this example, we wanted to
create a new data file, so we set the mode to create, which forces the
user to choose a new name. The function creates the desired file on
the specified path, and returns a refnum for use by the rest of the file
I/O functions. An output from the Open/Create/Replace File function,
Figure 7.2 The built-in file I/O functions are used to create, open, and write text data to a
file. The Case structure takes care of user cancellations in the file dialog.
Figure 7.3 A simple data logger that uses an open refnum for all the file management. In the loop, data is
acquired, formatted into a suitable string, and appended to the file. Simple, eh?
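The open-append-close pattern of Figure 7.3 looks like this in a short Python sketch. The file name, the tab-delimited format, and the fake readings are all assumptions:

    import time

    with open("data123.txt", "w") as log:          # open (create) the file once
        log.write("time\tvalue\n")                 # a simple one-line header
        for i in range(5):
            reading = 20.0 + 0.1 * i               # stand-in for an acquired value
            log.write("%.3f\t%.3f\n" % (time.time(), reading))
            time.sleep(0.1)
    # the file is closed automatically when the "with" block ends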
Figure 7.5 Use the Read Binary function to read a text file from disk.
Figure 7.5 shows an example that is symmetric with Figure 7.2. The
File Dialog function provides a dialog box that only lets the user pick
an existing file. This is configured from File Dialog’s pop-up configura-
tion menu which is accessible from the block diagram. Open File opens
the file for read-only access and passes the refnum to Read Binary.
The Read Binary function can read any number of bytes wired to the
count input. By default, Read Binary starts reading at the beginning
of the file, and count is –1 (it reads all data). The default data type
returned is string. For the simple case where you wish to read part of
the file, just wire a constant to count and set it to the expected amount
of data. We’re using Read Binary in this example instead of Read Text
because the default behavior of Read Text is to stop at the first EOL.
Figure 7.6 shows several ways to use the Read Text function.
In the Read Text pop-up menu you select whether to read bytes or
lines, and whether or not to convert EOL (Windows = CR/LF, Mac OS =
CR, Linux = LF) characters. The default behavior is to convert EOL
and read lines. Recognizing lines of text is a handy feature, but may not
be what you expect when Read Text returns only the first line of your
file. Figure 7.6A shows Read Text configured not to read lines of text.
Setting count equal to –1 reads an entire file. In Figure 7.6B, Read Text
is configured to read lines and returns the number of lines equal to
count as an array. In Figure 7.6C, Read Text increments through a text
file one line at a time. This can be especially useful when you are incre-
menting through a text file containing an automated test sequence.
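For comparison, here's how read-everything and read-line-by-line look in Python, using the small text file written earlier (the name data123.txt is an assumption; any text file will do). Opening in text mode converts CR, LF, or CR/LF endings to a plain "\n", much like Read Text's EOL conversion:

    with open("data123.txt", "r") as f:
        whole_file = f.read()                  # like Read Text with count = -1

    with open("data123.txt", "r") as f:
        for line in f:                         # like reading one line per call
            print("line:", line.rstrip("\n"))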
Another useful file utility VI is Read From Spreadsheet File,
shown in simplified form in Figure 7.7. This VI loads one or more lines
of text data from a file and interprets it as a 2D array of numbers. You
can call it with number of rows set to 1 to read just one line of data at
a time, in which case you can use the first row output (a 1D array). If
the rows and columns need to be exchanged, set transpose to True. By
default, it assumes that the tab character delimits values on each line.
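A bare-bones version of that interpretation, tab delimiter and all, takes only a few lines of Python. The helper name, test file, and transpose flag are invented for the sketch:

    def read_spreadsheet(path, transpose=False):
        # Parse a tab-delimited text file into a 2D list of floats.
        with open(path, "r") as f:
            rows = [[float(x) for x in line.split("\t")]
                    for line in f if line.strip()]
        if transpose:
            rows = [list(col) for col in zip(*rows)]   # swap rows and columns
        return rows

    with open("matrix.txt", "w") as f:                 # tiny 3-row, 2-column test file
        f.write("1.0\t2.0\n3.0\t4.0\n5.0\t6.0\n")
    print(read_spreadsheet("matrix.txt"))              # [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
    print(read_spreadsheet("matrix.txt", transpose=True))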
Figure 7.6 The Read Text function can read bytes or lines from a text file
depending on its configuration. (A) It reads bytes (−1 = all). (B) It reads lines.
(C) It reads a file one line at a time.
Figure 7.7 The file utility VI Read From Spreadsheet File.vi interprets a text file containing
rows and columns of numbers as a matrix (2D array). This is a simplified diagram.
Figure 7.8 The (A) Format Into File and (B) Scan
From File functions are great for accessing files con-
taining text items with simple formatting.
Binary Files
The main reasons for using binary files as opposed to ASCII text are
that (1) they are faster for both reading and writing operations and (2)
they are generally smaller. They are faster because they are smaller
and because no data conversion needs to be performed (see the section
on conversions). Instead, an image of the data in memory is copied byte
for byte out to the disk and then back in again when it is read. Convert-
ing to ASCII also requires more bytes to maintain numerical precision.
For example, a single-precision floating-point number (SGL; 4 bytes)
has 7 significant figures. Add to that the exponent field and some ±
signs, and you need about 13 characters, plus a delimiter, to represent
To make a binary file readable, you have several options. First, you
can plan to read the file back in LabVIEW because the person who
wrote it should darned well be able to read it. Within LabVIEW,
the data could then be translated and written to another file format
or just analyzed right there. Second, you can write the file in a for-
mat specified by another application. Third, you can work closely with
the programmer on another application or use an application with the
ability to import arbitrary binary formatted files. Applications such as
Igor, S, IDL, MATLAB, and Diadem do a credible job of importing arbi-
trary binary files. They still require full information about the file for-
mat, however. These are several common data organization techniques
we’ve seen used with binary files:
1. One file, with a header block at the start that contains indexing
information, such as the number of channels, data offsets, and data
lengths. This is used by many commercial graphing and analysis
programs.
Figure 7.10 This subVI appends an array of SGL floats to a binary file and updates the sample count, an
I32 integer, located at the beginning of the file. (Error handling is omitted for clarity.)
Figure 7.12 This VI reads random blocks of data from a binary file written by the LabVIEW DAQ example VI,
Cont Acq to File (scaled). I couldn’t make it much simpler than this.
If you do not wire to type, then the default type, string, is used, and
your data is returned as a string. Next, Read File is called again to read
the data, which is in SGL format. Because count is wired, Read File
returns an array of the specified type. No special positioning of the file
marker was required in this example because everything was in order.
Things are more complex when you have multiple data arrays and/or
headers in one file.
Random access reading is, as we said earlier, a bit tricky because
you have to keep track of the file mark, and that’s a function of the
data in the file. To illustrate, we created a fairly simple VI that allows
you to read random blocks of data from a file created by the LabVIEW
DAQ example VI, Cont Acq to File (scaled), located in the directory
examples/daq/analogin/strmdisk.llb/. The binary file created by that
example contains an I32 header whose value is the number of channels,
followed by scans of SGL-format data. A scan consists of one value for
each channel. The objective is to read and display several scans (we’re
calling this a block of data), starting at an arbitrary location in the file.
Figure 7.12 is a solution (not the best, most feature-packed solution, we
assure you, but it’s simple enough to explain). Here’s how it works.
feature of this file VI: It understands the concept of rows and col-
umns when reading 2D arrays. You can see the row and col values
on the diagram bundled into the count input of Read File.
4. The data is loaded and displayed on a waveform graph. You must
pop up on the graph and select Transpose Array; otherwise, the x
and y axes are swapped.
5. A small While Loop runs until the user clicks the Update button.
When the program is done, the file is closed and errors are checked.
As you can see, even this simple example is rather tricky. That’s the
way it always is with binary files.
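To make the bookkeeping concrete, here's a Python sketch of the same file layout: an I32 channel count followed by SGL (float32) scans, read back starting at an arbitrary scan. The byte order (little-endian here) and the file name are assumptions; check the real DAQ example file's format before relying on this:

    import struct

    channels, scans = 2, 100
    with open("acq.bin", "wb") as f:                       # write a demo file
        f.write(struct.pack("<i", channels))               # I32 header: channel count
        for s in range(scans):
            for ch in range(channels):
                f.write(struct.pack("<f", s + 0.1 * ch))   # one SGL per channel per scan

    start_scan, block_scans = 40, 5                        # the "random" block to read
    with open("acq.bin", "rb") as f:
        (n_ch,) = struct.unpack("<i", f.read(4))           # read the header
        f.seek(4 + start_scan * n_ch * 4)                  # position the file mark (4 bytes per SGL)
        raw = f.read(block_scans * n_ch * 4)
        block = struct.unpack("<%df" % (block_scans * n_ch), raw)
    print(n_ch, block[:4])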
Figure 7.13 Writing data to a datalog file. The data type in this case is a cluster containing a string and a
numeric array. Records are appended every second. (Error handling is removed for clarity.)
Figure 7.14 This VI reads a single datalog of the format written by the previous example.
Note the value of creating a typedef control to set the datalog type. (Error handling is
removed for clarity.)
Figure 7.15 This code fragment reads all the records from a
datalog file.
Chapter 8
Building an Application
Many applications we’ve looked at over the years seemed as though
they had just happened. Be it Fortran, Pascal, C, or LabVIEW, it was
as if the programmer had no real goals in mind—no sense of mis-
sion or understanding of the big picture. There was no planning. Gary
actually had to use an old data acquisition system that started life
as a hardware test program. One of the technicians sat down to test
some new I/O interface hardware one day, so he wrote a simple pro-
gram to collect measurements. Someone saw what was going on and
asked him if he could store the data on disk. He hammered away for
a few days, and, by golly, it worked. The same scenario was repeated
over and over for several years, culminating in a full-featured . . .
mess. It collected data, but nobody could understand the program,
let alone modify or maintain it. By default, this contrivance became
a standard data acquisition system for a whole bunch of small labs.
It took years to finally replace it, mainly because it was a daunting
task. But LabVIEW made it easy. These days, even though we start by
writing programs in LabVIEW, the same old scenario keeps repeating
itself: Hack something together, then try to fix it up, and document it
sometime later.
Haphazard programming need not be the rule. Computer scientists
have come up with an arsenal of program design, analysis, and quality
management techniques over the years, all based on common sense
and all applicable to LabVIEW. If you happen to be familiar with such
formalisms as structured design, by all means use those methods. But
you don’t need to be a computer scientist to design a quality applica-
tion. Rather, you need to think ahead, analyze the problem, and gener-
ally be methodical. This chapter should help you see the big picture
and design a better application. Here are the steps we use to build a
LabVIEW application:
Figure 8.1 The vacuum brazing station: a vacuum bell jar and furnace monitored by a Type S thermocouple, with a Eurotherm temperature controller commanding a triac power controller (4-20 mA) that drives the 208 VAC, 30 A heater.
Gary didn’t know what vacuum brazing was until he spent a couple of
hours with Larry. Here’s what he learned:
The principle of vacuum brazing is simple (Figure 8.1). Brazing involves
joining two materials (usually, but not always, metal) by melting a filler
metal (such as brass) and allowing it to wet the materials to be joined. Sol-
dering (like we do in electronics) is similar. The idea is not to melt the mate-
rials being joined, and that means carefully controlling the temperature.
Another complication is that the base metals sometimes react with the
air or other contaminants at high temperatures, forming an impervious
oxide layer that inhibits joining. Larry’s solution is to put everything in
a vacuum chamber where there is no oxygen, or anything else for that
matter. Thus, we call it vacuum brazing. The VBL has five similar electric
furnaces, each with its own vacuum bell jar. The heater is controlled by
a Eurotherm model 847 digital controller, which measures the tempera-
ture with a thermocouple and adjusts the power applied to the big electric
heater until the measured temperature matches the set point that Larry
has entered.
Gather specifications
As you interview the customers, start making a list of basic functional
requirements and specifications. Watch out for the old standoff
where the user keeps asking what your proposed system can provide.
You don’t even have a proposed system yet! Just concentrate on get-
ting the real needs on paper. You can negotiate practical limitations
later.
Ask probing questions that take into consideration his or her actual
understanding of the problem, his or her knowledge of instrumenta-
tion, and LabVIEW’s capabilities. If there is an existing computer
system (not necessarily in the same lab, but one with a similar pur-
pose), use that as a starting point. As the project progresses, keep
referring to this original list to make sure that everything has been
addressed. (It’s amazing, but every time I think I’m finished, I check
the requirements document and find out I’ve missed at least one
important feature.)
Review the specifications with the user when you think you under-
stand the problem. In formal software engineering, the requirements
document may adhere to some standards, such as those published by
IEEE, which can be quite involved. In that case, you will have formal
design reviews where the customer and perhaps some outside experts
will review your proposed system design. Although it is beyond
the scope of this chapter, here are some important requirements
to consider:
I divided the major requirements for the VBL project into must-haves and
future additions, which prioritized the jobs nicely. Larry was really short
of funding on this job, so the future additions are low priority.
Must-Haves
1. All five systems may operate simultaneously.
2. Procedure: ramp up to a soak setpoint, then go to manual control so
user can tweak temperature up to melting point. Resume ramp on
command.
3. LabVIEW will generate the temperature ramp profiles. User needs a
nice way to enter parameters and maintain recipes.
4. Strip-chart indicators for real-time temperature trends.
5. Notify user (beep and/or dialog) when the soak temp is reached.
6. Alarm (beep and dialog) when temperature exceeds upper limit in
controller.
7. Make post-run trend plots to paste into Microsoft Word report
document.
Future Additions
1. Control vacuum pump-down controller. This will require many RS-232
ports.
2. Trend vacuum measurement. This will require even more RS-232
ports.
Collect details on any special equipment proposed for use in the sys-
tem. Technical manuals are needed to figure out how to hook up the
signal lines and will also give you signal specifications. For all but the
simplest instruments there may be a significant driver development
Figure 8.3 A general-purpose speed tester VI that is used when you need to
check the throughput or iteration rate of a subVI or segment of code. This
example does 10 floating-point operations.
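The idea behind the speed tester translates directly to text languages: time a large number of iterations and divide. A minimal Python sketch, with the iteration count and the sample expression chosen arbitrarily:

    import time

    def speed_test(func, iterations=100000):
        # Rough throughput check for a small piece of code.
        t0 = time.perf_counter()
        for _ in range(iterations):
            func()
        elapsed = time.perf_counter() - t0
        print("%d iterations in %.3f s (%.0f calls/s)"
              % (iterations, elapsed, iterations / elapsed))

    speed_test(lambda: 1.234 * 5.678 + 9.0)    # a few floating-point operations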
document, you can paste images of your mock front panels into the
document. That makes your intentions clearer.
Panel possibilities
Remember that you have an arsenal of graphical controls available;
don’t just use simple numerics where a slider or ring control would be
more intuitive. Import pictures from a drawing application and paste
them into Pict Ring controls or as states in boolean controls, and so
forth, as shown in Figure 8.4. Color is fully supported (except in this
book). Deal with indicators the same way. Use the Control Editor
to do detailed customization. The Control Editor is accessed by first
selecting a control on the panel and then choosing Customize Con-
trol from the Edit menu. You can first resize and color all the parts of
any control and paste in pictures for any part and then save the control
under a chosen name for later reuse. Be creative, but always try to
choose the appropriate graphic design, not just the one with the most
glitz. After all, the operator needs information, not entertainment.
Booleans with pictures pasted in are especially useful as indicators.
For instance, if the True state contains a warning message and the
False state has both the foreground and background color set to trans-
parent (T, an invisible pen pattern in the color palette when you are
using the coloring tool), then the warning message will appear as if by
magic when the state is set to True. Similarly, an item can appear to
Figure 8.4 Controls can be highly customized with various text styles and imported pictures.
Here are four sensible numeric controls that could be used for the same function. Pick your
favorite.
move, change color, or change size based on its state through the use of
these pasted-in pictures. The built-in LabVIEW controls and indicators
can be made to move or change the size programmatically through the
use of Property nodes.
Real-time trending is best handled by the various charts, which are
designed to update displayed data one point or one buffer at a time.
Arrays of data acquired periodically are best displayed on a graph,
which repaints its entire display when updated. Note that graphs and
charts have many features that you can customize, such as axis labels,
grids, and colors. If you have a special data display problem that is
beyond the capability of an ordinary graph or chart, also consider the
Picture Controls. They allow you to draw just about anything in a
rectangular window placed on the front panel. There are other, more
advanced data display toolkits available from third parties. Chapter 20,
“Data Visualization, Imaging, and Sound,” discusses aspects of graphi-
cal display in detail.
Simple applications may be handled with a single panel, but more
complex situations call for multiple VIs to avoid overcrowding the
screen. Subdivide the controls and indicators according to the function
or mode of operation. See if there is a way to group them such that
the user can press a button that activates a subVI that is set to Show
front panel when called (part of the VI Properties menu, available
by popping up on the icon). The subVI opens and presents the user with
some logically related information and controls, and it stays around
until an Exit button is pressed, after which its window closes—just as
a dialog box does, but it is much more versatile. Look at the Window
options in the VI Properties menu. It contains many useful options
for customizing the look of the VI window. Functions for configuration
management, special control modes, alarm reporting, and alterna-
tive data presentations are all candidates for these dynamic windows.
Examples of dynamic windows and programming tricks illustrating
their use are discussed in Chapter 18, “Process Control Applications.”
For demonstration purposes, these dynamic windows can be quickly
prototyped and made to operate in a realistic manner.
Your next step is to show your mock-up panels to the users and col-
lect their comments. If there are only one or two people who need to
see the demonstration, just gather them around your computer and do
it live. Animation really helps the users visualize the end result. Also,
you can edit the panels while they watch (see Figure 8.5).
If the audience is bigger than your office can handle, try using a
projection screen for your computer down in the conference room. LCD
panels and red-green-blue (RGB) projectors are widely available and
offer reasonable quality. That way, you can still do the live demonstra-
tion. As a backup, take screen shots of the LabVIEW panels that you
(Mock-up front panel: furnace status indicators showing Furnace 3 Soaking, Furnace 4 Cooling, and Furnace 5 Manual, plus a temperature and setpoint trend graph.)
Figure 8.5 Can you make that graph a little taller? “No sweat,” I replied, while calmly resizing
the graph.
application that you can update and extend on short notice. Many of
the architectures shown in Chapter 3, “Controlling Program Flow,” are
based on such practical needs and are indeed flexible and extensible.
If you’re an experienced programmer, be sure to use your hard-
learned experience. After all, LabVIEW is a programming language. All
the sophisticated tools of structured analysis can be applied (if you’re
comfortable with that sort of thing), or you can just draw pictures and
block diagrams until the functional requirements seem to match. This
world is too full of overly rigid people and theorists producing more
heat than light; let’s get the job done!
Ask a Wizard
To help you get started building data acquisition applications, National
Instruments added a DAQ Assistant to LabVIEW. The Wizard leads
you through a list of questions regarding your hardware and applica-
tion in the same way you might interview your customer. Then it places
a DAQ solution that is ready to run. From that point, you continue by
customizing the user interface and adding special features. As a design
assistant, it’s particularly valuable for DAQ-based applications, but it
can also suggest basic program designs that are useful for ordinary
applications. As time goes on, LabVIEW Wizards will expand to encom-
pass a broader range of programs.
Top-down or bottom-up?
There are two classic ways to attack a programming problem: from the
top down and from the bottom up. And everyone has an opinion—an
opinion that may well change from one project to the next—on which
way is better.
Top-down structured design begins with the big picture: “I’m going
to control this airplane; I’ll start with the cockpit layout.” When you
have a big project to tackle, top-down works most naturally. LabVIEW
has a big advantage over other languages when it comes to top-down
design: It’s easy to start with the final user interface and then animate
it. A top-down LabVIEW design implies the creation of dummy subVIs,
each with a definite purpose and interrelationship to adjacent subVIs,
callers, and callees, but at first without any programming. You define
the front panel objects, the data types, and the connector pane and its
terminal layout. When the hierarchy is complete, then you start filling
in the code. This method offers great flexibility when you have teams
of programmers because you tend to break the project into parts (VIs)
that you can assign to different people.
Bottom-up structured design begins by solving those difficult low-
level bit manipulation, number-crunching, and timing problems right
from the start. Writing an instrument driver tends to be this way. You
can’t do anything until you know how to pass messages back and forth
to the instrument, and that implies programming at the lowest level
as step 1. Each of these lower-level subVIs can be written in complete
and final form and tested as a stand-alone program. Your only other
concern is that the right kind of inputs and outputs be available to link
with the calling VIs.
Keeping track of inputs and outputs implies the creation of a data
dictionary, a hierarchical table in which you list important data types
that need to be passed among various subVIs. Data dictionaries are
mandatory when you are using formal software design methods, but
are quite useful on any project that involves more than a few VIs. It
is easy to maintain a data dictionary as a word processing document
that is always open in the background while you are doing the devel-
opment. A spreadsheet or database might be even better; if you use
a computer-aided software engineering (CASE) tool, it will of course
include a data dictionary utility. List the controls and indicators by
name, thus keeping their names consistent throughout the VI hier-
archy. You might even paste in screen shots of the actual front panel
items as a reminder of what they contain. This is also a chance to get
a head start on documentation, which is always easier to write at the
moment you’re involved with the nuts and bolts of the problem. Many
items that make it to the dictionary are saved as typedefs or strict
typedefs that you define in the Control Editor. This saves countless
hours of editing when, for instance, a cluster needs just one more
element.
Don’t be afraid to use both top-down and bottom-up techniques at
once, thus ending up in the middle, if that feels right. Indeed, you need
to know something about the I/O hardware and how the drivers work
before you can possibly link the raw data to the final data display.
Modularity
Break the problem into modular pieces that you can understand. This
is the divide-and-conquer technique. The modules in this case are
subVIs. Each subVI handles a specific task—a function or operation
that needs to be performed. Link all the tasks together, and an appli-
cation is born, as shown in Figure 8.6.
One of the tricks lies in knowing when to create a subVI. Don’t just
lasso a big chunk of a diagram and stick it in a subVI because you ran
out of space; that only proves a lack of forethought. Wouldn’t you know,
LabVIEW makes it easy to do: select part of your diagram, choose Cre-
ate SubVI from the Edit menu, and poof! Instant subVI, with labeled
controls and connector pane, all wired into place. Regardless of how
easy this may be, you should instead always think in terms of tasks.
Design and develop each task as if it were a stand-alone application.
That makes it easier to test and promotes reuse in other problems.
Each task, in turn, is made up of smaller tasks—the essence of top-
down hierarchical design.
Any properly designed task has a clear purpose. Think of a one-
sentence thesis statement that clearly summarizes the purpose of the
subVI: “This VI loads data from a series of transient recorders and
places the data in an output array.” (A good place to put this thesis
statement is in the VI Documentation dialog.) If you can’t write a simple
statement like that, you may be creating a catchall subVI. Include the
inputs and outputs and the desired behavior in your thesis statement,
and you have a VI specification. This enables a group of programmers
to work on a project while allowing each programmer to write to the
specification using her or his own style. As an example of how power-
ful this is, a recent coding challenge posted to the LabVIEW Developer
Zone (www.ni.com) had more than 100 submissions; although each
submission was different, they all did essentially the same thing. More
than 100 programmers, and all their code was interchangeable because
they all wrote to the same specification. Not only does it make team
development possible, but also maintenance is easier.
Consider the reusability of the subVIs you create. Can the function be
used in several locations in your program? If so, you definitely have a
reusable module, saving disk space and memory. If the subVI require-
ments are almost identical in several locations, it’s probably worth
writing it in such a way that it becomes a universal solution—perhaps
it just needs a mode control. On the other hand, excessive modularity
can lead to inefficiency because each subVI adds calling overhead at
Figure 8.7 LabVIEW design patterns provide a familiar starting point for many applica-
tions. You can access design patterns from the File >> New . . . dialog.
I/O drivers at the lowest level. You could also draw this as a nested list,
much as file systems are often diagrammed.
Each node or item in the hierarchy represents a task, which then
becomes a subVI. Use your list of functional requirements as a check-
list to see that each feature has a home somewhere in the hierarchy.
As you add features, you can keep adding nodes to this main sketch, or
make separate sketches for each major task. Modularity again! There
is no need to worry about the programming details inside a low-level
task; just make sure you know what it needs to do in a general way.
Remember that the hierarchy is a design tool that should be referred
to and updated continuously as you write your programs.
The ramp-and-soak controller (one of the major subVIs in VBL) looked
simple enough at first, but turned out to be one of the trickiest routines
I’ve ever written. Without a healthy respect for modularity, I would have
had a much more difficult time. The problems stemmed from interactions
between various modes of operation, such as manual control, ramping, and
soaking. My hierarchy sketch was a real mess by the time I was through,
but it was the only easy way to track those modes and interactions.
Figure 8.9 Sketch for the main VI used in VBL. Not very
detailed at this point, but it sure gets the point across.
Pseudocoding
If you are an experienced programmer, you may find that it’s easier to
write some parts of your program in a procedural language or pseudo-
code. Of course, you should do this only if it comes naturally. Trans-
lating Pascalese to LabVIEW may or may not be obvious or efficient,
but Gary sometimes uses this technique instead of LabVIEW sketch-
ing when he has a really tough numerical algorithm to hammer out.
Figure 8.10 shows a likely mapping between some example pseudocode
and a LabVIEW data structure, and Figure 8.11 shows the associated
LabVIEW diagram that is a translation of the same code.
There’s something bothersome about this translation process. The
folks who thought up LabVIEW in the first place were trying to free
us from the necessity of writing procedural code with all its unforgiv-
ing syntactical rules. Normally, LabVIEW makes complicated things
simple, but sometimes it also makes simple things complicated. Mak-
ing an oscilloscope into a spectrum analyzer using virtual instruments
is really easy, but making a character-by-character string parser is a
real mess. In such cases, choose the best tool for the job at hand and be happy.
To design the difficult ramp-and-soak control logic in VBL, I found that
a pseudocode approach was easier. I didn’t have to look far to recover the
example that appears in Figure 8.12; it was pasted into a diagram string
constant in the subVI called Recipe Control Logic. I won’t bother to repro-
duce the entire VI diagram here because it has too many nested Case
structures to print in a reasonable space.
Figure 8.11 The same pseudocode translated into a LabVIEW diagram, which acts on
the data structure from the previous figure. If this is your cup of tea, by all means use
this approach.
Figure 8.14 A Range Finder utility VI. This one looks through an array of permissible values and
finds the one that is greater than or equal to the value you wish to find. It returns that value and
the index into the array.
String controls and tables are particularly obnoxious because the user
can (and will) type just about anything into one. Think about the conse-
quences of this for every string in your program. You may need to apply
a filter of some type to remove or replace unacceptable characters. For
instance, other applications may not like a data file in which the name
of a signal has embedded blanks or nonalphanumeric characters.
One nefarious character is the carriage return or end-of-line (EOL)
character for your platform. Users sometimes hit the Return key rather
than the Enter key to complete the entry of text into a LabVIEW string
control, thus appending EOL to the string. Note that training users
will not solve the problem! You have two options. In the LabVIEW
Preferences, under Front Panel, there is an option to treat Return the
same as Enter in string controls. If selected, the two keys act the same
and no EOL character is stored. The other alternative is to write a
subVI to search for EOL and kill it. The Preferences method is nice, but
what happens when you install your VI on another machine? Will that
preference be properly set? And what about all the other preferences
that you so carefully tuned for your application? If a setting is very
important, you must explicitly update the preferences when installing
your VI.
Controls should have reasonable default values. It’s nice to have a VI
that does not fail when run as-opened with default values. After enter-
ing the values you desire, you select Make Current Value Default
from the control’s pop-up or from the Edit menu. Remember to show
the default value and units in parentheses in the control’s label so that
it appears in the Help window. It’s nice to know that the default value is
1000, and whether it’s milliseconds or seconds. But don’t make the data
in graphs and arrays into default values (unless it’s really required);
that just wastes disk space when the VI is saved.
Handling errors
Another aspect of robust programs is the way they report run-time
errors. Commercial software contains an incredible amount of code
devoted to error handling, so don’t feel silly when one-half of your dia-
gram is devoted to testing values and generating error clusters or dia-
logs. Most problems occur during the initial setup of a system, but may
rarely occur thereafter. It may seem wasteful to have so much code
lying around that is used only 0.01 percent of the time. But it’s in the
start-up phase that users need the most information about what is
going wrong. Put in the error routines and leave them there. If you want,
add a boolean control to turn off dialog generation. Realize that dialog
boxes tie up the VI from which they are called until the user responds.
This is not a good mode of operation for an unattended system, but you
may still want the option of viewing dialogs during a troubleshooting
session.
There is a distinction between error handling and error recovery.
Error handling provides a graceful method to report an error condition,
and to allow the operator to intervene, without causing more errors. A
simple message “No scope at GPIB address 5” will go a long way toward
getting the problem solved. Error recovery involves programmatically
making decisions about the error and trying to correct it. Serious prob-
lems can occur when error recovery goes awry, so think carefully before
making your application so smart it “fixes” itself. The best applications
will report any and all error messages to guarantee system reliabil-
ity and data quality. Your job during the development phase is not
to ignore errors, but to eliminate the conditions that caused an error
when it crops up.
The simplest and recommended method to implement error handling
in your application is to use an error I/O cluster to connect one VI to
the next. The error I/O cluster works as a common dataflow thread, as
you have already seen in many examples in this book. The error cluster
enforces dataflow and allows VIs downstream to take appropriate action.
Figure 8.15 Test the error cluster inside each subVI. If there is an input error,
no work is performed by the subVI and the error is passed through without
modification. The Scan From String function does this internally, as do almost
all LabVIEW functions.
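For readers more comfortable with text languages, the pass-through
behavior can be written out as a Python analogy. The tuple stands in
for the LabVIEW error cluster (status, code, source); the scaling
operation and the error code are invented for illustration.

    def scale_reading(raw, error_in):
        # error_in / error_out mimic the error cluster: (status, code, source)
        status, code, source = error_in
        if status:
            # An upstream error exists: do no work, pass the error through.
            return None, error_in
        try:
            result = float(raw) * 0.01          # the "real work" of this subVI
            return result, (False, 0, "")
        except ValueError:
            # Hypothetical user-defined code in the 5000-9999 range
            return None, (True, 5001, "scale_reading: bad input string")

    value, error_out = scale_reading("1234", (False, 0, ""))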
Figure 8.16 Error handling in producer/consumer design pattern. Any error terminates a loop. Is
that what you want in a robust application?
The messages are put into a queue (enqueued) by the producer and
removed from the queue (dequeued) by the bottom consumer loop.
Stopping the producer loop releases the queue and stops the consumer
loop when the dequeue function generates an error. If any subVIs in the
consumer loop were included in the error chain between the dequeue
function and the stop condition, it would programmatically alter the
behavior by forcing the loop to terminate on any error. The queued
message handler is a common design pattern for programming user-
interface applications; however, because there is only one-way commu-
nication between the loops, the producer loop will never know if the
consumer loop terminated—it will just go on putting messages into the
queue. Clearly this is not a robust solution!
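A rough Python analogue of this queued message handler is shown
below. The message names are invented, and because a text-language
queue has no "release" operation, a sentinel value stands in for the
dequeue error that stops the consumer in LabVIEW.

    import queue, threading

    msgs = queue.Queue()

    def producer():
        for command in ["configure", "acquire", "save"]:
            msgs.put(command)        # enqueue messages for the consumer
        msgs.put(None)               # stands in for releasing the queue: the
                                     # dequeue "error" that stops the consumer

    def consumer():
        while True:
            command = msgs.get()
            if command is None:      # the stop condition
                break
            print("handling", command)  # one-way: the producer never learns
                                        # whether this loop is still alive

    t = threading.Thread(target=consumer)
    t.start()
    producer()
    t.join()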
One flexible error-handling scheme we’ve used, and seen used by
others, is based on the addition of an extra top-level error-handling
loop to which all error reports are passed via a global queue. Any VI
is permitted to deposit an error in the queue at any time, which causes
the error handler to wake up and evaluate that error. Figure 8.17
shows the top-level error-handling loop. The reference to Error Queue
is stored in ErrorQueue.vi (Figure 8.18). Any incoming error placed
into the queue is handled by the Simple Error Handler.vi. Figure 8.19
shows one way to put the code into action.
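In text form, the scheme boils down to one shared queue and one
handler loop. The sketch below uses Python threads; the queue plays
the role of ErrorQueue.vi, and the names and the example error code
are our own.

    import queue, threading

    error_queue = queue.Queue()          # plays the role of ErrorQueue.vi

    def report_error(code, source):
        # Any routine may deposit an error at any time.
        error_queue.put((True, code, source))

    def error_handler_loop():
        # Top-level loop: sleeps until an error arrives, then reports it.
        while True:
            status, code, source = error_queue.get()
            # Stand-in for the Simple Error Handler
            print("Error %d reported by %s" % (code, source))

    threading.Thread(target=error_handler_loop, daemon=True).start()
    report_error(5002, "Read Scope Data.vi")   # hypothetical code and caller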
Error loop handles all errors. The Error Queue reference is stored in
Figure 8.17
ErrorQueue.vi (Figure 8.18).
Figure 8.19 ErrorQueue.vi and the error-handling loop in action. All errors are
reported via the Error Queue.
Figure 8.20 The event structure responds to front panel events and programmatic events. Dynamic, user-defined
events pass errors from the consumer loop back to the event loop.
Reporting all the errors via a global queue has a lot of advantages,
but we have to add an extra dequeue loop and don’t have a way to
trigger the event loop. Figure 8.20 shows an advanced technique using
dynamic events to pass a user-defined event to the event loop. Using
a single event loop to listen to front panel user events and program-
matic user-defined events enables two-way communications between
the loops. Figure 8.21 shows the Error Event.VI. It is a functional
global that retains the reference to the dynamic event in a shift regis-
ter. Any incoming errors fire the user event and pass the error cluster
to the Simple Error Handler.vi in the Event Loop.
Whichever error-handling method you put into place needs to be
robust and flexible enough to grow with your application. Above all,
the requirements for your application should dictate what level of error
handling or error recovery you need to put in place. The methods we’ve
shown you are by no means the only way to do things. If they don’t
work for you, get creative and invent your own.
Figure 8.21 Incoming errors fire the user-defined event, and the error cluster is
passed to the waiting Event structure.
VBL had to run reliably when unattended. Therefore, I had to avoid any
error dialogs that would lock up the system. When you call one of the
dialog subVIs, the calling VI must wait for the user to dismiss the dialog
before it can continue execution. That’s the reason you should never gen-
erate an error dialog from within a driver VI; do it at the top level only.
My solution was to add a switch to enable dialog generation only for test
purposes. If a communications error occurred during normal operation,
the driver would retry the I/O operation several times. If that failed, the
main program would simply go on as if nothing had happened. This was
a judgment call based on the criticality of the operation. I decided that no
harm would come to any of the furnaces because even the worst-case error
(total communications failure) would only result in a really long constant
temperature soak. This has proven to be the correct choice after several
years of continuous operation.
Tracing execution
Many procedural languages contain a trace command or mode of oper-
ation, in which you can record a history of values for a given variable.
A string indicator on the panel of a VI will suffice when the message is
more of an informative nature. A handy technique is to use a functional
global with a reference to a queue to pass informative messages. This
is very useful when you’re not sure about the order in which events
occurred or values were computed. An idea Gary got from Dan Snider,
who wrote the Motion Toolbox for Parker Compumotor, is a trace VI
that you can insert in your program to record important events. Val-
ues can be recorded in two ways: to disk or by appending to an array.
The VI can record values each time it’s called, as a simple datalogger
does, or it can watch for changes in a particular value and record only
the changes. This implementation is not perfect, however. If you have
a race condition on a diagram in which you are writing to a global
variable at two locations almost simultaneously, the trace VI may not
be quick enough to separate the two updates. This is where a built-in
LabVIEW trace command would save the day. Perhaps a future version
of LabVIEW will have this feature.
Figure 8.22 shows the connector pane and front panel of the Trace VI.
In this example, trace information may be appended to a file or to a
string displayed on the panel of the VI. Each trace message contains a
timestamp, the value being traced, an optional message string, and the
error code. A single numeric value is traced, but you can replace it with
whatever data you may have. Edit the diagram to change the format-
ting of the string as desired.
Figure 8.22 The Trace VI can help debug some applications by recording changes
in important variables. You can record to the string indicator on the panel or to a
file.
The When to log control determines when a trace record will be
logged to the file or string. Log always means a record will be saved
every time the VI is called. Log on change means only record data if
Value has changed since the last time the VI was called. The Mode
control determines what operation to perform. You can clear the string,
choose a new file, log to the string, or log to the file. If you are routing
trace information to the string, you may open the VI to display the
trace summary. We also include a mode called nothing, which makes
the VI do exactly that. Why include such a mode? So that you can leave
the Trace VI in place on a diagram but in a disabled condition.
This VI is usually wired into a diagram to monitor a specific variable.
You can also place it in an independent While Loop, perhaps in its own
top-level VI, and have it monitor a global variable for any changes. It’s
important to remember that the Trace VI takes some time to do its job.
Appending to the string starts out by taking a few milliseconds, but it
gradually slows down as the string grows. Logging to a file will also
take some milliseconds, but it’s a constant delay. Displaying the panel
while logging to the string can seriously degrade performance. You may
want to modify the VI for higher performance by stripping out all the
extra features you don’t need. In particular, to trace a single numeric
value, all you have to do is to see if its value has changed, then append
it to an array. That’s much faster than all the string handling, but it’s
not quite as informative.
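To give a feel for the logic in a conventional language, here is a
heavily simplified Python sketch of a trace utility. The mode names
follow the description above; the class name, record format, and file
name are our own inventions.

    import time

    class Trace:
        def __init__(self):
            self.last_value = None
            self.summary = []                  # the trace summary string

        def log(self, value, message="", mode="log to string",
                when="log on change"):
            if mode == "nothing":
                return                         # left in place, but disabled
            if when == "log on change" and value == self.last_value:
                return                         # record only changes
            self.last_value = value
            record = "%s  value=%s  %s" % (time.strftime("%H:%M:%S"),
                                           value, message)
            if mode == "log to file":
                with open("trace.log", "a") as f:   # assumed file name
                    f.write(record + "\n")
            else:
                self.summary.append(record)

    trace = Trace()
    trace.log(4.00, "My notes")
    trace.log(4.00)                            # ignored: value unchanged
    trace.log(5.25, "setpoint moved")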
Checking performance
During development, you should occasionally check the performance of
your application to make sure that it meets any specifications for speed
or response time. You should also check memory usage and make sure
that your customers will have sufficient memory and CPU speed for
the application. LabVIEW has a VI performance profiler that makes it
much easier to verify speed and memory usage for a hierarchy.
Select Tools >> Profile >> Performance and Memory . . . from the
menu to open the Profile Window and start a profiling
session. There are check boxes for selecting timing statistics, timing
details, and memory usage. It’s OK to turn everything on. Click the
Start button to enable the collection of performance data, and then
run your VI and use it in a normal manner. The Profile Window must
stay open in the background to collect data. At any time, you can click
the Snapshot button to view the current statistics. Statistics are pre-
sented in a large table, with one row for each VI. You can save this data
in a tab-delimited text file by clicking the Save button.
Scroll through the table and note the almost overwhelming bounty of
information. (See the LabVIEW user’s manual for details on the mean-
ing of each value.) Click on any column header to sort the information
by that statistic. For instance, you will probably want to know which
VIs use the most memory. If you see something surprising in the list,
investigate that VI before moving on.
Final Touches
If you follow the directions discussed here, you will end up with a well-
designed LabVIEW application that meets the specifications deter-
mined early in the process. A few final chores may remain.
Test and verify your program. First, compare your application with
the original specifications one more time. Make sure that all the needs
are fulfilled—controls and indicators are all in place, data files contain
the correct formats, throughput is acceptable, and so forth. Second, you
should abuse your program in a big way. Try pressing buttons in the
wrong order and entering ridiculous values in all controls (your users
will!). When data files are involved, go out and modify, delete, or other-
wise mangle the files without LabVIEW’s knowledge, just to see what
happens. Remember that error handling is the name of the game in
robust programming. If you are operating under the auspices of formal
software quality assurance, you will have to write a detailed valida-
tion and verification plan, followed by a report of the test results. A
phenomenal amount of paperwork will be generated. Formal software
quality assurance methods and software engineering are discussed in
detail in LabVIEW Power Programming (Johnson 1998).
Train your users. Depending on the complexity of the application,
this may require anything from a few minutes to several hours. Gary
once developed a big general-purpose data acquisition package and
had about 15 people to train to use it. The package had lots of features,
and most of the users were unfamiliar with LabVIEW. His strategy
was to personally train two people at a time in front of a live worksta-
tion. The user’s manual he wrote was the only other teaching aid. After
going through the basic operations section of the manual in order dur-
ing the training session, the students could see that the manual was
a good reference when they had questions later. It took about 2 h for
each training session, much of which consisted of demonstration, with
hands-on practice by the users at each major step. Simple reinforce-
ment exercises really help students to retain what you have taught
them. Everyone really appreciated these training sessions because
they helped them get a jump start in the use of the new system.
Last, but not least, make several backup copies of all your VIs and
associated documents. Hopefully, you have been making some kind of
backups for safety’s sake all along. It’s an important habit to get into
because system failures and foul-ups on your part should never become
a disaster for your users.
VBL epilogue
It turns out that it took longer to get the equipment fabricated and installed
in Larry’s lab than it did for me to write the application. I did all the test-
ing in my office with a Eurotherm controller next to my Mac, including all
the demonstrations. Since there was plenty of time, I finished off the docu-
mentation and delivered a copy in advance—Larry was pleasantly sur-
prised. Training was spread out over several sessions, giving him plenty of
time to try out all the features. We tested the package thoroughly without
power applied to the furnaces for safety’s sake. Only a few final modifica-
tions and debugging sessions were needed before the package was ready
for real brazing runs. It’s been running constantly since the end of 1992
with excellent results. Larry is another happy LabVIEW customer.
CLAD
The CLAD (Certified LabVIEW Associate Developer) examination is a 1-h,
computer-based multiple-choice examination administered by Pearson Vue.
You can register for the CLAD online by following the links on the
National Instruments Web site.
That’s a lot of stuff to know, but it’s basic LabVIEW knowledge you
should have if you are going to call yourself a LabVIEW programmer.
The test itself is not too hard. The questions are designed to test your
knowledge, not to confuse or trick you. Here is our own example of one
type of programming problem and question found on the CLAD.
What are the values in Array after the For Loop in Figure 8.23 has
finished?
A) 1, 2, 3, 4, 5
B) 0, 1, 2, 3, 4
C) 1, 3, 6, 10, 15
D) 0, 1, 3, 6, 10
In our example you have to do a little addition to get the answer; but
if you understood For Loops, shift registers, and arrays, it was easy.
The rest of the examination is practical questions on the subjects we
covered in this book.
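Figure 8.23 is not reproduced here, but a typical diagram for this kind
of question adds the loop counter to a shift register initialized to
zero and auto-indexes the running total into the array. Under that
assumption (and it is only an assumption about the figure), the dataflow
transliterates to Python as:

    total = 0            # shift register initialized to 0
    array = []           # auto-indexed output tunnel
    for i in range(5):   # For Loop with N = 5
        total = total + i
        array.append(total)
    print(array)         # -> [0, 1, 3, 6, 10], i.e., option D for this
                         #    assumed diagram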
CLD
The Certified LabVIEW Developer examination is a 4-h test of your
LabVIEW application development skills. You take the examination
at a National Instruments–approved location on a PC with a current
version of LabVIEW installed. You are not allowed to use any toolkits
or outside LabVIEW code other than the templates and examples that
normally come with LabVIEW. All that you are given is a sealed enve-
lope with two things: a blank floppy for your completed application,
and a multipage document with application requirements and func-
tional description(s). And 240 min later you should have a completed
application that
■ Functions as specified. It is always important to have an application
do what the customer wants.
■ Conforms to LabVIEW coding style and documentation standards.
The LabVIEW Development Guidelines is available in the online
help if you need to refer to it.
■ Is created expressly for the examination using VIs and functions
available in LabVIEW.
■ Is hierarchical. All major functions should be performed in subVIs.
■ Uses a state machine. Your state machine should use a type-defined
enumerated control, a queue, or an Event structure for state
management (a rough sketch appears below).
■ Is easily scalable to more states and/or features without having to
manually update the hierarchy. Think typedef enum here.
■ Minimizes the use of excessive structures, variables (locals/globals),
and Property nodes.
■ Responds to front panel controls (within 100 ms) and does not use
100 percent of the CPU cycles.
■ Closes all opened references and handles.
■ Is well documented and includes
■ Labels on appropriate wires within the main VI and subVIs
■ Descriptions for each algorithm
■ Documentation in VI Properties >> Documentation for the main
VI and subVIs
■ Tip strip and descriptions for front panel controls and indicators
■ Labels on all constants
around this function can drive you nuts because Elapsed Time will
reset itself each time you push the subVI’s Run button.
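As a rough, text-language illustration of the queued state machine
called for above, here is a Python sketch. The state names and the
queue of states are our own choices, not part of the examination.

    from enum import Enum
    from collections import deque

    class State(Enum):               # plays the role of a type-defined enum
        INIT = 0
        IDLE = 1
        ACQUIRE = 2
        SHUTDOWN = 3

    states = deque([State.INIT])     # queue of states to execute

    while states:
        state = states.popleft()
        if state is State.INIT:
            states.append(State.IDLE)
        elif state is State.IDLE:
            states.append(State.ACQUIRE)    # e.g., a front panel event occurred
        elif state is State.ACQUIRE:
            states.append(State.SHUTDOWN)
        elif state is State.SHUTDOWN:
            break
    # Adding a feature is one new enum value and one new branch; the rest
    # of the hierarchy is untouched.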
Now that you know what to expect, we’re going to give you three
sample examinations to work through. These are retired CLD exams,
so they are what you can really expect to see when you take the exami-
nation. Remember, you only have 4 h from the time you see the exami-
nation to deliver a documented, scalable, working application that is
data-flow-driven and doesn’t use a ton of locals or Property nodes. Are
you ready? On your mark, get set, Go!
General operation. The traffic light controller provides for green, yel-
low, and red lights for each direction. In addition, a second green-
yellow-red light must be provided for the left-hand turn lane.
The periods of time the lights should be on are as follows:
Red: 4 s
Yellow: 2 s
Green: 4 s
of the procedure. For this application, the car wash switches are simu-
lated by switches on the front panel. The car wash starts when a pur-
chase switch has been selected. The execution of a cycle is denoted
by illuminating the appropriate light-emitting diode (LED). For this
application, each cycle’s duration should be set to 5 s unless otherwise
stated in the Operating Rules below.
2. Not all cycles are performed for each wash. The more expensive
wash performs more cycles. The cycles performed include
■ Cycles performed for the deluxe wash: 1 - 2 - 3 - 4 - 5
■ Cycles performed for the economy wash: 2 - 3 - 4
3. Each cycle is initiated by a switch. If the vehicle rolls off of the switch,
the wash immediately pauses and illuminates an indication to the
driver to reposition the vehicle. The amount of time that expires while
the car is out of position should not count against the wash time.
4. Underbody wash cycle: The spray heads for this wash are located
near the entrance of the car wash. The underbody spray heads are
fixed in position, and they require the vehicle to slowly drive over
them to wash the underbody of the vehicle. The underbody wash is
activated under the following conditions:
5. Main wash cycle: Main Wash Position Switch verifies the vehicle is in
the correct location for the wash cycles (cycles 2, 3, and 4) to operate.
Each cycle should last for 5 s. If the vehicle rolls off of the Main Wash
Position Switch, the wash immediately pauses and illuminates an
indication to the driver to reposition the vehicle. The amount of time
that expires while the car is out of position should not count against
the wash time. The wash resumes after the vehicle is properly posi-
tioned. Upon completion of this cycle the controller should signal mov-
ing to the next cycle by activating the Vehicle Out of Position LED.
6. Air dry cycle: The air drier is a set of fixed-position blowers located
near the exit of the car wash. They require the vehicle to drive slowly
out of the car wash through the airstream to dry the vehicle. The air
dry cycle activates on the following conditions:
If the vehicle rolls off of the Air Dry Position Switch, the wash
immediately pauses and illuminates an indication to the driver to
reposition the vehicle. The amount of time that expires while the car
is out of position should not count against the air dry time. The wash
resumes after the vehicle is properly positioned. This cycle of the
wash should last for 10 s. Upon completion of this cycle the controller
should allow the next vehicle in line to select another wash.
7. The car wash must respond to the STOP boolean and Vehicle Posi-
tion Switches within 100 ms. The STOP boolean aborts the opera-
tion of the VI.
Definitions
Zone: A perimeter area with its own security status indication.
Alarm: A zone condition indicating an intrusion into a zone.
Bypass: A state in which a zone will not indicate an alarm condition.
Placing a zone in bypass prevents nuisance alarms when mainte-
nance work is performed in the area near the perimeter fence, or
when severe weather may cause false alarms.
Tamper: A condition where the wiring of a zone has been altered
in some way. All zone wiring is supervised. That is, if an individual
attempts to circumvent the system by altering the wiring, that zone
will indicate a tamper condition.
The security system provides for one indicator light for each zone.
The color of the light provides the status information for that zone. The
colors are
■ Green: normal
■ Red: alarm
■ Blue: bypass
■ Orange: tamper
File logging. The security system should produce an ASCII disk file
in a spreadsheet-readable format. The data, when imported into a
spreadsheet, should be in the format shown in Table 8.3, where XXXX
represents logged data.
The Status field should contain a string indicating the condition of
the zone: Normal, Alarm, Bypass, or Tamper.
The log file should be stored in a location relative to the location of
the security application and should be updated only when the status of
a zone changes and closes after every transaction.
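A minimal Python sketch of that logging rule follows. The file name,
the tab-delimited column layout, and the timestamp format are
assumptions, since Table 8.3 is not reproduced here.

    import csv, os, time

    LOG_FILE = "security_log.txt"   # assumed name, relative to the application

    def log_zone_change(zone, status):
        # Append one row per status change; the file is opened and closed
        # around every transaction.
        new_file = not os.path.exists(LOG_FILE)
        with open(LOG_FILE, "a", newline="") as f:
            writer = csv.writer(f, delimiter="\t")
            if new_file:
                writer.writerow(["Time", "Zone", "Status"])
            writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"), zone, status])

    log_zone_change(3, "Alarm")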
Bibliography
Bonal, David: “Mastering the National Instruments Certification Exams,” NIWeek 2005,
National Instruments Corporation, 11500 N. Mopac Expressway, Austin, Tex., 2005.
Brunzie, Ted J.: “Aging Gracefully: Writing Software that Takes Changes in Stride,”
LabVIEW Technical Resource, vol. 3, no. 4, Fall 1995.
Fowler, Gregg: “Interactive Architectures Revisited,” LabVIEW Technical Resource,
vol. 4, no. 2, Spring 1996.
Gruggett, Lynda: “Getting Your Priorities Straight,” LabVIEW Technical Resource, vol. 1,
no. 2, Summer 1993. (Back issues are available from LTR Publishing.)
Johnson, Gary W. (Ed.): LabVIEW Power Programming, McGraw-Hill, New York, 1998.
Ritter, David: LabVIEW GUI, McGraw-Hill, New York, 2001.
Sample Exam, “Certified LabVIEW Developer Examination—Car Wash Controller,”
National Instruments Corporation, 11500 N. Mopac Expressway, Austin, Tex., 2004.
Sample Exam,“Certified LabVIEW Developer Examination—Security System,” National
Instruments Corporation, 11500 N. Mopac Expressway, Austin, Tex., 2004.
Sample Exam, “Certified LabVIEW Developer Examination—Traffic Light Controller,”
National Instruments Corporation, 11500 N. Mopac Expressway, Austin, Tex., 2004.
Chapter 9
Documentation
Most programmers hold documentation in the same regard as root
canal surgery. Meanwhile, users hold documentation dear to their
hearts. The only resolution is for you, the LabVIEW programmer, to
make a concerted effort to get that vital documentation on disk and
perhaps on paper. This chapter describes some of the documentation
tools and formats that have been accepted in commercial LabVIEW
software.
The key to good documentation is to generate it as you go. For
instance, when you finish constructing a new subVI, fill in the Get
Info VI description item, described later. Then when you want to put
together a software maintenance document, you can just copy and paste
that information into your word processor. It’s much easier to explain
the function of a VI or control when you have just finished working on
it. Come back in a month, and you won’t remember any of the details.
If you want to write commercial-grade LabVIEW applications, you
will find that documentation is at least as important as having a well-
designed program. As a benchmark, budget 25 percent of your time on
a given contract for entering information into various parts of VIs and
the production of the final document.
VI Descriptions
The VI description in the Documentation dialog box from the VI
Properties is often a user’s only source of information about a VI. Think
about the way that you find out how someone else’s VIs work, or even
how the ones in the LabVIEW libraries work. Isn’t it nice when there’s
online help, so that you don’t have to dig out the manual? Important
items to include in the description are
■ An overview of the VI’s function, followed by as many details about
the operation of the VI as you can supply
■ Instructions for use
■ Description of inputs and outputs
■ List of global variables and files accessed
■ Author’s name and date
Information can be cut and pasted into and out of this window. If
you need to create a formal document, you can copy the contents of
this description into your document. This is how you manage to deliver
a really nice-looking document with each driver that you write with-
out doing lots of extra work. You just write the description as if you
were writing that formal document. You can also use the techniques
described in the following section on formal documents to extract these
comments automatically.
You can also display the VI description by
■ Selecting the Connector Pane and Description option when using the
Print command from the File menu
■ Showing the Context Help window and placing the wiring tool on a
subVI’s icon on the block diagram
Control Descriptions
Every control and indicator should have a description entered through
the Description and Tip pop-up menu. You can display this informa-
tion by showing the Context Help window and placing the cursor on
the control (Figure 9.1). You can also display it by placing the wiring
tool on the control’s terminal on the block diagram. This is very handy
when you are starting to wire up a very complex VI with many front
panel items. The description should contain the following general infor-
mation, where applicable:
You can programmatically access the online help system through the
Control Online Help function from the Programming >> Dialog and
User Interface >> Help function palette. This function lets you open
and close the online help system, display the main Contents menu, or
search for a specific keyword or HTML file. It’s a way to make a Help
button on a LabVIEW panel do something more than display a dialog
box.
VI History
During development, you can track changes to VIs through the History
Window, which you display by selecting the VI Revision History item
from the Edit menu. In the History Window, you can enter descriptions
of important changes to the VI and then add your comments to the his-
tory log. Revision numbers are kept for each VI, and you can reset the
revision number (which also erases all history entries) by clicking the
Reset button in the History Window.
The Print Window command (from the File menu) does just that:
Whichever LabVIEW window is on top gets sent to the printer, using
the current page setup. Limited customization of the printout is avail-
able through Print Options in the VI Properties menu. In particular,
you can request scale-to-fit, which really helps if the panel is large.
When you print a front panel, note that all items on the panel will
be printed, including those you have scrolled out of the usual viewing
area. To avoid printing those items, you should hide them (choose Hide
Control by popping up on the control’s terminal).
The Print command is quite flexible, depending on the edition of
LabVIEW that you have. You can choose which parts of the VI you wish
to print (panel, diagram, connector pane, description, etc.) through the
feature-laden Print dialog box. Printouts are autoscaled (if desired),
which means that reasonably large panels and diagrams will fit on a
single sheet of paper.
The big decision is whether you care if the objects are bitmaps or
fully editable vector graphic objects. Bitmaps generally take up more
memory and may have fairly low resolution (say, 72 dots per inch),
but are generally resistant to corruption when printed. Vector graphics
(such as PICT, EPS, and EMF) are more versatile because you can edit
most of the objects in the image, but they sometimes fly apart when
displayed or printed in certain applications.
On the Macintosh and Windows, you can select objects from a
LabVIEW panel or diagram, copy them to the clipboard, and then
paste them into any application that displays PICT (Macintosh) or
EMF (Windows) objects, including word processors. If you paste them
into a drawing application, such as Canvas or CorelDraw, the objects
are accurately colored and fully editable. Diagrams look pretty good,
though you may lose some wire textures, such as the zigzag pattern for
strings and the dotted lines for booleans. This is surely the easiest way
to obtain a quality image from LabVIEW.
Screen captures are another quick way to obtain a bitmap image
of anything on your computer’s screen, including LabVIEW panels
and diagrams. Both Macintosh and Windows machines have a built-
in screen-capture capability. On the Macintosh, use command-shift-3,
which saves a bitmap image of the entire screen to a file on the start-up
disk, or command-control-shift-4 to obtain a cursor that you can use to
lasso a portion of the screen and copy it to the clipboard. On Windows,
the Print Screen key copies the complete screen image to the clipboard,
and ALT-Print Screen copies the top window (the one you clicked last)
to the clipboard. Alternatively, you can obtain one of several screen-
capture utilities that can grab a selected area of the screen and save it
to the clipboard or to a file. They’re available as commercial packages
or shareware. The nice thing about many of these utilities is that you
can capture pulled menus and cursors, which is really helpful for user
documentation.
LabVIEW’s Print command can produce a variety of panel and dia-
gram image types, along with all the other documentation text. If you
print to an RTF file, it’s easy to load into word processors. In RTF mode,
you can also ask LabVIEW to save the images external to the main
RTF file (they come out as a set of BMP files). Printing to HTML gives
you a browser-friendly form of documentation. Images are external
since HTML is intrinsically a text-only language, and you can choose
from JPEG, PNG, and GIF for the image files. Keep in mind that you
can use the VI Server features (as we already discussed in the Docu-
ment Directory VI) to automatically generate image files, albeit with
fewer choices in content and file type.
What if you’re creating a really formal document or presentation and
you want the very finest-quality images of your LabVIEW screens?
Here’s Gary’s recipe, using the PostScript printing feature of
LabVIEW. It works on any Macintosh and on Windows systems with
certain printer drivers, such as the one for the HP LaserJet 4m; check
your driver to see if it has an EPS file generation feature. You must
have LabVIEW 4.01 or later and Adobe Illustrator 6.0 or later to edit
the resulting images. Here are the steps:
1. Use LabVIEW’s Print command with a custom setup to print what-
ever is desired. Your life will be simpler if you use the scale to fit
option to get everything on one page. Otherwise, objects will be clipped
or lost since EPS records only one page’s worth of information.
2. In the print dialog, print to a file. Macintosh lets you choose whether
the file is pure PostScript or EPS and whether it has a screen pre-
view. Make the file type EPS with a preview.
3. Open the resulting file with Illustrator 6.0 or later (prior versions
will fail to properly parse the EPS file). The panel and diagram are
fully editable, color PostScript objects. You will need to ungroup and
delete some of the surrounding boxes. Also, many items are masked
and use PostScript patterns. Be careful what you change or delete.
4. Save the file as Illustrator EPS for importing into a page layout pro-
gram. EPS files are absolutely, positively the most robust graphics
files when you’re dealing with complex compositions and you’re nit-
picky about the outcome. Note: If you send the file to a service bureau,
make sure you take along any strange fonts that you might have cho-
sen in LabVIEW, or have Illustrator embed them in the file.
While it takes several steps, the results are gorgeous when printed
at high resolution, such as 1200 dots per inch. This book is an example
of the results.
Document outline
For an instrument driver or similar package, a document might consist
of the following basic elements:
1. Cover page
2. Table of contents
3. Introduction—“About This Package”
4. Programming notes—help the user apply the lower-level function
VIs
5. Using the demonstration VI
6. Detailed description of each lower-level function VI
The first five items are general in nature. Use your best judgment
as to what they should contain. The last item, the detailed description
of a particular VI, is the subject of the rest of this section. For major
VI description
If you have already entered a description of the VI in the Documenta-
tion box with the VI, open it, select all the text, and copy it to the clip-
board to import it into a document in your word processor.
Figure 9.2 The connector pane is an important part of the
document for any VI. Different text styles can indicate
the relative importance of each terminal.
Terminal descriptions
Every input and output needs a description that includes the following
information.
■ The data type
■ What it does
■ Valid range (for inputs)
■ Default value (for inputs)
This may be the same information that you entered in the control’s
description dialog box. A recommended standard for terminal labeling
makes use of text style to indicate how frequently you need to change
each item. When you create the VI, modify the font for each front panel
item’s label as follows: Use boldface for items that you will have to change
often, use the normal font for items that you plan to use less often, and
put square brackets [label] around the labels of items where the default
value usually suffices. This method maps directly to the required con-
nections behavior previously described. When you write the terminal
descriptions into a document, put them in this prioritized order so the
user will see the most-used items first. We like to start each line with a
picture of the appropriate LabVIEW terminal data type, followed by the
terminal’s name, then its description, range, and defaults:
Peak Voltage sets the maximum peak output voltage. Note
that the peak power setting for 2000 V cannot be greater than 50 W.
0: 16 V (Default)
1: 80 V
2: 400 V
3: 2000 V
Try it out, and feel free to add your favorite documentation tidbits.
Programming examples
If you are producing a commercial package, it’s a good idea to include
some examples using your VIs. Show how the VI is used in a loop or
in conjunction with graphs or an analysis operation. Above all, have a
“try me first” example that is guaranteed to work! This will give the
user greater confidence that the rest of the package is similarly func-
tional. Put completed examples on disk where the interested user can
find them and try them out. Make prominent mention of the existence
of these examples in your documentation, perhaps in the quick start
section.
Distributing Documents
In this modern age, more and more documents are distributed in
electronic form. Except for commercially marketed software packages,
the user is expected to download some kind of document for personal
viewing and/or printing. Text files are of course the least-common-
denominator format, but they leave much to be desired: You can’t include
graphics at all. Word processors and page layout programs can create
nice documents, but they are not generally portable (although Microsoft
has done a good job of Mac-PC portability with Word, except for graph-
ics items). Besides text and word processor files, there are some good
formats available nowadays, thanks to the World Wide Web.
Portable Document Format (PDF) was created by Adobe Sys-
tems to provide cross-platform portability of elaborate documents. Adobe
based PDF on PostScript technology, which supports embedded fonts
and graphics. You can buy Adobe Acrobat for almost any computer and
use it to translate documents from word processors and page layout
applications into this universal format. Users can obtain a free copy of
Adobe Acrobat Reader from online sources (Adobe Systems has it on
its Web site, www.adobe.com), or you can include it on your distribution
media. Again, Acrobat Reader is available for many platforms. This is
probably the best way to distribute high-quality documentation.
HyperText Markup Language (HTML) is the native format for
many items on the World Wide Web, and all Web browsers can read
files in this format—including local files on disk. The file itself is just
ASCII text, but it contains a complex coding scheme that allows docu-
ments to contain not only text, but also links to other files, including
graphics and sound. You can use Web authoring software, or you can let
LabVIEW generate portable HTML documentation automatically through the
Print command.
Custom online help is another viable way to distribute documents.
By definition, all LabVIEW users have a way to view such documents.
They can be searched by keyword, and of course you can include graph-
ics. See “Custom Online Help,” or the LabVIEW user’s manual for more
information on creating these files.
Chapter 10
Instrument Driver Basics
black boxes that only the manufacturer can modify. Thus you can start
with an existing driver and adapt it to your needs as you see fit. And
you should adapt it because no off-the-shelf driver is going to match
your application perfectly. Another important asset is the National
Instruments online instrument driver repository at ni.com/idnet. It
contains drivers for hundreds of instruments using a variety of hard-
ware standards such as GPIB, RS-232/422, USB, even ancient VXI, and
CAMAC. Each driver is fully supported by National Instruments. They
are also free. Obtaining drivers online is easy with the Instrument
Driver Finder in LabVIEW 8. From the LabVIEW Help menu select
Help >> Find Instrument Driver…. The Instrument Driver Finder con-
nects you to the instrument driver network at ni.com/idnet and a col-
lection of several thousand instrument drivers. Most of the drivers are
commercial-grade software, meaning that they are thoroughly tested,
fairly robust, documented, and supported. If you can’t find the driver
you need, you might call the manufacturer of the instrument or check
its Web site. Many times the manufacturer will have an in-house devel-
opment version you can use as a starting point. There is no need to
reinvent the wheel!
National Instruments welcomes contributions to the instrument net-
work. If you have written a new driver that others might be interested
in using, consider submitting it. Consultants who are skilled in writing
instrument drivers can join the Alliance Program and become Certi-
fied Instrument Driver Developers. If you have that entrepreneurial
spirit, you could even sell your driver package, providing that there’s
a market. Software Engineering Group (SEG) was probably the first
to do this with its HighwayView package that supports Allen-Bradley
programmable logic controllers (PLCs). It’s been a successful venture.
A few manufacturers of programmable instruments also sell their driv-
ers rather than place them in the library.
Existing instrument drivers are also one of your best resources for
instrument programming examples. Whenever you need to write a new
driver, look at an existing driver that is related to your new project and
see how it was done. Standardization is key to software reuse. A stan-
dard instrument driver model enables engineers to swap instruments
without requiring software changes, resulting in significant savings in
time and money. You should design drivers for similar instruments (for
instance, all oscilloscopes) that have a common interface in the soft-
ware world. This means all the connector panes should be the same.
Rule: Every instrument should be a drop-in replacement for every other
instrument of its genre. This is especially important in the world of
automated test systems requiring interchangeable hardware and soft-
ware components.
Driver Basics
Writing a driver can be trivial or traumatic; it depends on your pro-
gramming experience, the complexity of the instrument, and the app-
roach you take. Start by going to the Web site and reading National
Instruments’ Instrument Driver Guidelines, which will give you an
overview of the preferred way to write a driver. Then spend time with
your instrument and the programming manual. Figure out how every-
thing is supposed to work. Next decide what your objectives are. Do
you just need to read a single data value, or do you need to implement
every command? This has implications with respect to the complexity
of the project.
Communication standards
The kinds of instruments you are most likely to use are “smart,” stand-
alone devices that use one of several communication standards, nota-
bly serial, GPIB, USB, and Ethernet. Serial and GPIB are ancient
by computer standards, but still a reliable and common interface on
stand-alone instruments. USB 2.0 adds a high-speed data bus designed
for multimedia but is perfectly suitable for message-based instruments.
Ethernet-based instruments have to share the network bandwidth
with everything else on the network. This can limit their effective-
ness for deterministic acquisition or high-speed transfer, but Ethernet
is great for placing an instrument in a remote location or for sharing an
instrument among multiple controllers.
common framework and application programming interface (API) for
controlling instruments over all these busses. Instrument drivers prop-
erly written using VISA API calls can transparently change between
serial, GPIB, USB, or Ethernet without modification.
Adding serial ports. If you need more serial ports on your computer,
consider a multiport plug-in board. One important feature to look for
is a 16550-compatible UART. This piece of hardware includes a 16-byte
buffer to prevent data loss (the nefarious overrun error), which other-
wise will surely occur. For any computer with PCI slots, there are sev-
eral sources. Keyspan makes some excellent multiport plug-in boards
for the Macintosh. National Instruments has multiport boards for PCI,
PXI, and PCMCIA busses. You can also use a USB serial port adapter.
Keyspan makes a two-port model that is very popular with LabVIEW
users. For remote systems you might look at National Instruments’
Ethernet Device Server for RS-232 or RS-485. Serial ports on the
Ethernet Device Server look like and act as locally attached serial
ports configured with NI VISA.
[Figure: the SCPI instrument model. On the measurement side, Signal Routing, INPut, SENSe, CALCulate, and FORMat blocks feed the data bus; on the signal generation side, FORMat, CALCulate, SOURce, and OUTput blocks drive the output routing. Example compound command: SENSE:VOLTAGE:RANGE:AUTO.]
■ Basic communications. You need the ability to write to and read from
the instrument, with some form of error handling.
■ Sending commands. You need the ability to tell the instrument to
perform a function or change a setting.
■ Transferring data. If the instrument is a measurement device such as
a voltmeter or an oscilloscope, you probably need to fetch data, scale
it, and present it in some useful fashion. If the instrument generates
The basic question to ask is, Will the user do most of the setup manu-
ally through the instrument’s front panel, or will LabVIEW have to do
most of the work? If you’re using LabVIEW on the test bench next to an
oscilloscope, manually adjust the scope until the waveform looks OK,
and then just have LabVIEW grab the data for later analysis. No soft-
ware control functions are required in this case, and all you really want
is a Read Data VI. If the application is in the area of automated test
equipment (ATE), instrument setups have to be controlled to guaran-
tee that they are identical from test to test. In that case, configuration
management will be very important and your driver will have access
to many control functions.
Think about the intended application, look through the programming
manual again, and pick out the important functions. Make a checklist
and get ready to talk to your instrument.
Establish Communications
After you study the problem, the next step is to establish communica-
tions with the instrument. You may need to install communications
hardware (such as a GPIB interface board); then you need to assemble
the proper cables and try a few simple test commands to verify that
the instrument “hears” you. If you are setting up your computer or
interface hardware for the first time, consider borrowing an instru-
ment with an available LabVIEW driver to try out. It’s nice to know
that there are no problems in your development system.
LabVIEW’s Express VI, Instrument I/O Assistant, is an interac-
tive general-purpose instrument controller that you can use to test com-
mands one at a time. Figure 10.2 shows the “*IDN?” query and response
in the I/O Assistant. It’s easy to type in a command and see how the
instrument responds. The parsing window lets you sort out the reply
and assign data types to data transmitted from your instrument. You
can even group bytes together into multibyte data types. In Figure 10.2
Figure 10.2 The Instrument I/O Assistant is an interactive tool for establishing communication with
instruments. The instrument in this example was built with LabVIEW Embedded for the ADI Blackfin
DSP. LabVIEW not only can control instruments, but also can be the instrument firmware.
[Figure 10.3 shows DB-9 serial pin assignments for IBM compatibles: 2 XMIT, 3 RCV, 4 DTR, 5 GND, 6 DSR, 7 RTS, 8 CTS, 9 RI. If you need a DB-25 connection, use a DB-9 to DB-25 adapter.]
Figure 10.3 For PCs using a DB-9 connector, you can get cables and
connectors like this at your local electronics emporium. DB-9 to DB-25
adapters are also handy.
up than the other types. If you use a lot of serial devices, a breakout box
will quickly pay for itself.
Figure 10.3 shows serial port connections for the very common PC
version. Cables are available with the required DB-9 on one end and a
DB-9 or DB-25 on the free end to mate with most instruments. If you
install a board with extra serial ports or with special interfaces (such
as RS-422 or RS-485), special connectors may be required.
If you must extend the serial cable beyond a few feet, it is important
to use the right kind of cable. Transmit and receive lines should reside
in separate twisted, shielded pairs to prevent crosstalk (Belden 8723
or equivalent). If you don’t use the correct type of cable, the capacitive
coupling will yield puzzling results: Every character you transmit will
appear in the receive buffer.
Your instrument’s manual has to supply information about connector
pin-outs and the requirements for connections to hardware handshaking
lines (such as CTS and RTS). The usual wiring error is to swap transmit
and receive lines on RS-232. The RS-422 adds to the problem by permit-
ting you to swap the positive and negative lines as well. One trick you
can use to sort out the wires is to test each pin with a voltmeter.
The negative lead of the meter should go to the ground pin. Transmit
lines will be driven to a nice, solid voltage (such as +3 V), while receive
lines will be 0 V, or floating randomly. You can also use an oscilloscope
to look for bursts of data if you can force the instrument to go into Talk
mode. A null modem cable or adapter usually does the trick for RS-
232 signal mix-ups. It effectively swaps the transmit and receive lines
and provides jumpers for CTS and RTS. Be sure to have one handy,
along with some gender changers (male-to-male and female-to-female
connectors), when you start plugging together serial equipment.
GPIB. You need some kind of IEEE 488 interface in your computer.
Plug-in boards from National Instruments are the logical choice since
they are all supported by LabVIEW and they offer high performance.
External interface boxes (SCSI, RS-232, USB, or Ethernet to GPIB)
also use the same driver. Cabling is generally easy with GPIB because
the connectors are all standard. Just make sure that you don’t vio-
late the 20-m maximum length; otherwise, you may see unexplained
errors and/or outright communication failures. Incredibly, people have
trouble hooking up GPIB instruments. Their main problem is failure
to make sure that the connector is pushed all the way in at both ends.
Always start your testing with just one instrument on the bus to avoid
unexpected addressing clashes.
ASCII messages or raw binary. You just need to know your instrument
(Read the Manual).
Be sure to set the basic serial parameters: speed (baud rate), bits per
character, stop bits, parity, and XON/XOFF action. LabVIEW’s VISA
Configure Serial Port function sets these parameters and more. If
you use a terminal emulator, you will have to do the same thing in
that program. Check these parameters very carefully. Next to swapped
transmit and receive lines, setting one of these items improperly is the
easiest way to fail to establish communication.
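If you ever need to check the same parameters from a text language
rather than a terminal emulator, the PyVISA package (not part of
LabVIEW) exposes them as attributes of a serial resource. This sketch
assumes a PC COM port and an instrument that talks at 9600 baud:

    import pyvisa
    from pyvisa.constants import Parity, StopBits

    rm = pyvisa.ResourceManager()
    inst = rm.open_resource("ASRL1::INSTR")   # assumed port; try rm.list_resources()
    inst.baud_rate = 9600                     # speed
    inst.data_bits = 8                        # bits per character
    inst.stop_bits = StopBits.one
    inst.parity = Parity.none
    inst.read_termination = "\r\n"            # assumed terminator; check the manual
    print(inst.query("*IDN?"))                # most modern instruments answer this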
Instruments that use those nasty binary protocols with handshaking
are nearly impossible to test by hand, but the I/O Assistant gives you at
least a fighting chance. For these instruments, try to muddle through
at least one command just to find out if the link is properly connected;
then plunge right in and write the LabVIEW code to handle the proto-
col. Hopefully, you will be 90 percent successful on the first try because
debugging is difficult.
As with GPIB, you can use a serial line analyzer (made by HP, Tek-
tronix, and National Instruments) to eavesdrop on the communications
process. The analyzer stores lots of characters and decodes them to
whatever format you choose (ASCII, hexadecimal, octal, etc.). This really
helps when your driver almost works but still has reliability problems.
Bibliography
SCPI Syntax & Style (1999 ed.), SCPI Consortium, 2515 Camino del Rio South, Suite 340,
San Diego, Calif., 1999.
Chapter 11
Instrument Driver Development Techniques
To produce instrument drivers that truly work well together, we need to use the
same templates and follow formal guidelines. After all, you must have
standards, recommended practices, and quality assurance mechanisms
in place if you expect to produce reliable, professional-grade software.
Figure 11.3 The Instrument Driver Project Wizard has templates for common
instrument types.
But don’t rely totally on the wizard; it’s a great framework to start
with, but properly designed instrument drivers need to conform to
the guidelines found at the National Instruments (NI) Web site,
www.ni.com/idnet. The two most important documents that you must read
before proceeding with driver programming are as follows:
To help you get started in the right direction, we’ve incorporated NI’s
recommendations and provided a few examples using the Instrument
Driver Project Wizard that represent good driver design techniques.
Even if you’re writing a driver only for yourself, it’s worth following
these guidelines because they will help you structure your drivers and,
in the end, become a better programmer.
On the diagram, the error in boolean is tested, and if it’s True, you
may run a special error response program or simply pass the error along
to the error out indicator and do nothing else. If no incoming error
is detected, then it's up to your program to run and to generate an error
cluster that reports its results.
Figure 11.4 The Canonical error I/O VI. You can encapsulate any program inside the False case.
Figure 11.5 An example of error in/error out, the AOIP Instrumentation OM22 microohmme-
ter, a GPIB instrument. Note how clear and simple this application is, with no need for flow
control structures. Error I/O is an integral part of the underlying VISA driver.
drivers, the General Error Handler VI that comes with LabVIEW will
be able to interpret them. (Codes from 5000 to 9999 are reserved for
miscellaneous user-defined errors; they are not automatically inter-
preted by the error handlers.)
The General Error Handler utility VI (Figure 11.6) accepts the
standard error cluster or the individual items from an error cluster and
decodes the code numbers into a message that it can display in an error
dialog and in an output string. A useful feature of this handler is its
[Figure 11.6 connector: inputs include error in (no error), [error code] (0), [error source] (" "), type of dialog (OK msg:1), [user-defined codes], [user-defined descriptions], [exception action] (none:0), [exception code], and [exception source]; outputs include error?, code out, source out, message, and error out.]
Figure 11.6 The General Error Handler utility VI is all you need to decode and display errors.
The arrows indicate how the VI can decode user-defined error codes and messages that you
supply, in addition to the predefined error codes.
Figure 11.7 The Error Query VI reads the instrument's error buffer. Run this VI after each command or set of
commands to get fresh error messages.
ability to decode user-defined error codes. You supply two arrays: user-
defined codes and user-defined descriptions. The arrays have a one-to-
one correspondence between code numbers and messages. With this
handler, you can automatically decode both predefined errors (such as
GPIB and VXI) and your own special cases. We would suggest using the
codes in Table 11.1 because they are reserved for instrument-specific
errors.
Error I/O simplifies diagrams by eliminating the need for many flow
control structures. By all means, use it in all your drivers and other
projects.
If you are programming a SCPI-compliant instrument, you can read
errors from the standardized SCPI error queue. A driver support VI,
Error Query, polls the instrument and returns a string containing
any error messages via the regular error out cluster. (See Figure 11.7.)
Add this VI after every command or series of commands. Events are
stored in a queue—oldest events are read out first—so you need to read
events out after every command to keep from filling the queue with old,
uninteresting events. This technique will add additional communica-
tions overhead in the event of an error, but it will catch every error as
it happens.
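The polling itself is simple. Outside LabVIEW it might look like the
PyVISA sketch below; the instrument address is an assumption, and
SYST:ERR? is the standard SCPI error-queue query.

    import pyvisa

    rm = pyvisa.ResourceManager()
    inst = rm.open_resource("GPIB0::5::INSTR")    # assumed address

    def drain_error_queue(inst):
        # Read the SCPI error queue until it reports 0, "No error".
        messages = []
        while True:
            reply = inst.query("SYST:ERR?")       # e.g. '-113,"Undefined header"'
            code = int(reply.split(",", 1)[0])
            if code == 0:
                break
            messages.append(reply.strip())
        return messages

    inst.write(":TRIG:MAIN:TYP EDGE")   # any command; then check for errors
    print(drain_error_queue(inst))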
Guess what happens when you find a mistake that is common to all
352 VIs? You get to edit every single one of them.
Having no modularity is the other extreme. You may find it in trivial
drivers in which there are only a couple of commands and hence just
one VI. You may also find a lack of modularity in complex drivers writ-
ten by a novice, in which case it’s a disaster. There is nothing quite like
a diagram with sequences inside of sequences inside of sequences that
won’t even fit on a 19-in monitor. In fact, these aren’t really drivers at
all. They are actually dedicated applications that happen to implement
the commands of a particular instrument. This kind of driver is of little
use to anyone else.
Project organization
The correct way to modularize a driver in LabVIEW is to build VIs that
group the various commands by function. In the LabVIEW Plug-and-
Play Instrument Driver model, VIs are organized into six categories:
Initialize, Configuration, Action/Status, Data, Utility, and Close. (See
Figure 11.8.) These intuitive groupings make an instrument easier to
learn and program. The wizard automatically creates stub VIs for you
Figure 11.9 Default VI Tree for oscilloscopes. Common commands, grouped by function, are easy to find and use.
You create Action/Status VIs as needed.
that implement the basic commands for each instrument. The project
is ordered and grouped according to function, and a VI Tree is created
to make the interrelationship easy to see. The default VI Tree for oscil-
loscopes is shown in Figure 11.9. Once you’ve created the project by
using the wizard, you will need to modify the specific commands inside
each VI to match your instrument. Any additional functions should
be created as new VIs in the project hierarchy and placed on the VI
Tree.VI. The one thing you never want to do is to change the connector
pane or the functionality of the default VIs! Doing so will break their
plug-and-play nature. The templates are designed to expose the com-
mon functionality of each class of instrument in a way that will make
the instruments and the instrument drivers interchangeable in auto-
mated test systems. If you’ve developed a test system before, you can
appreciate the value of replacing hardware without having to rewrite
software. Because the instrument drivers are plug-and-play with the
same connector pane and the same functionality, an application can be
written using VI Server to dynamically call instrument driver plug-ins
that initialize, configure, and take data. Which plug-in VI Server calls
depends on the hardware attached to the computer. Of course, this all
works seamlessly only if each of the VIs has the same connector pane
and generic functionality.
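As a rough analogy (not the VI Server mechanism itself), the following Python sketch shows the same idea in text form: every plug-in driver exposes the identical interface, so the application can pick one at run time based on the attached hardware. The class and instrument names are invented for illustration.

class ScopeDriver:
    def initialize(self, resource): raise NotImplementedError
    def configure(self, **settings): raise NotImplementedError
    def read_waveform(self): raise NotImplementedError
    def close(self): raise NotImplementedError

class Tds200Driver(ScopeDriver):
    def initialize(self, resource):  print("open", resource)
    def configure(self, **settings): print("configure", settings)
    def read_waveform(self):         return [0.0, 0.1, 0.2]   # placeholder data
    def close(self):                 print("close")

DRIVERS = {"TEKTRONIX,TDS 210": Tds200Driver}    # keyed by the *IDN? reply

def run_test(idn_reply, resource):
    driver = DRIVERS[idn_reply]()                # pick the plug-in for the attached scope
    driver.initialize(resource)
    driver.configure(trigger="edge", slope="rising")
    data = driver.read_waveform()
    driver.close()
    return data

print(run_test("TEKTRONIX,TDS 210", "GPIB0::7::INSTR"))

Swapping hardware then means registering another driver class; run_test never changes, which is the payoff the identical connector panes buy you in LabVIEW.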
Initialization
The cleanest way to begin any conversation with an instrument is to
create an initialization VI that starts out by calling VISA Open to
establish a connection. The initialization VI should take care of any
special VISA settings, such as serial port speed or time-outs, by call-
ing a VISA Property node, and it should also perform any instru-
ment initialization operations, such as resetting to power up defaults
or clearing memory. Instruments that are SCPI-compliant can be que-
ried with *IDN?, which will return an identification string that you can
check to verify that you are indeed talking to the right instrument.
Figure 11.10 Initialize.VI opens the VISA connection and configures port speeds.
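For readers who think in text, here is a hedged PyVISA sketch of the same initialization steps (open the session, set the port properties, reset, and check *IDN?). The serial address, baud rate, and expected manufacturer string are assumptions.

import pyvisa

rm = pyvisa.ResourceManager()
inst = rm.open_resource("ASRL1::INSTR")    # a serial instrument; address is assumed
inst.timeout = 10000                       # ms, like the VISA timeout property
inst.baud_rate = 9600                      # serial-specific VISA properties
inst.read_termination = "\n"
inst.write_termination = "\n"

inst.write("*RST")                         # reset to power-up defaults
idn = inst.query("*IDN?")                  # SCPI identification query
if "TEKTRONIX" not in idn.upper():         # the expected maker is an assumption
    raise IOError("Unexpected instrument: " + idn)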
Configuration
Configuration VIs are the set of software routines that get the instru-
ment ready to take data. They are not a set of VIs to store and save
instrument configuration—those go under the utility section. A typical
configuration VI, Configure Edge Trigger.VI, is shown in Figure 11.11.
[Figure 11.11: Configure Edge Trigger.VI builds and sends command strings such as :TRIG:MAIN:TYP EDGE;, :TRIG:MAIN:EDGE:SOU CH%d; for the source, and :TRIG:MAIN:EDGE:SLO RIS; or :TRIG:MAIN:EDGE:SLO FALL; for the slope.]
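In a text language, the equivalent of Configure Edge Trigger.VI is little more than formatting and sending the command strings from Figure 11.11. A minimal Python sketch follows; the send function stands in for a VISA Write.

def configure_edge_trigger(send, source_channel=1, rising=True):
    send(":TRIG:MAIN:TYP EDGE;")
    send(":TRIG:MAIN:EDGE:SOU CH%d;" % source_channel)
    send(":TRIG:MAIN:EDGE:SLO RIS;" if rising else ":TRIG:MAIN:EDGE:SLO FALL;")

configure_edge_trigger(print, source_channel=2, rising=False)   # demo with print()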
Data
Data VIs should transfer data to or from the instrument with a mini-
mum of user involvement. The VI in Figure 11.12 queries the scope
for scaling information (t0, dt, Y scale, and Y offset) and then builds a
waveform with the data. The VI is clean and simple: the connector pane carries just the VISA
session, error I/O, and the waveform data. In Figure 11.12 the data is returned as a
comma-separated ASCII string.
[Figure 11.12: the VI queries WFMPRE:XZE?;XIN?;YZE?;YMU?;YOFF? for t0, dt, Y scale, and Y offset, reads the waveform with CURV?, and converts the comma-separated ASCII string to floating point.]
Sometimes transferring data as ASCII is
the easiest, and most instruments support it. But if you have a choice,
binary protocols are faster and more compact.
The VI in Figure 11.13 is the same as that in Figure 11.12 except the
data has been returned as 16-bit signed binary. You can easily change
from a binary string to an array of I16 with the type cast function.
Additional formatting may be required if your binary waveform has
a length header or termination characters. Consult your instrument’s
manual for the proper handling method. LabVIEW provides all the
tools you need to reconstruct your data, including functions to swap
bytes and words. When you are transferring binary data, be sure your
VISA Read function does not terminate on an end-of-line character.
Use a VISA Property node to disable this before the read and reenable
after the read; otherwise, you may not get all your data.
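A hedged sketch of the binary path in Python with NumPy: cast the raw bytes to signed 16-bit integers (the Type Cast step) and apply the preamble scaling. Byte order and the exact scaling arithmetic vary by instrument, so treat these details as assumptions.

import numpy as np

def scale_waveform(raw_bytes, t0, dt, y_mult, y_off):
    counts = np.frombuffer(raw_bytes, dtype=">i2")         # big-endian signed I16
    volts = (counts.astype(np.float64) - y_off) * y_mult   # scaling form varies by instrument
    times = t0 + dt * np.arange(volts.size)
    return times, volts

# Two samples: big-endian I16 values 100 and -100
t, y = scale_waveform(b"\x00\x64\xff\x9c", t0=0.0, dt=1e-6, y_mult=0.01, y_off=0.0)
print(t, y)                                                # [0.e+00 1.e-06] [ 1. -1.]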
Utility
Sometimes a command doesn't fit with anything else and is normally used by itself; examples
are utility VIs for instrument reset, self-test, calibration, and storing or recalling
configurations. It would be illogical to throw those VIs in with other unrelated functions.
A good driver package allows you to save and recall collections of
control settings, which are usually called instrument setups. This saves
much time, considering how many controls are present on the front
panels of some instruments. Sophisticated instruments usually have
built-in, nonvolatile storage for several settings, and your driver should
have a feature that accesses those setup memories. If the instrument
has no such memory, then write a program to upload and save setups
on disk for cataloging and later downloading. You may want to provide
this local storage even if the instrument has setup memory: What if the
memory fails or is accidentally overwritten?
Historical note: Digital storage oscilloscopes (DSOs) gradually became so
complex and feature-laden that it was considered a badge of honor for
a technician to obtain a usable trace in less than 5 minutes—all those
buttons and lights and menus and submenus were really boggling! The
solution was to provide an auto-setup button that got something visible on
the display. Manufacturers also added some means by which to store and
recall setups. All the good DSOs have these features, nowadays.
Some instruments have internal setup memory, while others let you
read and write long strings of setup commands with ease. Still others
offer no help whatsoever, making this task really difficult. If it won’t
send you one long string of commands, you may be stuck with the task
of requesting each individual control setting, saving the settings, and
then building valid commands to send the setup back again. This can
be pretty involved; if you’ve done it once, you don’t want to do it again.
Instead, you probably will want to use the instrument’s built-in setup
storage. If it doesn’t even have that, write a letter to the manufacturer
and tell the company what you think of its interface software.
Close
The Close VI is at the end of the error chain. Remember that an impor-
tant characteristic of a robust driver is that it handles error conditions
well. You need to make a final check with your instrument and clear
out any errors that may have occurred. Error I/O is built into the VISA
functions. (See Figure 11.14.) If there is an incoming error, the function
will do nothing and pass the error on unmodified. The VISA Close func-
tion is the exception. VISA Close will always close the session before
passing any error out.
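Sketched in Python, the Close step is a final error check wrapped around a close that always runs, mirroring the way VISA Close passes errors through but releases the session regardless. The drain_error_queue helper is the one sketched under Error Query above.

def close_instrument(inst):
    try:
        leftovers = drain_error_queue(inst)      # final error check
        if leftovers:
            print("Errors pending at close:", leftovers)
    finally:
        inst.close()                             # runs even if the check failed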
Documentation
One measure of quality in instrument driver software is the documen-
tation. Drivers tend to have many obscure functions and special appli-
cation requirements, all of which need some explanation. A driver that
implements more than a few commands may even need a function
index so that the user can find out which VI is needed to perform a
desired operation. And establishing communications may not be triv-
ial, especially for serial instruments. Good documentation is the key
to happy users. Review Chapter 9, “Documentation.” It describes most
of the techniques and recommended practices that you should try
to use.
When you start writing the first subVI for a driver, type in control
and indicator descriptions through the Description and Tip pop-up
menus. This information can be displayed by showing the Help window
and is the easiest way for a user to learn about the function of each
control. If you enter this information right at the start, when you’re
writing the VI, you probably know everything there is to know about
the function, so the description will be really easy to write. You can
copy text right out of the programming manual, if it’s appropriate. It
pays to document as you go.
The Documentation item in the VI Properties dialog box from the
File menu is often a user’s only source of information about a VI. The
information you enter here shows up, along with the icon, in the Con-
text Help window. Try to explain the purpose of the VI, how it works,
and how it should be used. This text can also be copied and pasted into
the final document.
Try to include some kind of document on disk with your driver. The
issue of platform portability of documents was mentioned in Chapter 9,
and with drivers, it’s almost a certainty that users of other systems
will want to read your documents. As a minimum, paste all vital infor-
mation into the Get Info box or a string control on a top-level example
VI that is easy to find. Alternatively, create a simple text document. If
you’re energetic and you want illustrations, generate a PDF (Portable
Document Format) document by using Adobe Acrobat, and make sure
it stays with your driver VIs; or create a LabVIEW Help document and
create hyperlinks to appropriate sections in the VI Properties dialog
box. The links will show up in the Context Help window.
If you’re just doing an in-house driver, think about your coworkers
who will one day need to use or modify your driver when you are not
available. Also, remember that the stranger who looks at your pro-
gram 6 months from now may well be you. Include such things as how
to connect the instrument, how to get it to “talk,” problems you have
Bibliography
Developing LabVIEW Plug and Play Instrument Drivers, www.ni.com/idnet, National
Instruments Corporation, 11500 N. Mopac Expressway, Austin, Tex., 2006.
Instrument Communication Handbook, IOTech, Inc., 25971 Cannon Road, Cleveland,
Ohio, 1991.
Instrument Driver Guidelines, www.ni.com/idnet, National Instruments Corporation,
11500 N. Mopac Expressway, Austin, Tex., 2004.
Chapter 12
Inputs and Outputs
Origins of Signals
Data acquisition deals with the elements shown in Figure 12.1.
The physical phenomenon may be electrical, optical, mechanical, or
something else that you need to measure. The sensor changes that
phenomenon into a signal that is easier to transmit, record, and
analyze—usually a voltage or current. Signal conditioning amplifies
and filters the raw signal to prepare it for analog-to-digital conversion
(ADC), which transforms the signal into a digital pattern suitable for
use by your computer.
Figure 12.2 A completely general sensor model: the physical phenomenon drives a primary
detector, a conversion element, a signal conditioner, computations, signal transmission, and
finally display. Many times, your sensor is just a transducer that ends after the conversion
element.
Example A is a temperature transmitter using a thermocouple, with cold-junction compensation, linearization, and
analog output.
Example B is a pressure sensor using a linear variable differential transformer (LVDT) to detect diaphragm displace-
ment, with analog output.
Example C is a magnetic field measurement using an optical technique with direct signal transmission to a computer.
This represents a sophisticated state-of-the-art sensor system (3M Specialty Optical Fibers).
When you start to set up your system, try to pick sensors and design
the data acquisition system in tandem. They are highly interdependent.
Actuators
An actuator, which is the opposite of a sensor, converts a signal (per-
haps created by your LabVIEW program) into a physical phenomenon.
Examples include electrically actuated valves, heating elements, power
supplies, and motion control devices such as servomotors. Actuators
are required any time you wish to control something such as tempera-
ture, pressure, or position. It turns out that we spend most of our time
measuring things (the data acquisition phase) rather than controlling
them, at least in the world of research. But control does come up from
time to time, and you need to know how to use those analog and digital
outputs so conveniently available on your interface boards.
Almost invariably, you will see actuators associated with feedback
control loops. The reason is simple. Most actuators produce responses
in the physical system that are more than just a little bit nonlinear and
are sometimes unpredictable. For example, a valve with an electropneu-
matic actuator is often used to control fluid flow. The problem is that the
flow varies in some nonlinear way with respect to the valve’s position.
Categories of signals
You measure a signal because it contains some type of useful infor-
mation. Therefore, the first questions you should ask are, What infor-
mation does the signal contain, and how is it conveyed? Generally,
information is conveyed by a signal through one or more of the follow-
ing signal parameters: state, rate, level, shape, or frequency content.
These parameters determine what kind of I/O interface equipment and
analysis techniques you will need.
Any signal can generally be classified as analog or digital. A digital,
or binary, signal has only two possible discrete levels of interest—an
active level and an inactive level. They’re typically found in computer
logic circuits and in switching devices. An analog signal, on the other
hand, contains information in the continuous variation of the signal
with respect to time. In general, you can categorize digital signals as
either on/off signals, in which the state (on or off ) is most important,
or pulse train signals, which contain a time series of pulses. On/off sig-
nals are easily acquired with a digital input port, perhaps with some
signal conditioning to match the signal level to that of the port. Pulse
trains are often applied to digital counters to measure frequency,
period, pulse width, or duty cycle. It’s important to keep in mind that
digital signals are just special cases of analog signals, which leads to
an important idea:
Tip: You can use analog techniques to measure and generate digital
signals. This is useful when (1) you don’t have any digital I/O hardware
handy, (2) you need to accurately correlate digital signals and analog
signals, or (3) you need to generate a continuous but changing pattern
of bits.
Among analog signal types are the dc signal and the ac signal. Ana-
log dc signals are static or vary slowly with time. The most important
characteristic of the dc signal is that information of interest is con-
veyed in the level, or amplitude, of the signal at a given instant. When
measuring a dc signal, you need an instrument that can detect the
level of the signal. The timing of the measurement is not difficult as
long as the signal varies slowly. Therefore, the fundamental operation
of the dc instrument is an ADC, which converts the analog electric sig-
nal into a digital number that the computer interprets. Common exam-
ples of dc signals include temperature, pressure, battery voltage, strain
gauge outputs, flow rate, and level measurements. In each case, the
instrument monitors the signal and returns a single value indicating
the magnitude of the signal at a given time. Therefore, dc instruments
often report the information through devices such as meters, gauges,
strip charts, and numerical readouts.
Tip: When you analyze your system, map each sensor to an appropri-
ate LabVIEW indicator type.
Analog ac time domain signals are distinguished by the fact that
they convey useful information not only in the level of the signal, but
also in how this level varies with time. When measuring a time domain
signal, often referred to as a waveform, you are interested in some
characteristics of the shape of the waveform, such as slope, locations
and shapes of peaks, and so on. You may also be interested in its fre-
quency content.
To measure the shape of a time domain signal with a digital com-
puter, you must take a precisely timed sequence of individual amplitude
measurements, or samples. These measurements must be taken closely
enough together to adequately reproduce those characteristics of the
waveform shape that you want to measure. Also, the series of measure-
ments should start and stop at the proper times to guarantee that the
useful part of the waveform is acquired. Therefore, the instrument used
to measure time domain signals consists of an ADC, a sample clock, and
a trigger. A sample clock accurately times the occurrence of each ADC.
Figure 12.3 illustrates the timing relationship among an analog
waveform, a sampling clock, and a trigger pulse. To ensure that the
desired portion of the waveform is acquired, you can use a trigger
to start and/or stop the waveform measurement at the proper time
[Figure 12.3: an analog waveform, the samples taken from it, and the sampling clock, plotted against time.]
Figure 12.4 These plots illustrate the striking differences between a joint time-frequency
analysis plot of a chirp signal analyzed with the short-time FFT spectrogram and the same
chirp signal analyzed with the Gabor spectrogram.
Figure 12.5 Five views of a signal. A series of pulses can be classified in several
different ways depending on the significance of the signal’s time, amplitude,
and frequency characteristics.
Figure 12.6 This road map can help you organize your information about each signal to logically
design your LabVIEW system.
one type of information. In fact, the digital on/off, pulse train, and dc
signals are just simpler cases of the analog time domain signals that
allow simpler measuring techniques.
The preceding example demonstrates how one signal can belong to
many classes. The same signal can be measured with different types of
instruments, ranging from a simple digital state detector to a complex
frequency analysis instrument. This greatly affects how you choose sig-
nal conditioning equipment.
For most signals, you can follow the logical road map in Figure 12.6
to determine the signal’s classification, typical interface hardware, pro-
cessing requirements, and display techniques. As we’ll see in coming
sections, you need to characterize each signal to properly determine
the kind of I/O hardware you’ll need. Next, think about the signal attri-
butes you need to measure or generate with the help of numerical meth-
ods. Then there’s the user-interface issue—the part where LabVIEW
controls and indicators come into play. Finally, you may have data stor-
age requirements. Each of these items is directly affected by the signal
characteristics.
Connections
Professor John Frisbee is hard at work in his lab, trying to make a
pressure measurement with his brand-new computer:
Let’s see here. . . . This pressure transducer says it has a 0- to 10-V dc out-
put, positive on the red wire, minus on the black. The manual for my data
acquisition board says it can handle 0 to 10 V dc. That’s no problem. Just
hook the input up to channel 1 on terminals 5 and 6. Twist a couple of
wires together, tighten the screws, run the data acquisition demonstra-
tion program, and voila! But what’s this? My signal looks like . . . junk! My
voltmeter says the input is dc, 1.23 V, and LabVIEW seems to be working
OK, but the display is really noisy. What’s going on here?
Figure 12.7 Taking the system approach to grounding in a laboratory. Note the
use of a signal common (in the form of heavy copper braid or cable) to tie every-
thing together. All items are connected to this signal common.
Here are some basic practices you need to follow to block the effects
of these electromagnetic noise sources:
■ Put sensitive, high-impedance circuitry and connections inside a
metallic shield that is connected to the common-mode voltage (usu-
ally the low side, or common) of the signal source. This will block
capacitive coupling to the circuit (principle 1), as well as the entry of
any stray electric fields (principle 2).
■ Avoid closed, conductive loops—intentional or unintentional—often
known as ground loops. Such loops act as pickups for stray magnetic
fields (principle 3). If a high current is induced in, for instance, the
shield on a piece of coaxial cable, then the resulting voltage drop
along the shield will appear in series with the measured voltage.
■ Avoid placing sensitive circuits near sources of intense magnetic
fields, such as transformers, motors, and power supplies. This will
reduce the likelihood of magnetic pickup that you would otherwise
have trouble shielding against (principle 4).
Figure 12.8 A single-ended amplifier (Vout = G × Vin) has no intrinsic noise rejection
properties. You need to carefully shield signal cables and make sure that the signal common
is noise-free as well.
[Figure caption fragment: Acceptable use of single-ended inputs; the signal cable could also be coax for volt-level signals. The diagram shows a floating signal source (such as battery-powered equipment) wired through a shield to an instrumentation amplifier and ADC on the data acquisition hardware.]
and electric field pickup. Twisted, shielded pairs are best, but coax-
ial cable will do in situations where you are careful not to create the
dreaded ground loop that appears in the bottom segment of the figure.
SNR = 20 log (Vsig / Vnoise)
where Vsig is the signal amplitude and Vnoise is the noise amplitude,
both measured in volts rms. The 20 log() operation converts the simple
ratio to decibels (dB), a ratiometric system used in electrical engi-
neering, signal processing, and other fields. Decibels are convenient
units for gain and loss computations. For instance, an SNR or gain of
20 dB is the same as a ratio of 10 to 1; 40 dB is 100 to 1; and so forth.
Note that the zero noise condition results in an infinite SNR. May you
one day achieve this.
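A quick numerical check of the decibel conversion:

from math import log10

def snr_db(v_sig, v_noise):
    return 20 * log10(v_sig / v_noise)

print(snr_db(1.0, 0.1))    # 10:1 ratio -> 20.0 dB
print(snr_db(1.0, 0.01))   # 100:1 ratio -> 40.0 dB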
*Manufacturers say you don’t need a ground reference, but experience in the field says
it ain’t so. George Wells of JPL reports that Analog Devices 5B modules sometimes cou-
ple large common-mode voltages to adjacent modules. This may be so because of internal
transformer orientation. Other brands may have their own problems. Your best bet is to
provide a proper ground to all signals.
[Figure: an instrumentation amplifier followed by a low-pass filter, referenced to signal common, driving the ADC.]
computer (and maybe all the way to the mouse or keyboard!). A robust
amplifier package can easily be protected against severe overloads
through the use of transient absorption components (such as varistors,
zener diodes, and spark gaps), limiting resistors, and fuses. Medical
systems are covered by federal and international regulations regard-
ing isolation from stray currents. Never connect electronic instruments
to a live subject (human or otherwise) without a properly certified isola-
tion amplifier and correct grounding.
Special transducers may require excitation. Examples are resis-
tance temperature detectors (RTDs), thermistors, potentiometers, and
strain gauges, as depicted in Figure 12.11. All these devices produce an
output that is proportional to an excitation voltage or current as well as
the physical phenomenon they are intended to measure. Thus, the exci-
tation source can be a source of noise and drift and must be carefully
designed. Modular signal conditioners, such as the Analog Devices 5B
series, and SCXI hardware from National Instruments include high-
quality voltage or current references for this purpose.
Some signal conditioning equipment may also have built-in multi-
plexing, which is an array of switching elements (relays or solid-state
analog switches) that route many input signals to one common output.
For instance, using SCXI equipment, you can have hundreds of analog
inputs connected to one multiplexer at a location near the experiment.
Then only one cable needs to be run back to a plug-in board in your
computer. This drastically reduces the number and length of cables.
Most plug-in boards include multiplexers.
Multiplexing can cause some interesting problems when not prop-
erly applied, as we mentioned in regard to the amplifier settling time.
Compare the two signal configurations in Figure 12.12. In the top con-
figuration, there is an amplifier and filter per channel, followed by the multiplexer and the ADC.
Figure 12.12 The preferred amplifier-and-filter-per-channel configuration (top); an
undesirable, but cheaper, topology that shares one amplifier and filter among the multiplexed
channels (bottom). Watch your step if your system looks like the latter.
Figure 12.13 Floating source connections: (A) This is the simplest and most
economical; (B) this is similar but may have noise rejection advantages;
and (C) this is required for ac-coupled sources.
a small bias current from its inputs back into the signal connections.
If there is no path to ground, as is the case for a true floating source,
then the input voltage at one or both inputs will float to the amplifier’s
power supply rail voltage. The result is erratic operation because the
amplifier is frequently saturated—operating out of its linear range.
This is a sneaky problem! Sometimes the source is floating and you
don’t even know it. Perhaps the system will function normally for a few
minutes after power is turned on, and then it will misbehave later. Or,
touching the leads together or touching them with your fingers may
discharge the circuit, leading you to believe that there is an intermit-
tent connection. Yikes!
Rule: Use an ohmmeter to verify the presence of a resistive path to
ground on all signal sources.
What is the proper value for a leak resistor? The manual for your
input device may have a recommendation. In general, if the value is too
low, you lose the advantage of a differential input because the input is
tightly coupled to ground through the resistor. You may also overload
the source in the case where leak resistors are connected to both inputs.
If the value is too high, additional dc error voltages may arise due to
input offset current drift (the input bias currents on the two inputs dif-
fer and may vary with temperature). A safe value is generally in the
range of 1 to 100 kΩ. If the source is truly floating, such as battery-
powered equipment, then go ahead and directly ground one side.
[Figure: a switch contact pulled up to +5 V through a 10-kΩ resistor on the plug-in board drives a TTL digital input.]
Figure 12.15 Simple circuits you can use with digital outputs on MIO and DIO
series boards. Drive low-current loads like LEDs directly (left). MOSFETs are
available in a wide range of current and voltage ratings for driving heavy dc
loads (right).
Figure 12.16 EG&G Automotive Research (San Antonio, Texas) has the type of industrial
environment that makes effective use of SCXI signal conditioning products. (Photo cour-
tesy of National Instruments and EG&G Automotive Research.)
Figure 12.17 The Analog Devices 6B series, available through National Instruments,
provides distributed I/O capability with serial communications.
Network everything!
Ethernet connections are just about everywhere these days, including
wireless links. There are many ways that you can take advantage of
these connections when configuring a distributed measurement and
control system, courtesy of National Instruments. Here are a few con-
figurations and technologies to consider.
■ Data can be published via DataSockets, a cross-platform technol-
ogy supported by LabVIEW that lets you send any kind of data to
another LabVIEW-based system or to a Web browser. It’s as easy as
writing data to a file, except it is being spun out on the Web. And it’s
as easy to access as a file, too.
■ Remote data acquisition (RDA) is an extension of the NI DAQ
driver for Windows where a DAQ board plugged into one computer
is accessible in real time over the network on another computer.
Bibliography
Beckwith, Thomas G., and R. D. Marangoni, Mechanical Measurements, Addison-Wesley, Reading, Massachusetts, 1990. (ISBN 0-201-17866-4)
Gunn, Ronald, “Designing System Grounds and Signal Returns,” Control Engineering, May 1987.
Lancaster, Donald, Active Filter Cookbook, Howard W. Sams & Co., Indianapolis, 1975. (ISBN 0-672-21168-8)
Lipshitz, Stanley P., R. A. Wannamaker, and J. Vanderkooy, “Quantization and Dither: A Theoretical Survey,” J. Audio Eng. Soc., 40(5):355–375 (1992).
Morrison, Ralph, Grounding and Shielding Techniques in Instrumentation, Wiley-Interscience, New York, 1986.
Norton, Harry R., Electronic Analysis Instruments, Prentice-Hall, Englewood Cliffs, New Jersey, 1992. (ISBN 0-13-249426-4)
Omega Engineering, Inc., Temperature Handbook, Stamford, Connecticut, 2000. (Available free by calling 203-359-1660 or 800-222-2665.)
Ott, Henry W., Noise Reduction Techniques in Electronic Systems, John Wiley & Sons, New York, 1988. (ISBN 0-471-85068-3)
Pallas-Areny, Ramon, and J. G. Webster, Sensors and Signal Conditioning, John Wiley & Sons, New York, 1991. (ISBN 0-471-54565-1)
Qian, Shie, and Dapang Chen, Joint Time-Frequency Analysis—Methods and Applications, Prentice-Hall, Englewood Cliffs, New Jersey, 1996. (ISBN 0-13-254384-2. Call Prentice-Hall at 800-947-7700 or 201-767-4990.)
Sheingold, Daniel H., Analog-Digital Conversion Handbook, Prentice-Hall, Englewood Cliffs, New Jersey, 1986. (ISBN 0-13-032848-0)
Steer, Robert W., Jr., “Anti-aliasing Filters Reduce Errors in ADC Converters,” EDN, March 30, 1989.
3M Specialty Optical Fibers, Fiber Optic Current Sensor Module (product information and application note), West Haven, Connecticut, (203) 934-7961.
White, Donald R. J., Shielding Design Methodology and Procedures, Interference Control Technologies, Gainesville, Virginia, 1986. (ISBN 0-932263-26-7)
Chapter 13
Sampling Signals
Up until now, we’ve been discussing the real (mostly analog) world of
signals. Now it’s time to digitize those signals for use in LabVIEW. By
definition, analog signals are continuous-time, continuous-value
functions. That means they can take on any possible value and are
defined over all possible time resolutions. (By the way, don’t think that
digital pulses are special; they’re just analog signals that happen to be
square waves. If you look closely, they have all kinds of ringing, noise,
and slew rate limits—all the characteristics of analog signals.)
An analog-to-digital converter (ADC) samples your analog signals
on a regular basis and converts the amplitude at each sample time to
a digital value with finite resolution. These are termed discrete-time,
discrete-value functions. Unlike their analog counterparts, discrete
functions are defined only at times specified by the sample interval
and may only have values determined by the resolution of the ADC. In
other words, when you digitize an analog signal, you have to approxi-
mate. How much you can throw out depends on your signal and your
specifications for data analysis. Is 1 percent resolution acceptable? Or
is 0.0001 percent required? And how fine does the temporal resolution
need to be? One second? Or 1 ns? Please be realistic. Additional ampli-
tude and temporal resolution can be expensive. To answer these ques-
tions, we need to look at this business of sampling more closely.
Sampling Theorem
A fundamental rule of sampled data systems is that the input signal
must be sampled at a rate greater than twice the highest-frequency
component in the signal. This is known as the Shannon sampling
theorem, and the critical sampling rate is called the Nyquist rate.
Figure 13.1 Graphical display of the effects of sampling rates. When the
original 1-kHz sine wave is sampled at 1.2 kHz (too slow), it is totally unrec-
ognizable in the data samples. Sampling at 5.5 kHz yields a much better rep-
resentation. What would happen if there was a lot of really high-frequency
noise?
Stated as a formula, it says that fs /2 > fa, where fs is the sampling fre-
quency and fa is the maximum frequency of the signal being sampled.
Violating the Nyquist criterion is called undersampling and results in
aliasing. Look at Figure 13.1 which simulates a sampled data system.
I started out with a simple 1-kHz sine wave (dotted lines), and then I
sampled it at two different frequencies, 1.2 and 5.5 kHz. At 5.5 kHz, the
signal is safely below the Nyquist rate, which would be 2.75 kHz, and
the data points look something like the original (with a little bit of infor-
mation thrown out, of course). But the data with a 1.2-kHz sampling
rate is aliased: It looks as if the signal is 200 Hz, not 1 kHz. This effect
is also called frequency foldback: Everything above fs/2 is folded back
into the sub-fs/2 range. If you undersample your signal and get stuck
with aliasing in your data, can you undo the aliasing? In most cases,
no. As a rule, you should not undersample if you hope to make sense
of waveform data. Exceptions do occur in certain controlled situations.
An example is equivalent-time sampling in digital oscilloscopes, in
which a repetitive waveform is sampled at a low rate, but with careful
control of the sampling delay with respect to a precise trigger.
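You can compute where an undersampled tone will appear with a one-line fold-back calculation. This Python sketch reproduces the numbers from Figure 13.1:

def alias_frequency(f, fs):
    """Apparent frequency of a tone at f after sampling at fs."""
    return abs(f - fs * round(f / fs))

print(alias_frequency(1000, 5500))   # 1000: below fs/2 (2750 Hz), so not aliased
print(alias_frequency(1000, 1200))   # 200: the 1-kHz tone folds back to 200 Hz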
Let’s go a step further and consider a nice 1-kHz sine wave, but this
time we add some high-frequency noise to it. We already know that a
5.5-kHz sample rate will represent the sine wave all right, but any noise
that is beyond fs/2 (2.75 kHz) will alias. Is this a disaster? That depends
on the power spectrum (amplitude squared versus frequency) of the
noise or interfering signal. Say that the noise is very, very small in
amplitude—much less than the resolution of your ADC. In that case
it will be undetectable, even though it violates the Nyquist criterion.
Figure 13.2 Power spectrum of a 1-kHz sine wave with low-pass-filtered noise added. If the ADC
resolves 16 bits, its spectral noise floor is roughly 115 dB below full scale. Assuming that we
sample at 5.5 kHz, any energy appearing above fs /2 (2.75 kHz) and above that noise floor will be aliased.
on the ADC board, or in all three places. One problem with analog fil-
ters is that they can become very complex and expensive. If the desired
signal is fairly close to the Nyquist limit, the filter needs to cut off very
quickly, implying lots of stages (this is more formally known as the
order of the filter’s transfer function). High-performance antialias-
ing filters, such as the SCXI-1141, have such high-order designs and
meet the requirements of most situations.
Digital filters can augment, but not replace, analog filters. Digital
filter VIs are included with the LabVIEW analysis library, and they are
functionally equivalent to analog filters. The simplest type of digital
filter is a moving averager (examples of which are available with
LabVIEW) which has the advantage of being usable in real time on
a sample-by-sample basis. One way to simplify the antialiasing filter
problem is to oversample the input. If your ADC hardware is fast
enough, just turn the sampling rate way up, and then use a digital
filter to eliminate the higher frequencies that are of no interest. This
makes the analog filtering problem much simpler because the Nyquist
frequency has been raised much higher, so the analog filter doesn’t
have to be so sharp. A compromise is always necessary: You need to
sample at a rate high enough to avoid significant aliasing with a mod-
est analog filter; but sampling at too high a rate may not be practical
because the hardware is too expensive and/or the flood of extra data
may overload your poor CPU.
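Here is a small Python/NumPy sketch of the oversample-then-filter idea, using a 50-tap moving average as the digital filter; the rates and amplitudes are arbitrary illustration values.

import numpy as np

fs = 50000                                          # oversampled rate, Hz
t = np.arange(fs) / fs                              # one second of data
clean = np.sin(2 * np.pi * 50 * t)                  # 50-Hz signal of interest
noisy = clean + 0.5 * np.sin(2 * np.pi * 8000 * t)  # high-frequency interference

n = 50                                              # 50-tap moving average
smoothed = np.convolve(noisy, np.ones(n) / n, mode="same")
decimated = smoothed[::n]                           # keep every 50th point: 1-kHz rate
print(noisy.std(), smoothed.std(), decimated.size)  # the 8-kHz component is removed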
A potential problem with averaging arises when you handle non-
linear data. The process of averaging is defined to be the summation
of several values, divided by the number of values. If your data is, for
instance, exponential, then averaging values (a linear operation) will
tend to bias the data. (Consider the fact that e^x + e^y is not equal to e^(x + y).)
One solution is to linearize the data before averaging. In the case of
exponential data, you should take the logarithm first. You may also
be able to ignore this problem if the values are closely spaced—small
pieces of a curve are effectively linear. It’s vital that you understand
your signals qualitatively and quantitatively before you apply any
numerical processing, no matter how innocuous it may seem.
If your main concern is the rejection of 60-Hz line frequency interfer-
ence, an old trick is to average an array of samples over one line period
(16.66 ms in the United States). For instance, you could acquire data at
600 Hz and average groups of 10, 20, 30, and so on, up to 600 samples.
You should do this for every channel. Using plug-in boards with Lab-
VIEW’s data acquisition drivers permits you to adjust the sampling
interval with high precision, making this a reasonable option. Set up
a simple experiment to acquire and average data from a noisy input.
Vary the sampling period, and see if there isn’t a null in the noise level
at each 16.66-ms multiple.
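A minimal NumPy version of that experiment, assuming a steady dc signal with 60-Hz pickup, shows the null: each group of 10 samples taken at 600 Hz spans exactly one line cycle, so the hum integrates away.

import numpy as np

fs = 600.0
t = np.arange(6000) / fs                             # ten seconds at 600 Hz
samples = 2.0 + 0.5 * np.sin(2 * np.pi * 60 * t)     # slow 2-V signal plus 60-Hz hum

per_period = samples.reshape(-1, 10).mean(axis=1)    # each group spans one 60-Hz cycle
print(samples.std(), per_period.std())               # the hum averages to essentially zero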
Figure 13.3 A sine wave and its representation by a 3-bit ADC sampling every 5 ms.
Range refers to the maximum and minimum voltage levels that the
ADC can quantize. Exceeding the input range results in what is vari-
ously termed clipping, saturation, or overflow/underflow, where the
ADC gets stuck at its largest or smallest output code. The code width
of an ADC is defined as the change of voltage between two adjacent
quantization levels or, as a formula,
Code width = range / 2^N
where N is the number of bits and code width and range are mea-
sured in volts. A high-resolution converter (lots of bits) has a small code
width. The intrinsic range, resolution, and code width of an ADC can
be modified by preceding it with an amplifier that adds gain. The code
width expression then becomes
Code width = range / (gain × 2^N)
High gain thus narrows the code width and enhances the resolution
while reducing the effective range. For instance, a common 12-bit ADC
with a range of 0 to 10 V has a code width of 2.44 mV. By adding a gain
of 100, the code width becomes 24.4 µV, but the effective range becomes
10/100 = 0.1 V. It is important to note the tradeoff between resolution
and range when you change the gain. High gain means that overflow
may occur at a much lower voltage.
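The numbers quoted above fall straight out of the formula; a short Python check:

def code_width(volt_range, bits, gain=1.0):
    return volt_range / (gain * 2 ** bits)

print(code_width(10.0, 12))             # 0.00244... V, i.e., 2.44 mV
print(code_width(10.0, 12, gain=100))   # about 24.4 uV, but the effective range shrinks to 0.1 V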
Conversion speed is determined by the technology used in design-
ing the ADC and associated components, particularly the sample-and-
hold (S/H) amplifier that freezes the analog signal just long enough to
do the conversion. Speed is measured in time per conversion or samples
per second.
[Figure 13.4: dynamic range or resolution (in bits, digits, and dB) plotted against conversion speed, comparing A/D components (1995) with traditional instruments (2000).]
Figure 13.4 compares the typical resolutions and conversion
speeds of plug-in ADC boards and traditional instruments. A very com-
mon tradeoff is resolution versus speed; it simply takes more time or is
more costly to precisely determine the exact voltage. In fact, high-speed
ADCs, such as flash converters that are often used in digital oscilloscopes,
decrease in effective resolution as the conversion rate is increased. Your
application determines what conversion speed is required.
There are many sources of error in ADCs, some of which are a little
hard to quantify; in fact, if you look at the specification sheets for ADCs
from different manufacturers, you may not be able to directly compare
the error magnitudes because of the varying techniques the manufac-
turers may have used.
Simple errors are gain and offset errors. An ideal ADC follows the
equation for a straight line, y = mx + b, where y is the input voltage, x is
the output code, and m is the code width. Gain errors change the slope
m of this equation, which is a change in the code width. Offset errors
change the intercept b, which means that 0-V input doesn’t give you
zero counts at the output. These errors are easily corrected through
calibration. Either you can adjust some trimmer potentiometers so
that zero and full-scale match a calibration standard, or you can make
Figure 13.5 Demonstration of skew in an ADC with a multiplexer.
Ideally, there would be zero switching time between channels on the
multiplexer; this one has 4 µs. Since the inputs are all the same sig-
nal, the plotted data shows an apparent phase shift.
you must wait for these aberrations to decay away. Settling time is the
amount of time required for the output voltage to begin tracking the
input voltage within a specified error band after a change of channels
has occurred. It is clearly specified on all ADC system data sheets. You
should not attempt to acquire data faster than the rate determined by
the settling time plus the ADC conversion time.
Another unexpected source of input error is sometimes referred to
as charge pump-out. When a multiplexer switches from one channel
to the next, the input capacitance of the multiplexer (and the next cir-
cuit element, such as the sample-and-hold) must charge or discharge to
match the voltage of the new input signal. The result is a small glitch
induced on the input signal, either positive or negative, depending
upon the relative magnitude of the voltage on the preceding channel.
If the signal lines are long, you may also see ringing. Charge pump-out
effects add to the settling time in an unpredictable manner, and they
may cause momentary overloading or gross errors in high-impedance
sources. This is another reason to use signal conditioning; an input
amplifier provides a buffering action to reduce the glitches.
Many systems precede the ADC with a programmable-gain instru-
mentation amplifier (PGIA). Under software control, you can change
the gain to suit the amplitude of the signal on each channel. A true
instrumentation amplifier with its inherent differential connections is
the predominant type. You may have options, through software regis-
ters or hardware jumpers, to defeat the differential mode or use various
signal grounding schemes, as on the National Instruments multifunc-
tion boards. Study the available configurations and find the one best
suited to your application. The one downside to having a PGIA in the
signal chain is that it invariably adds some error to the acquisition
process in the form of offset voltage drift, gain inaccuracy, noise, and
bandwidth. These errors are at their worst at high-gain settings, so
study the specifications carefully. An old axiom in analog design is that
high gain and high speed are difficult to obtain simultaneously.
Digital-to-analog converters
a clean, crisp step will be produced. In actuality, the output may over-
shoot and ring for awhile or may take a more leisurely, underdamped
approach to the final value. This represents the settling time. If you
are generating high-frequency signals (audio or above), you need
faster settling times and slew rates. If you are controlling the current
delivered to a heating element, these specifications probably aren’t
much of a concern.
When a DAC is used for waveform generation, it must be followed
by a low-pass filter, called a reconstruction filter, that performs an
antialiasing function in reverse. Each time the digital-to-analog (D/A)
output is updated (at an interval determined by the time base), a step
in output voltage is produced. This step contains a theoretically infinite
number of harmonic frequencies. For high-quality waveforms, this out-
of-band energy must be filtered out by the reconstruction filter. DACs
for audio and dynamic signal applications, such as National Instru-
ments’ 4450 series dynamic signal I/O boards, include such filters and
have a spectrally pure output. Ordinary data acquisition boards gener-
ally have no such filtering and will produce lots of spurious energy. If
spectral purity and transient fidelity are important in your applica-
tion, be mindful of this fact.
Digital codes
The pattern of bits—the digital word—used to exchange information
with an ADC or DAC may have one of several coding schemes, some
of which aren’t intuitive. If you ever have to deal directly with the I/O
hardware (especially in lower-level driver programs), you will need to
study these schemes. If the converter is set up for unipolar inputs (all-
positive or all-negative analog voltages), the binary coding is straight-
forward, as in Table 13.1. But to represent both polarities of numbers
for a bipolar converter, a sign bit is needed to indicate the signal’s
TABLE 13.1 Straight Binary Coding Scheme for Unipolar, 3-Bit Converter

Decimal equivalent    Fraction of full-scale (positive)    Fraction of full-scale (negative)    Straight binary
7                     7/8                                  –7/8                                 111
6                     6/8                                  –6/8                                 110
5                     5/8                                  –5/8                                 101
4                     4/8                                  –4/8                                 100
3                     3/8                                  –3/8                                 011
2                     2/8                                  –2/8                                 010
1                     1/8                                  –1/8                                 001
0                     0/8                                  –0/8                                 000
TABLE 13.2 Some Commonly Used Coding Schemes for Bipolar Converters, a 4-Bit Example

Decimal equivalent    Fraction of full-scale    Sign and magnitude    Two’s complement    Offset binary
7                     7/8                       0111                  0111                1111
6                     6/8                       0110                  0110                1110
5                     5/8                       0101                  0101                1101
4                     4/8                       0100                  0100                1100
3                     3/8                       0011                  0011                1011
2                     2/8                       0010                  0010                1010
1                     1/8                       0001                  0001                1001
0                     0+                        0000                  0000                1000
0                     0–                        1000                  0000                1000
–1                    –1/8                      1001                  1111                0111
–2                    –2/8                      1010                  1110                0110
–3                    –3/8                      1011                  1101                0101
–4                    –4/8                      1100                  1100                0100
–5                    –5/8                      1101                  1011                0011
–6                    –6/8                      1110                  1010                0010
–7                    –7/8                      1111                  1001                0001
–8                    –8/8                      Not represented       1000                0000

NOTE: The sign and magnitude scheme has two representations for zero, but can’t represent –8. Also, the only difference between offset binary and two’s complement is the polarity of the sign bit.
polarity. The bipolar coding schemes shown in Table 13.2 are widely
used. Each has advantages, depending on the application.
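The note under Table 13.2 is easy to verify in code: offset binary is two's complement with the sign bit inverted. A small Python sketch for the 4-bit case:

def twos_complement(value, bits=4):
    return value & ((1 << bits) - 1)          # wraps negatives into a 4-bit code

def offset_binary(value, bits=4):
    return twos_complement(value, bits) ^ (1 << (bits - 1))   # flip the sign bit

for v in (7, 1, 0, -1, -8):
    print(v, format(twos_complement(v), "04b"), format(offset_binary(v), "04b"))
# 7 -> 0111 1111,  1 -> 0001 1001,  0 -> 0000 1000,  -1 -> 1111 0111,  -8 -> 1000 0000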
between two quantization levels of the A/D system and there is some
dither noise present, the least-significant bit (LSB) tends to toggle
among a few codes. For instance, the duty cycle of this toggling action
is exactly 50 percent if the voltage is exactly between the two quanti-
zation levels. Duty cycle and input voltage track each other in a nice,
proportional manner (except if the converter demonstrates some kind
of nonlinear behavior). All you have to do is to filter out the noise, which
can be accomplished by averaging or other forms of digital filtering.
A source of uncorrelated dither noise, about 1 LSB peak to peak or
greater, is required to make this technique work. Some high-performance
ADC and DAC systems include dither noise generators; digital audio
systems and the National Instruments dynamic signal acquisition
boards are examples. High-resolution converters (16 bits and greater)
generally have enough thermal noise present to supply the necessary
dithering. Incidentally, this resolution enhancement occurs even if you
don’t apply a filter; filtering simply reduces the noise level.
Figure 13.6 demonstrates the effect of dither noise on the quantiza-
tion of a slow, low-amplitude ramp signal. To make this realistic, say
that the total change in voltage is only about 4 LSBs over a period
of 10 s (Figure 13.6A). The vertical axis is scaled in LSBs for clarity.
Figure 13.6 Graphs in (A) and (B) represent digitization of an ideal analog ramp of 4-LSB
amplitude which results in objectionable quantization steps. Adding 1-LSB peak-peak
dither noise and low-pass filtering [graphs in (C) through (E)] improves results.
The sampling rate is 20 Hz. In Figure 13.6B, you can see the coarse
quantization steps expected from an ideal noise-free ADC. Much imagi-
nation is required to see a smooth ramp in this graph. In Figure 13.6C,
dither noise with an amplitude of 1.5 LSB peak to peak has been added
to the analog ramp. Figure 13.6D is the raw digitized version of this
noisy ramp. Contrast this with Figure 13.6B, the no-noise case with its
coarse steps. To eliminate the random noise, we applied a simple boxcar
filter where every 10 samples (0.5 s worth of data) is averaged into a
single value; more elaborate digital filtering might improve the result.
Figure 13.6E is the recovered signal. Clearly, this is an improvement
over the ideal, noiseless case, and it is very easy to implement in Lab-
VIEW. In Chapter 14, “Writing a Data Acquisition Program,” we’ll show
you how to oversample and average to improve your measurements.
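If you want to reproduce the experiment numerically, the following NumPy sketch follows the same recipe (a 4-LSB ramp, roughly 1.5-LSB peak-to-peak uniform dither, and a 10-sample boxcar average); the random seed and dither distribution are our own choices.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200) / 20.0                        # 10 s sampled at 20 Hz
ramp = 4.0 * t / 10.0                            # 0 to 4 LSBs over 10 s

quiet = np.round(ramp)                           # ideal, noise-free quantization steps
dither = rng.uniform(-0.75, 0.75, t.size)        # about 1.5 LSB peak-to-peak
noisy = np.round(ramp + dither)                  # quantization of the dithered signal
recovered = noisy.reshape(-1, 10).mean(axis=1)   # boxcar average every 0.5 s

ideal = ramp.reshape(-1, 10).mean(axis=1)
print(np.abs(quiet - ramp).max())                # about 0.5 LSB of staircase error
print(np.abs(recovered - ideal).max())           # typically much smaller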
Low-frequency analog signals give you some opportunities to further
improve the quality of your acquired data. At first glance, that ther-
mocouple signal with a sub-1-Hz bandwidth and little noise could be
adequately sampled at 2 or 3 Hz. But by oversampling—sampling at
a rate several times higher than the minimum specified by the Nyquist
rate—you can enhance resolution and noise rejection.
Noise is reduced in proportion to the square root of the number of
samples that are averaged. For example, if you average 100 samples,
the standard deviation of the average value will be reduced by a factor
of 10 when compared to a single measurement. Another way of express-
ing this result is that you get a 20-dB improvement in signal-to-noise
ratio when you average 100 times as many samples. This condition
is true as long as the A/D converter has good linearity and a small
amount of dither noise. This same improvement occurs with repetitive
signals, such as our 1-kHz sine wave with dither noise. If you syn-
chronize your ADC with the waveform by triggering, you can average
several waveforms. Since the noise is not correlated with the signal,
the noise once again averages to zero according to the square root rule.
How you do this in LabVIEW is discussed in Chapter 14, “Writing a
Data Acquisition Program.”
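The square-root rule is easy to confirm with a quick simulation; with 100 samples per average, the standard deviation drops by about a factor of 10:

import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, size=(10000, 100))   # 10,000 trials, 100 samples each
averages = noise.mean(axis=1)
print(noise.std(), averages.std())                # roughly 1.0 versus 0.1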
Throughput
A final consideration in your choice of converters is the throughput
of the system, a yardstick for overall performance, usually measured
in samples per second. Major factors determining throughput are as
follows:
■ A/D or D/A conversion speed
■ Use of multiplexers and amplifiers, which may add delays between
channels
The glossy brochure or data sheet you get with your I/O hardware
rarely addresses these very real system-oriented limitations. Maxi-
mum performance is achieved when the controlling program is written
in assembly language, one channel is being sampled with the amplifier
gain at minimum, and data is being stored in memory with no analy-
sis or display of any kind. Your application will always be somewhat
removed from this particular benchmark.
Practical disk systems have many throughput limitations. The disk
itself takes a while to move the recording heads around and can only
transfer so many bytes per second. The file system and any data conver-
sion you have to do in LabVIEW are added as overhead. If everything
is working well, a stream-to-disk LabVIEW VI will continuously run in
excess of 6 Mbytes/s. For really fast I/O, you simply have to live within
the limits of available memory. But memory is cheap these days, and
the performance is much better than that of any disk system. If you
really need 1 Gbyte of RAM to perform your experiment, then don’t fool
around, just buy it. Tell the purchasing manager that Gary said so.
Double-buffered DMA for acquisition and waveform generation is
built into the LabVIEW support for many I/O boards and offers many
advantages in speed because the hardware does all the real-time work
of transferring data from the I/O to main memory. Your program can per-
form analysis, display, and archiving tasks while the I/O is in progress
(you can’t do too much processing though). Refer to your LabVIEW data
acquisition VI library reference manual for details on this technique.
Performance without DMA is quite limited. Each time you send a
command to a plug-in board, the command is processed by the NI DAQ
driver, which in turn must get permission from the operating system to
perform an I/O operation. There is a great deal of overhead in this pro-
cess, typically on the order of a few hundred microseconds. That means
you can read one sample at a time from an input at a few kilohertz
without DMA. Clearly, this technique is not very efficient and should
be used only for infrequent I/O operations.
If you need to do very much on-the-fly analysis, adding a DSP board
can augment the power of your computer’s CPU by off-loading tasks
such as FFT computations. Using a DSP board as a general-purpose
computer is another story—actually another book! Programming such
a machine to orchestrate data transfers, do control algorithms, and so
on generally requires programming in C or assembly language, using
the support tools for your particular DSP board. If you are an experi-
enced programmer, this is a high-performance alternative.
Access to external DSP horsepower without low-level programming
is available from Sheldon Instruments with its LabVIEW add-on
product called QuVIEW. It works with Sheldon Instruments’ PCI-
based DSP boards that feature AT&T’s 32C or TI’s C3x processors
coupled to multifunction I/O modules. With QuVIEW, you use a col-
lection of VIs that automatically download code to the DSP memory
for fast execution independent of the host PC. It’s especially good for
continuous algorithms such as signal generation, filtering, FFTs, and
streaming data to disk. If the extensive library of built-in functions
isn’t enough, you can create your own external code, using C compil-
ers or assemblers from TI or Tartan, or graphical DSP code generators
from MathWorks or Hyperception. In either case, your external code is
neatly encapsulated in familiar VI wrappers for easy integration into
a LabVIEW application.
14
Chapter
Writing a Data Acquisition Program
1. Define and understand the problem; define the signals and deter-
mine what the data analysis needs are.
2. Specify the type of I/O hardware you will need, and then determine
sample rates and total throughput.
3. Prototype the user interface and decide how to manage
configurations.
4. Design and then write the program.
Figure 14.1 A generic data acquisition program includes the functions shown
here.
Ask whoever will use the data to explain, in detail, how the data will be analyzed. Make sure he or she understands the implications of storing the megabytes or gigabytes of data
that an automated data acquisition system may collect. If there is a
collective shrug of shoulders, ask them point-blank, “. . . Then why are
we collecting data at all?” Do not write your data acquisition program
until you understand the analysis requirements.
Finally, you can get started. Divide the analysis job into real-time
and postrun tasks, and determine how each aspect will affect your
program.
Postrun analysis
You can analyze data with LabVIEW, another application, or a custom
program written in some other language. Sometimes, more than one
analysis program will have to read the same data file. In all cases, you
need to decide on a suitable data file format that your data acquisi-
tion program has to write. The file type (typically text or binary), the
structure of the data, and the inclusion of timing and configuration
information are all important. If other people are involved in the analy-
sis process, get them involved early in the design process. Write down
clear file format specifications. Plan to generate sample data files and
do plenty of testing so as to assure everyone that the real data will
transfer without problems.
It’s a good idea to structure your program so that a single data
saver VI is responsible for writing a given data file type. Raw data and
configuration information go in, and data files go out. You can easily
test this data saver module as a stand-alone VI or call it from a test
program before your final application is completed. The result is a mod-
ule with a clear purpose that is reliable and reusable. LabVIEW can
read and write any file format (refer to Chapter 7, “Files,” for a general
discussion of file I/O). Which data format you use depends on the pro-
gram that has to read it.
Datalog file format. If you plan to analyze data only in LabVIEW, the
easiest and most compact format is the datalog file, discussed in
detail in Chapter 7. A datalog file contains a sequence of binary data
records. All records in a given file are of the same type, but a record
can be a complex data structure, for instance, a cluster containing
strings and arrays. The record type is determined when you create the
file. You can read records one at a time in a random-access fashion or
read several at once, in which case they are returned as an array. This
gives your analysis program the ability to use the data file as a simple
database, searching for desired records based on one or more key fields
in each record, such as a timestamp.
The disadvantage of datalog format files is that they can only be read
by LabVIEW or by a custom-written program. However, you can eas-
ily write a translator in LabVIEW that reads your datalog format and
writes out files with another format. Another hazard (common to all
binary file formats) is that you must know the data type used when the
file was written; otherwise, you may never be able to decipher the file.
You might be able to use the automatic front panel datalogging fea-
tures of LabVIEW. They are very easy to use. All you have to do is to
turn on the datalogging by using the Datalogging submenu in the
Operate menu for a subVI that displays the data you wish to save.
Every time that the subVI finishes executing (even if its front panel
is not displayed), the front panel data are appended to the current log
file. The first time the subVI is called, you will receive a dialog ask-
ing for a new datalog file. You can also open the subVI and change log
files through the Datalogging menu. To access logged data, you can
use the file I/O functions, or you can place the subVI of interest in a
new diagram and choose Enable Database Access from its pop-up
menu. You can then read the datalog records one at a time. All the front
panel controls are available—they are in a cluster that is conveniently
accessed by Unbundle By Name. But for maximum file performance,
there’s nothing better than wiring the file I/O functions directly into
your diagram. Open the datalog file at the start of the experiment, and
don’t close it until you’re done.
Figure 14.2 The Datalog File Handler VI encapsulates datalog file I/O operations into
a single, integrated VI.
ASCII text format. Good old ASCII text files are your best bet for por-
table data files. Almost every application can load data from a text
file that has simple formatting. The ubiquitous tab-delimited text
format is a likely choice. Format your data values as strings with a tab
character between each value, and place a carriage return at the end of
the line; then write it out. The only other thing you need to determine
is the type of header information. Simple graphing and spreadsheet
applications are happy with column names as the first line in the file:
Figure 14.3 This simple datalogger example writes a text header and then tab-delimited data.
The String List Converter VI initializes the Channels control with all the channel names.
Anything fancier may confuse the application that will be reading your file. Figure 14.3 shows a simple datalogger that writes
a text header, followed by tab-delimited text data. It uses the easy-
level file VIs for simplicity. We wrote a little utility VI, String List
Converter, to make it easier for the user to enter channel names. The
names are typed into a string control, separated by carriage returns.
String List Converter translates the list into a set of tab-delimited
names for the file header and into a string array that you can wire to
the Strings[] attribute for a ring control, as shown.
The disadvantages of ASCII text files are that they are bulkier than
binary files, and they take much longer to read and write (often sev-
eral hundred times longer) because each value has to be converted to
and from strings of characters. For high-speed data-recording applica-
tions, text files are out of the question. You might be able to store a few
thousand samples per second as text on a fast computer, but be sure to
benchmark carefully before committing yourself to text files.
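As a rough text-language sketch of the same layout (Python here purely for illustration; the channel names and values are invented), writing a header line followed by tab-delimited rows looks like this:

    # Sketch: write a one-line header, then tab-delimited rows of data.
    # Channel names and sample values are hypothetical.
    channels = ["Time", "TC-01", "TC-02", "Pressure"]
    rows = [
        [0.0, 23.1, 24.0, 101.2],
        [1.0, 23.4, 24.1, 101.3],
    ]
    with open("run_001.txt", "w") as f:
        f.write("\t".join(channels) + "\n")                 # column names first
        for row in rows:
            f.write("\t".join(format(v, "g") for v in row) + "\n")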
Custom binary formats. LabVIEW can write files with arbitrary binary
formats to suit other applications. If you can handle the requisite pro-
gramming, binary files are really worthwhile—high on performance and
very compact. It’s also nice to open a binary file with an analysis pro-
gram and have it load without any special translation. Keep in mind the
fact that LabVIEW datalog files are also binary format (and fast, too),
but are significantly easier to use, at least within LabVIEW. If you don’t
really need a custom binary format, stick with datalogs for simplicity.
Figure 14.4 This VI creates a file suitable for direct loading with Wavemetrics’
Igor.
Macintosh users can exchange data via AppleEvents; there are exam-
ple VIs that read and write data. Windows users can implement DDE
or ActiveX connections. See the LabVIEW examples. Using ActiveX,
you can make Excel do just about anything, from creating and saving
workbooks to executing macros and reading or writing data. Again, the
LabVIEW examples will get you started.
The data analysis program should then be able to reconstruct the time
base from this simple scheme.
A technique used in really fast diagnostics is to add a timing fidu-
cial pulse to one or more data channels. Also known as a fid, this pulse
occurs at some critical time during the experiment and is recorded on
all systems (and maybe on all channels as well). It’s much the same
as a commander telling a room full of soldiers to “synchronize watches.” For example, when you are testing explosives, a fiducial pulse
is distributed to all the diagnostic systems just before detonation. For
analog data channels, the fiducial pulse can be coupled to each chan-
nel through a small capacitor, creating a small glitch in the data at the
critical moment. You can even synchronize nonelectronic systems by
generating a suitable stimulus, such as flashing a strobe in front of a
movie or video camera. Fiducial pulses are worth considering any time
you need absolute synchronization among disparate systems.
If you need accurate time-of-day information, be sure to reset the
computer clock before the experiment begins. Personal computer clocks
are notorious for their long-term drift. If you are connected to the Inter-
net, you can install a utility that will automatically reset your system
clock to a standard time server. For Windows, there are several public
domain utilities, such as WNSTIME (available from sunsite.unc.edu),
that you can install. For Macintosh, you can use the Network Time
Server option in the Date and Time Control Panel. These utilities even
take into consideration the time zone and the daylight savings time
settings for your machine. The only other trick is to find a suitable
Network Time Protocol server. One we’ve had good luck with is NASA’s
norad.arc.nasa.gov server.
In Chapter 5, “Timing,” we list a host of precision timing devices that
you might consider, such as GPS and IRIG standards. Such hardware
can provide absolute timing standards with accuracy as good as a few
hundred nanoseconds. That’s about as good as it gets.
A binary file header for each data file might include the following items:
■ Experiment identification
■ Channel name
■ Creation time and date
■ Data type (single- or double-precision)
■ Data file version (to avoid incompatibility with future versions)
■ Number of data points
■ The y-axis and x-axis units
■ The y-axis and x-axis scale factors
■ Flag to indicate whether user comments follow the data segment
As always, the format of this binary file header has to be specified exactly, on a byte-by-byte basis. You need to make sure that each item is of
the proper data type and length before writing the header out to the file.
An effective way to do this is to assemble all the items into a cluster,
Type Cast it (see Chapter 4, “LabVIEW Data Types”) to a string, and
then write it out, using byte stream mode with the Write File function.
Alternatively, the header can be text with flag characters or end-of-line
characters to help the reading application parse the information.
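For readers more comfortable seeing the idea in a conventional language, here is a minimal Python sketch of the same byte-by-byte approach; the field layout shown is a hypothetical example, not a standard format:

    import struct, time

    # Hypothetical fixed-layout header: every field has an exact size and order,
    # much as the cluster-to-string Type Cast produces in LabVIEW.
    HEADER_FMT = ">32s 16s d B H d d"   # big-endian: expt ID, channel name,
                                        # creation time, data type code,
                                        # file version, y scale, x scale
    header = struct.pack(
        HEADER_FMT,
        b"Run 42".ljust(32),            # experiment identification
        b"TC-01".ljust(16),             # channel name
        time.time(),                    # creation time and date
        1,                              # data type (1 = single precision, say)
        3,                              # data file version
        0.0125,                         # y-axis scale factor
        0.001,                          # x-axis scale factor
    )
    with open("run_042.bin", "wb") as f:
        f.write(header)                 # data samples would follow the header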
For text files, generating a header is as simple as writing out a series
of strings that have been formatted to contain the desired information.
Reading and decoding a text-format header, on the other hand, can be
quite challenging for any program. If you simply want an experimental
record or free-form notepad header for the purposes of documentation,
that’s no problem. But parsing information out of the header for pro-
grammatic use requires careful design of the header’s format. Many
graphing and analysis programs can do little more than read blocks of
text into a long string for display purposes; they have little or no capac-
ity for parsing the string. Spreadsheets (and of course programming
languages) can search for patterns, extract numbers from strings, and
so forth, but not if the format is poorly defined. Therefore, you need to
work on both ends of the data analysis problem—reading as well as
writing—to make sure that things will play together.
Another solution to this header problem is to use an index file that is
separate from the data file. The index file contains all the information necessary to load the data successfully, including pointers into the data file.
It can also contain configuration information. The data file can be binary
or ASCII format, containing only the data values. We’ve used this tech-
nique on several projects, and it adds some versatility. If the index file is
ASCII text, then you can print it out to see what’s in the data file. Also,
the data file may be more easily loaded into programs that would other-
wise choke on header information. You still have the problem of loading
the configuration, but at least the data can be loaded and the configura-
tion is safely stored on disk. One caution: Don’t lose one of the files!
The configuration file can be formatted for direct import into a
spreadsheet and used as a printable record of the experiment. This
turns out to be quite useful. Try to produce a complete description of
the hardware setup used for a given test, including module types, chan-
nel assignments, gain settings, and so forth. Here’s what a simple con-
figuration file might look like:
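As a hedged illustration (the tag names, modules, and settings below are invented), a simple tab-delimited configuration record might read:

    Experiment:  Pump loop test 17    Date:  2005-06-01
    Tag      Module      Channel  Gain  Units   Description
    PI-203   SCXI-1120   0        100   PSIG    Tank pressure
    TC-101   SCXI-1120   1        100   DegC    Inlet thermocouple
    FI-007   SCXI-1140   2        1     L/min   Coolant flow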
Using a real database. If your data management needs are more com-
plex than the usual single-file and single-spreadsheet scheme can han-
dle, consider using a commercial database application for management
of configurations, experimental data, and other important information.
The advantages of a database are the abilities to index, search, and sort data, with concurrent access from several locations and explicitly regulated access to the data, which enhances security and reliability.
The Enterprise Connectivity Toolkit (formerly the SQL Toolkit)
from National Instruments enables LabVIEW to directly communicate
with any Open Database Connectivity (ODBC)–compliant database
application using Structured Query Language (SQL) commands.
The toolkit is available for all versions of Windows as well as Macintosh
and is compatible with nearly all major databases. It was originally cre-
ated by Ellipsis Products and marketed as DatabaseVIEW.
The SQL Toolkit can directly access a database file on the local disk
using SQL commands to read or write information, or the database
can exist on a network—perhaps residing on a mainframe computer
or workstation. You begin by establishing a session, or connection to
Figure 14.5 The Enterprise Connectivity Toolkit connects LabVIEW to commercial databases.
This example inserts simulated weather data into a climate database. (Example courtesy of
Ellipsis Products, Inc.)
the target database(s), and then build SQL commands using LabVIEW
string functions. The commands are then executed by the SQL Toolkit
if you are connecting directly to a file or by a database application that
serves as a database engine. Multiple SQL transactions may be active
concurrently—an important feature, since it may take some time to
obtain results when you are using a complex SQL command to access
a very large database.
Figure 14.5 shows an example that simulates a year of weather in
Boston, Massachusetts, and inserts each day’s statistics into a dBASE
database. The dataflow is pretty clear. First, a connection is made to
the database by using the Connect VI for the data source DBV DEMO
DBASE. The first two frames of the sequence (not shown) create the
empty climate table in the database. Second, frame 2 begins by creat-
ing the year of simulated data. A dynamic SQL INSERT statement is
also created by using wildcard characters (?) to designate the values
that will be inserted later. In the For Loop, each field value is bound
to its associated parameter in the wildcard list, and then the Execute
Prepared SQL VI inserts data into the climate table. The dynamic
SQL method can speed inserts up to 4 times over the standard con-
vert, parse, and execute method. Third, the SQL statement reference is
discarded in the End SQL VI. When the loop is complete, the last frame
of the Sequence structure (not shown) disconnects from the database.
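The prepare-once, bind-many pattern is easy to see in text form. Here is a minimal sketch using Python's sqlite3 module as a stand-in for the SQL Toolkit VIs; the climate table and its columns are assumptions for illustration:

    import sqlite3

    con = sqlite3.connect("climate.db")            # the "connect" step
    con.execute("CREATE TABLE IF NOT EXISTS climate "
                "(day INTEGER, t_high REAL, t_low REAL, rainfall REAL)")

    # Prepare a dynamic INSERT with ? wildcards, then bind values in a loop,
    # rather than building and parsing a new SQL string for every record.
    sql = "INSERT INTO climate (day, t_high, t_low, rainfall) VALUES (?, ?, ?, ?)"
    records = [(day, 20.0 + day % 10, 5.0 + day % 7, 0.1 * (day % 3))
               for day in range(1, 366)]           # simulated year of weather
    con.executemany(sql, records)                  # Execute Prepared SQL, in effect

    con.commit()
    con.close()                                    # the "disconnect" step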
Figure 14.6 Statistics are easy to calculate by using the built-in Stan-
dard Deviation and Variance function when data is acquired as a single
shot (or buffer), in this case using the DAQmx Assistant.
Figure 14.7 Gary had to write a special function, Running Mean & Sigma, that
accumulated and calculated statistics during execution of a continuous data
acquisition process. Building an array for postrun calculations consumes much
memory.
You might note that there is a conversion function before the input ter-
minal to Standard Deviation. That’s because DAQmx uses the dynamic
data type, while most of the analysis library uses double-precision. The
result is extra memory management and some loss of efficiency and
speed, but it is currently unavoidable.
Continuous data. You can collect one sample per cycle of the While
Loop by calling AI Single Scan, as shown in Figure 14.7. If you want
to use the built-in Standard Deviation function, you have to put all
the samples into an array and wait until the While Loop finishes
running—not exactly a real-time computation. Or, you could build the
array one sample at a time in a shift register and call the Standard
Deviation function each time. That may seem OK, but the array grows
without limit until the loop stops—a waste of memory at best, or you
may cause LabVIEW to run out of memory altogether. The best solu-
tion is to create a different version of the mean and standard deviation
algorithm, one that uses an incremental calculation.
Gary wrote a function called Running Mean & Sigma that recom-
putes the statistics each time it is called by maintaining intermediate
computations in uninitialized shift registers (Figure 14.8). It is fast and
efficient, storing just three numbers in the shift registers. A Reset switch
sets the intermediate values to zero to clear the function’s memory.
The idea came right out of the user’s manual for his HP-45 calcula-
tor, proving that inspiration is wherever you find it. Algorithms for this
and hundreds of other problems are available in many textbooks and
in the popular Numerical Recipes series (Press 1990). You can use the
concept shown here for other continuous data analysis problems.
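In text form, the incremental bookkeeping of Figure 14.8 looks something like this; a minimal Python sketch (not the VI itself) that keeps only N, Σx, and Σx^2 between calls, just as the uninitialized shift registers do:

    from math import sqrt

    class RunningMeanSigma:
        """Keep only N, sum(x), and sum(x^2); no growing data array."""
        def __init__(self):
            self.reset()

        def reset(self):                    # the Reset switch
            self.n = 0
            self.sx = 0.0
            self.sx2 = 0.0

        def add(self, x):
            self.n += 1
            self.sx += x
            self.sx2 += x * x
            mean = self.sx / self.n
            if self.n > 1:
                sigma = sqrt(max(self.sx2 - self.sx ** 2 / self.n, 0.0)
                             / (self.n - 1))
            else:
                sigma = 0.0
            return mean, sigma

    stats = RunningMeanSigma()
    for sample in (1.2, 0.9, 1.4, 1.1):     # one call per acquired sample
        mean, sigma = stats.add(sample)

This direct sum-of-squares form mirrors the diagram; for very long runs or data with a large offset, a numerically safer update such as Welford's method may be preferable.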
Figure 14.8 Running Mean & Sigma calculates statistics on an incremental basis by storing the running count N, the sum Σx, and the sum of squares Σx^2 in uninitialized shift registers; the mean is Σx/N and the standard deviation is sqrt((Σx^2 - (Σx)^2/N)/(N - 1)). The Reset switch clears the registers.
National Instruments introduced another option for single-point data analysis called the Point-by-Point library. It contains nearly all the analysis functions including signal generation, time and frequency domain, probability and statistics, filters, windows, array operations, and linear algebra. Internally, each function uses the same basic techniques
we’ve just discussed for the Running Mean and Sigma VI. But there’s
some real numerical sophistication in there, too. Consider what it means
to do an FFT calculation on a point-by-point basis. A simple solution
would be to just save up a circular buffer of data and then call the regular
FFT function on each iteration. Instead, NI implements the mathemati-
cal definition of the FFT, which is the integral of a complex exponential.
This is, in fact, much faster, but makes sense only when performed on a
sliding buffer. As a benchmark, you’ll find that a 1024-point FFT runs
about 50 percent faster in the point-by-point mode. It’s an even larger dif-
ference when you’re processing a buffer that’s not a power of 2.
■ Figure out ways to reduce the amount of data used in the calcula-
tions. Decimation is a possibility.
■ Do the analysis postrun instead of in real time.
■ Get a faster computer.
■ Use a DSP coprocessor board.
can be sized to produce any number of output arrays. Or, you could
write a program that averages every n incoming values into a smaller
output array. Naturally, there is a performance price to pay with these
techniques; they involve some amount of computation or memory man-
agement. Because the output arrays are not the same size as the
input array, new memory buffers must be allocated, and that takes
time. But the payoff comes when you finally pass a smaller data array
to those very time-consuming analysis VIs.
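A rough sketch of the average-every-n-values idea, using NumPy purely for illustration:

    import numpy as np

    def block_average(data, n):
        """Average every n consecutive samples into one output value."""
        data = np.asarray(data, dtype=float)
        trimmed = data[: len(data) // n * n]     # drop any leftover samples
        return trimmed.reshape(-1, n).mean(axis=1)

    raw = np.random.normal(size=10_000)          # stand-in for acquired data
    reduced = block_average(raw, 50)             # 10,000 points become 200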
Figure 14.9 Limit the updating rates of graphics (or anything else) by using (A) the Interval
Timer VI or (B) modulo arithmetic with the Quotient and Remainder function.
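The modulo trick of Figure 14.9(B) translates directly to any language: update the display only when the loop count divides evenly by some factor. A hypothetical sketch:

    import random, time

    UPDATE_EVERY = 10                        # redraw the graph every 10th loop
    i = 0
    while i < 100:                           # stand-in for the acquisition loop
        sample = random.random()             # acquire one reading
        if i % UPDATE_EVERY == 0:            # Quotient & Remainder in LabVIEW
            print(f"iteration {i}: latest sample {sample:.3f}")  # "graph" update
        time.sleep(0.01)
        i += 1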
VI or both VIs. The advantage is that the client can be set to execute
at lower priority than the server, as well as execute at a slower rate.
Also, an arbitrary number of clients can run simultaneously. You could
really get carried away and use another global that indicates that the
display VI wants an update, thus creating a handshaking arrangement
that avoids writing to the global on every acquisition. The disadvan-
tage of the client-server scheme is that even more copies of the data are
required, trading memory usage for speed.
Graphics accelerators are now common. All new PCs and Macs
include graphics coprocessors that significantly reduce the overhead
associated with updating the display (by as much as a factor of 50 in
some cases). LabVIEW still has to figure out where the text, lines, and
boxes need to go, and that takes some main CPU time. But the graphics
board will do most of the low-level pixel manipulation, which is cer-
tainly an improvement. The LabVIEW Options item Smooth Updates
makes a difference in display performance and appearance as well.
Smooth updates are created through a technique called off-screen bit-
maps, in which graphics are drawn to a separate memory buffer and then
quickly copied to the graphics display memory. The intent is to enhance
performance while removing some jumpiness in graphics, but smooth
updates may actually cause the update time to increase, at least on
some systems. Experiment with this option, and see for yourself.
Signal bandwidth
As we saw in Chapter 13, “Sampling Signals,” every signal occupies a certain bandwidth and must be sampled at a rate of at least 2 times this bandwidth, and preferably more, to avoid aliasing. Remember to
include significant out-of-band signals in your determination of the
sampling rate.
run time. The toolkit makes it easy by graphically displaying all results
and allowing you to save specifications and coefficients in files for com-
parison and reuse. Displays include magnitude, phase, impulse, and
step response, a z plane plot, and the z transform of the designed filter.
Once you have designed a filter and saved its coefficients, an included
LabVIEW VI can load those coefficients for use in real time.
FIR filters. If you are handling single-shot data, the built-in Finite
Impulse Response (FIR) filter functions are ideal. The concept behind
an FIR filter is a convolution (multiplication in the frequency domain) of
a set of weighting coefficients with the incoming signal. In fact, if you look
at the diagram of one of the FIR filters, you will usually see two VIs: one
that calculates the filter coefficients based on your filter specifications,
and the Convolution VI that does the actual computation. The response
of an FIR filter depends only on the coefficients and the input signal, and
as a result the output quickly dies out when the signal is removed. That’s
why they call it finite impulse response. FIR filters also require no initial-
ization since there is no memory involved in the response.
For most filters, you just supply the sampling frequency (for calibra-
tion) and the desired filter characteristics, and the input array will be
accurately filtered. You can also use high-pass, bandpass, and bandstop
filters, in addition to the usual low-pass, if you know the bandwidth of
interest. Figure 14.10 shows a DAQ-based single-shot application where
Figure 14.10 A practical application of FIR low-pass filtering applied to an array of single-shot
data, a noisy sine wave. Bandpass filtering could also be used in this case, since we know the
exact frequency of interest.
Conditioning palette, the gain for each type of window is properly com-
pensated. The Digital FIR Filter VI is also properly compensated. There
are quite a few window functions available, and each was designed
to meet certain needs. You should definitely experiment with various
time windows and observe the effects on actual data.
Figure 14.11 A continuous acquisition process with a Butterworth IIR filter. Once ini-
tialized, this digital filter works just as an analog filter does.
Figure 14.12 Demonstration of the Moving Avg Array function, which applies a low-pass filter
to an array of independent channels. The For Loop creates several channels of data, like a data
acquisition system. Each channel is filtered, and then the selected channel is displayed in both
raw and filtered forms.
An array of samples, one per channel, is the input, and the same array, but low-pass-filtered, is the output. Any number of samples can be included in the moving average, and the averaging can be turned on and off while running. This is a particular kind of FIR filter in which all the coefficients are equal (1/N for an N-sample average); it is also known as a boxcar filter.
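For reference, here is what that boxcar filter amounts to in a short NumPy sketch; the number of samples to average is the only parameter:

    import numpy as np

    def boxcar_filter(x, n_avg):
        """Moving average: an FIR filter whose n_avg coefficients are all equal."""
        coeffs = np.ones(n_avg) / n_avg
        return np.convolve(x, coeffs, mode="same")

    noisy = np.sin(np.linspace(0, 6.28, 500)) + 0.2 * np.random.normal(size=500)
    smoothed = boxcar_filter(noisy, 25)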
Other moving averagers are primarily for use with block-mode con-
tinuous data. They use local memory to maintain continuity between
adjacent data buffers to faithfully process block-mode data as if it were
a true, continuous stream.
After low-pass filtering, you can safely decimate data arrays to reduce
the total amount of data. Decimation is in effect a resampling of the
data at a lower frequency. Therefore, the resultant sampling rate must
be at least twice the cutoff frequency of your digital low-pass filter, or
else aliasing will occur. For instance, say that you have applied a 1-kHz
low-pass filter to your data. To avoid aliasing, the time interval for each
sample must be shorter than 0.5 ms. If the original data was sampled
at 100 kHz (0.01 ms per sample), you could safely decimate it by a
factor as large as 0.5/0.01, or 50 to 1. Whatever you do, don't decimate data that hasn't first been low-pass filtered.
Timing techniques
Using software to control the sampling rate for a data acquisition
system can be a bit tricky. Because you are running LabVIEW on a
general-purpose computer with lots of graphics, plus all that operating
system activity going on in the background, there is bound to be some
uncertainty in the timing of events, just as we discussed with regard
to timestamps. Somewhere between 1 and 1000 Hz, your system will
become an unreliable interval timer. For slower applications, however,
a While Loop with a Wait Until Next ms Multiple function inside
works just fine for timing a data acquisition operation.
The best way to pace any sampling operation is with a hardware timer.
Most plug-in boards, scanning voltmeters, digitizers, oscilloscopes, and
many other instruments have sampling clocks with excellent stability.
Use them whenever possible. Your data acquisition program will be
simpler and your timing more robust.
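The Wait Until Next ms Multiple idea (sleep until the next whole multiple of the loop period, so iterations stay aligned to the clock instead of drifting) can be sketched as follows; this is illustrative only and carries all the timing jitter of a desktop operating system:

    import time

    PERIOD = 0.100                                   # 100-ms loop, i.e., 10 Hz
    next_tick = (time.monotonic() // PERIOD + 1) * PERIOD
    for _ in range(50):
        time.sleep(max(0.0, next_tick - time.monotonic()))
        # ... acquire and process one reading here ...
        next_tick += PERIOD                          # stay on the original grid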
Your worst timing nightmare occurs when you have to sample one
channel at a time—and fast—from a “dumb” I/O system that has no abil-
ity to scan multiple channels, no local memory, and no sampling clock.
Gary ran into this problem with some old CAMAC A/D modules. They
are very fast, but very dumb. Without a smart controller built into the
CAMAC crate, it is simply impossible to get decent throughput to a
LabVIEW system. Remember that a single I/O call to the NI DAQ or NI
GPIB driver typically takes 1 ms to execute, thus limiting the loop rate to perhaps 1000 Hz. Other drivers, typically those based on peek and
poke or other low-level operations including some DLL or CIN calls, can
be much faster, on the order of microseconds per call. In that case, you at
least have a fighting chance of making reliable millisecond loops.
If the aggregate sampling rate (total channels per second) is pressing
your system’s reliable timing limit, be sure to do plenty of testing and/
or try to back off on the rate. Otherwise, you may end up with unevenly
sampled signals that can be difficult or impossible to analyze. It’s bet-
ter to choose the right I/O system in the first place—one that solves the
fast sampling problem for you.
Configuration Management
You could write a data acquisition program with no configuration man-
agement features, but do so only after considering the consequences.
The only thing that’s constant in a laboratory environment is change,
so it’s silly to have to edit your LabVIEW diagram each time someone changes a channel assignment or adds a new measurement.
What to configure
Even the simplest data acquisition systems have channel names that
need to be associated with their respective physical I/O channels. As
the I/O hardware becomes more complex, additional setup information
is required. Also, information about the experiment itself may need to
be inseparably tied to the acquired data. For instance, you certainly
need to know some fundamental parameters such as channel names
and sample intervals before you can possibly analyze the data. On the
other hand, knowing the serial number of a transducer, while useful, is
not mandatory for basic analysis. Table 14.2 is a list of configuration-
related items you might want to consider for your system.
In a sense, all this information comprises a configuration data-
base, and any technique that applies to a database could be applied
here: inserting and deleting records, sorting, searching, and of course
storing and fetching database images from disk. These are tasks for
a configuration editor. Additionally, you need a kind of compiler or
translator program that reads this user-supplied information, validates
it, and then transmits it in suitable form to the I/O hardware and the
acquisition program. (See Figure 14.13.)
The level of sophistication of such an editor or compiler is limited only
by your skill as a programmer and your creative use of other applications
Figure 14.13 Configuration data flows from the editor VIs and the disk database VIs, through the compiler VIs, to the real-time acquisition VIs and the I/O hardware (here, an SCXI-1001 mainframe loaded with SCXI-1140 modules).
on your computer. The simplest editor is just a cluster array into which
you type values. The most complex editor we’ve heard of uses a com-
mercial database program that writes out a configuration file for Lab-
VIEW to read and process. You can use the SQL Toolkit (available
from National Instruments) to load the information into LabVIEW by
issuing SQL commands to the database file. When the experiment is
done, you might be able to pipe the data—or at least a summary of the
results—back to the database. Do you feel adventurous?
Assuming that you’re using DAQmx, much of this configuration
information is addressed by the Measurement and Automation
Explorer (MAX). MAX is a convenient and feature-rich graphical user interface that you use to maintain and test DAQmx tasks. From
within LabVIEW, about all you need to do is to let the user pick from a
list of predefined tasks. All the other information about each channel—
scale factors, hardware assignments, and so forth—is already defined
in the file, so your LabVIEW program doesn’t need to keep track of it.
Supplementary information, such as experimental descriptions, will of
course be maintained by programs that you write.
Configuration editors
By all means, use MAX if you can. But it can’t address every imagin-
able need, so you should know something about the alternatives. Aside from that rather elaborate application of a commercial database, there are several simpler configuration editors that you can build yourself.
Static editors for starters. Ordinary LabVIEW controls are just fine for
entering configuration information, be it numeric, string, or boolean
format. Since a typical data acquisition system has many channels, it
makes sense to create a configuration entry device that is a cluster
array. The cluster contains all the items that define a channel. Mak-
ing an array of these clusters provides a compact way of defining an
arbitrary number of channels (Figure 14.14).
One problem with this simple method is that it’s a bit inconvenient to insert or delete items in an array control by using the pop-up Data Operations menu.
Figure 14.14 A static configuration editor using a cluster array. It checks to see that a channel name has been entered and appends on-scan channels to the output configuration array.
sizes out
sizes in True
Figure 14.16 This VI gets rid of empty table rows. Erasing a cell completely
does not remove that row of the 2D array. But this VI does.
Popup editors. Say you have several different kinds of I/O modules,
and they all have significantly different configuration needs. If you try
to accommodate them all with one big cluster array, various controls
would be invalid depending on which module was selected. You really
need separate input panels for each module type if you want to keep
the program simple.
One solution is to put several buttons on the panel of the main
configuration VI that open customized configuration editor subVIs
(Figure 14.18). Each button has its mechanical action set to Latch
When Released, and each editor subVI is set to Show front panel
when called. The editor subVIs do the real configuration work. Note
that they don’t really have to be any fancier than the static editors
we already discussed. When you create one of these pop-up subVIs,
remember to disable Allow user to close window in the Window
Options of the VI Setup dialog. Otherwise, the user may accidentally
close the window of the VI while it’s running, and then the calling VI
will not be able to continue.
Figure 14.19 The SCXI Analog Config Dialog VI helps the user manage SCXI analog input modules. The
configuration is stored in a global variable for use elsewhere in the data acquisition program, and it can be
loaded from and saved to files. (False frames of all the Case structures simply pass each input to its respec-
tive output.)
Also, the Module Gain and channel Gain controls are selectively dis-
abled (dimmed) by Attribute nodes. For instance, if the user picks an
SCXI-1120 module, each channel has its own gain setting, so the Gain
control is enabled while the Module Gain control is disabled.
This is the fanciest configuration editor that Gary could design in
LabVIEW, and it uses just about every feature of the language. This
editor is particularly nice because, unlike in the static editor with its
cluster array, all the controls update instantly without any clumsy array
navigation problems. However, the programming is very involved—as
it would be in any language. So we suggest that you stick to the sim-
pler static editors and use some of the ideas described here until you’re
really confident about your programming skills. Then design a nice user
interface, or start with Gary’s example VI, and have at it. If you design the
Figure 14.20 Panel of the SCXI configuration editor, showing some of the features that you can implement if
you spend lots of time programming.
Figure 14.21 How to use a ring indicator as a status display. The While
Loop runs all the time, so the user’s settings are always being evaluated.
The ring indicator contains predefined messages—in this case, item 0 is
OK to continue. Any real work for this configuration editor would be done
inside the While Loop.
In this example, ring item 0 means it is OK to continue, while item 1 is the error message shown. The status has to be
zero to permit the Exit button to terminate the While Loop.
You can also use a string indicator to display status messages, feed-
ing it various strings contained in constants on the diagram. The string
indicator can also display error messages returned from I/O operations
or from an error handler VI. You would probably use local variables to
write status messages from various places on your diagram, or keep
the status message in a shift register if you base your solution on a
state machine. That’s what we did with the SCXI configuration editor.
Some configuration situations require constant interaction with the
I/O hardware to confirm the validity of the setup. For instance, if you
want to configure a set of VXI modules, you might need to verify that
the chosen module is installed. If the module is not found, it’s nice to
receive an informative message telling you about the problem right
away. Such I/O checks would be placed inside the overall While Loop.
A good status message is intended to convey information, not admonish
the user. You should report not only what is wrong, but also how to cor-
rect the problem. “ERROR IN SETUP” is definitely not helpful, although
that is exactly what you get from many commercial software packages.
Dialogs can also be used for status messages, but you should reserve
them for really important events, such as confirming the overwriting of a
file. It’s annoying to have dialog boxes popping up all the time.
Menu-driven systems. When PCs only ran DOS, and all the world was
in darkness, menu-driven systems were the standard. They really are
the easiest user interfaces to write when you have minimal graphics
support. The classic menu interface looks like this:
Choose a function:
1: Initialize hardware
2: Set up files
3: Collect data
Enter a number >_
In turn, the user’s choice will generate yet another menu of selections.
The good thing about these menu-driven prompting systems is that the
user can be a total idiot and still run your system. On the other hand,
an experienced user gets frustrated by the inability to navigate through
the various submenus in an expedient manner. Also, it’s hard to figure
out where you are in the hierarchy of menus. Therefore, we introduce
menus as a LabVIEW technique with some reluctance. It’s up to you to
decide when this concept is appropriate and how far to carry it.
The keys to a successful menu-driven system are aids to navigation
and the ability to back up a step (or bail out completely, returning to
step 1) at any time. Using subVIs that open when called allows you to
use any kind of controls, prompting, status, and data entry devices that
might be required.
A LabVIEW menu could be made from buttons, ring controls, or slid-
ers. If you use anything besides buttons, there would also have to be a
Do it button. Programming would be much like the first pop-up editor
example, earlier in this section. To make nested menus, each subVI
that opens when called would in turn offer selections that would open
yet another set of subVIs.
If you lay out the windows carefully, the hierarchy can be visible
on the screen. The highest-level menu VI would be located toward the
upper-left corner of the screen. Lower-level menus would appear offset
a bit lower and to the right, as shown in Figure 14.22. This helps the
user navigate through nested menus. LabVIEW remembers the exact
size and location of a window when you save the VI. Don’t forget that
other people who use your VIs may have smaller screens.
Configuration compilers
A configuration compiler translates the user’s settings obtained
from a configuration editor into data that is used to set up or access
hardware. The compiler may also be responsible for storing and recall-
ing old configuration records for reuse. The compiler program may or
may not be an integral part of an editor VI.
Figure 14.23 The input to this configuration compiler for digitizers is a cluster array that a
user has filled in. The four clusters (channel names, coupling, sensitivity, and offset) need to be
checked for consistency and converted to arrays for use by the acquisition VIs.
3. Check the settings for consistency (for example, there may be limita-
tions on sensitivity for certain coupling modes).
4. Initialize each module and download the settings.
5. Write the output configuration cluster array to a global variable
which will be read by the acquisition VI.
We left out the gory details of how the settings are validated and
downloaded since that involves particular knowledge of the modules
and their driver VIs. Even so, much of the diagram is taken up by data type conversion.
Figure 14.24 The compiler’s output is a cluster array. Note that it carries all the same information as the input cluster array, but in a more compact form.
Figure 14.25 Diagram for the simple compiler. Data types are converted by the inner For Loop. Set-
tings are checked for consistency, and digitizers are initialized by the inner Case structure. If a digi-
tizer is online, its configuration is added to the output array, which is passed to the acquisition VIs by
a global variable.
Figure 14.26 This is a routine that converts the configuration cluster of Figure 14.24 into a
tab-delimited text file for documentation purposes. As you can see, there’s lots of string build-
ing to do.
A printed copy of the configuration is handy for the lab notebook. The direct approach is to generate a text file that you can load and print with a spreadsheet or word processor. Add tab characters
between fields and carriage returns where appropriate to clean up the
layout. Titles are a big help, too. Figure 14.26 is a diagram that inter-
prets the configuration cluster array of Figure 14.24 into tab-delimited
text. There is a great deal of string building to do, so don’t be surprised
if the diagram gets a little ugly.
Here is what this text converter VI produces, as interpreted by a
word processor with suitable tab stops. As you can see, most of the set-
tings are written in numeric form. You could add Case structures for
Coupling, Sensitivity, and Offset to decode the numbers into something
more descriptive.
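As a rough illustration (the values are invented, following the field order of Figure 14.26), the converter’s output reads something like this once the tabs line up:

    Model    GPIB Addr   Slot
    LC 8210  1           1
    Name          Coupling   Sensitivity   Offset
    Channel_One   2          4.096         0
    Channel_Two   2          4.096         0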
One way to print the panel is to use the Print command in the File menu,
but you can also print programmatically. Here’s how. First, create
a subVI with the displays you want to print. If there are extra inputs
that you don’t want to print, pop up on those items on the diagram
and choose Hide Control. Second, turn on the Print At Completion
option in the Operate menu to make the VI print automatically each
time it finishes execution. All the print options in the VI Setup dialog
will apply, as will the Page Setup settings. When you call this carefully
formatted subVI, its panel need not be displayed in order to print.
Another way to generate a printout automatically is through the VI
Server. This method is quite versatile since you can have the VI Server
print any part of any VI, any time, any place. For instance, you can print
the visible part of a VI’s panel to the printer (Figure 14.27). Or you could
route the front-panel image to an HTML or RTF file for archiving.
SCXI Config VI. This solution is very convenient for the user. Previous
settings can be automatically loaded from a standard file each time
the configuration editor is called, and then the file is updated when an
editing session is complete.
Gary wrote the Setup File Handler VI to make this easy (Johnson
1995). It’s also a handy way to manage front panel setups where you
want to save user entries on various controls between LabVIEW ses-
sions. For example, it’s nice to have the last settings on run-time con-
trols return to their previous states when you reload the VI. The Setup
File Handler is an integrated subVI which stores a cluster containing
all the setup data in a shift register for quick access in real time and in
a binary file for nonvolatile storage. Multiple setups can be managed
by this VI. For instance, you might have several identical instruments
running simultaneously, each with a private collection of settings.
As you can see from the panel in Figure 14.28, the setup data is
a cluster. Internally, setup clusters are maintained in an array. The
element of the array to be accessed is determined by the Setup Index
control. The Setup cluster is a typedef that you should edit with the
Control Editor, and probably rename, for your application. You usually
end up with a very large outer cluster containing several smaller clus-
ters, one for each subVI that needs setup management.
Figure 14.28 The Setup File Handler VI stores and retrieves setup clusters
from disk. It’s very useful for configuration management and for recalling
run-time control settings at a later time.
Figure 14.29 The Setup File Handler example: initialize the controls by reading the setup database (using local variables), run the main program in which the user updates the controls, then update the setup database and write it out to disk. Read from memory (faster) unless you really want the disk image.
To see how this all works, look at the Setup File Handler example
in Figure 14.29. It begins by calling the Setup File Handler VI with
Mode = Init to create or open a setup file. If the file doesn’t exist, the
settings in a default setup cluster are used for all setups. In most appli-
cations, all subsequent calls to Setup File Handler would be performed
in subVIs. In the example subVI shown in Figure 14.29, the Setup File
Handler is the source for control initialization data. Local variables
initialize controls for the user interface, or the data might be passed
to a configuration subVI. After the main part of the program finishes
execution, the new control settings or configuration data are bundled
back into the setup cluster, and the Setup File Handler updates the
memory and disk images. Bundle and Unbundle by Name are very
handy functions, as you can see. Try to keep the item names short in
the setup cluster to avoid huge Bundlers.
Where should the setup file be saved? You can use the LabVIEW file
constant This VI’s Path to supply a guaranteed-valid location. This
technique avoids a familiar configuration problem where you have a
hand-entered master path name that’s wrong every time you move the VI
to a new computer. In your top-level VI, insert the little piece of code
shown in Figure 14.30 to create a setup file path. If the current VI is in a
LabVIEW library, call the Strip Path function twice as shown. If the VI
is in a regular directory, call Strip Path once. This technique even
works for compiled, stand-alone executables created by the LabVIEW
Application Builder.
Another way to obtain a guaranteed-valid (and locatable) path is
with the Get System Directory VIs written by Jeff Parker and pub-
lished in LabVIEW Technical Resource (Parker 1994). He used CINs to
locate standard directories, such as System: Preferences on the Macin-
tosh or c:\windows on the PC. (You could simply assume that the drive
and directory names are always the same, but sooner or later someone
will rename them behind your back!)
Figure 14.30 Use the path manipulation functions
Strip Path and Build Path to create a guaranteed-
valid path for your setup files. If the current VI is in
a LabVIEW library, call Strip Path twice, as shown.
Otherwise, call it only once.
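The same derive-the-path-from-where-the-program-lives trick, sketched outside LabVIEW in Python (the file name is hypothetical):

    import os, sys

    # Build a setup-file path next to the running program, rather than
    # hard-coding a drive and directory that may not exist on another machine.
    if getattr(sys, "frozen", False):            # built executable (cf. App Builder)
        base_dir = os.path.dirname(sys.executable)
    else:
        base_dir = os.path.dirname(os.path.abspath(__file__))
    setup_path = os.path.join(base_dir, "MySetup.SET")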
block to a standard VI. Once it’s converted to a VI, however, you can no longer use the Wizard. In our example we’ve left it as an Express VI.
For timing we’re using the Timed Loop set to run once every 1000 ms.
We covered the Timed Loop in detail in Chapter 5, “Timing.” The Timed
Loop gives us a high-priority, fairly deterministic software timing
mechanism for our data acquisition. Once the Timed Loop is started, it
will interrupt any other process in order to run at its scheduled time.
We are writing the data to a text-based LabVIEW measurement file
each iteration of the loop. The system overhead is fairly low; we have only eight channels at 1 Hz, so we can afford to save them as text data.
If we needed something faster or if file size were an issue, we could use
the binary TDM format. Chapter 7, “Files,” has some more information
on these and other types of file I/O in LabVIEW. Figure 14.33 shows the
configuration dialog for the Express VI Write to Measurement File. It
creates a text-based LabVIEW measurement file with an informational
header the first time it is called. Each subsequent time it appends the
timestamped data to the file. This VI takes only a few minutes to put
together, yet it can solve many problems. We hope you find it useful.
Figure 14.33 Dialog for the Express VI Write to Measurement File. It creates a
text-based LabVIEW measurement file (lvm) and appends timestamped data
on each iteration.
DMA. The technique used to get data off the board and into your computer’s memory is called direct memory access. DMA is highly desirable because
it improves the rate of data transfer from the plug-in board to your
computer’s main memory. In fact, high-speed operations are often impos-
sible without it. The alternative is interrupt-driven transfers, where
the CPU has to laboriously move each and every word of data. Most
PCs can handle several thousand interrupts per second, and no more.
A DMA controller is built into the motherboard of every PC, and all
but the lowest-cost plug-in boards include a DMA interface. The latest
Figure 14.34 The Cont Acq&Graph Voltage To File (Binary).vi. This medium-speed example can acquire and
save binary data to disk at 10,000 samples per second until you run out of storage.
For more examples you can modify to fit your application needs, look at the
DAQmx examples shipping with LabVIEW.
Bibliography
Chugani, Mahesh, et al.: LabVIEW Signal Processing, Prentice-Hall, Englewood Cliffs,
N.J., 1998.
Johnson, Gary W.: “LabVIEW Datalog Files,” LabVIEW Technical Resource, Vol. 2, No. 3,
Summer 1994. (Back issues are available from LTR Publishing.)
Johnson, Gary W.: “Managing Front-Panel Setup Data,” LabVIEW Technical Resource,
Vol. 3, No. 1, Winter 1995. (Back issues are available from LTR Publishing.)
Parker, Jeff: “Put config Files in Their Place!” LabVIEW Technical Resource, Vol. 2, No. 3,
Summer 1994. (Back issues are available from LTR Publishing.)
Press, William H., et al.: Numerical Recipes in C, Cambridge University Press, New York, 1990.
15
Chapter
LabVIEW RT
RT Hardware
The LabVIEW RT development environment runs on a Windows host.
The host serves as the local storage for the RT applications during
development as well as provides the user interface. When the Run but-
ton is pressed, the LabVIEW VIs are downloaded to the embedded tar-
get, where they are compiled and run on their own dedicated processor
*There’s nothing wrong with using a desktop system in a soft real-time application;
there are perhaps millions of such systems in operation today. You only have to prove
that the system meets your requirements for determinism under all circumstances. Ded-
icating a computer to a control task, deleting unneeded applications, and keeping users
away leave it lightly loaded, thus reducing latency.
Figure 15.1 The first platform for LabVIEW RT was the 7030-
series RT DAQ card with an onboard 486 processor and analog
and digital I/O. Today the PCI 7041/6040E uses a 700-MHz PIII
with 32-Mbyte RAM and 32-Mbyte Flash. (Photo courtesy of
National Instruments.)
under a real-time OS. The user interface runs only on the host, not
on the embedded target. Normal LabVIEW user-interface interaction
takes place over a TCP/IP (Ethernet) communications link using Front
Panel Protocol (FPP) routines that National Instruments provides
in LabVIEW RT.
LabVIEW RT runs on the PCI 7041/6040E series RT DAQ boards,
on selected PXI crate controllers, Compact Fieldpoint controllers, Com-
pact Vision systems, and even off-the-shelf dedicated desktop PC hard-
ware (Figures 15.1 and 15.2). The 7041 intelligent DAQ board plugs
functions, such as Make Current Values Default, don’t make any sense
on a card without any local storage, and so they have been imple-
mented only on the PXI system. These minor details are covered in the
LabVIEW RT manuals.
One of the first things you will notice after you have booted up your
PXI crate into the real-time environment is that the graphical display is
gone. You are left with a black, DOS-like screen and a cursor that doesn’t
respond to keyboard input. In the real-time system, all user-interface
code is stripped out, and only the compiled block diagram code is left.
On the PXI system there is a printf function that will print a line to the
monitor for debugging during development, but other than that there is
no resident user-interface at all! All user-interface activity is handled by
the Windows-based development system over the Ethernet link.
For example, a 1-kHz acquisition loop might run alongside a safety loop that needs to reset a watchdog timer once per second to
keep the process from being shut down. The safety loop is the real-
time task, and the 1-kHz process will have to be preempted while the
safety loop executes. To build a successful real-time system, you need
to balance a limited number of real-time tasks with very close atten-
tion paid to how those tasks might interact or clash with one another.
The remainder of your code should be tolerant of preemption; that is, it
should tolerate much greater latency.
The next thing you need to know is the timing requirements that
you have to meet. LabVIEW’s built-in timing functions use the PC’s real-time clock, which has millisecond resolution. To achieve timing loops
faster than 1 kHz, you’ll need to use hardware-timed acquisition, where
a DAQ board provides the clock. This is not a problem, but you need to
know in advance what your restrictions are so you can design for suc-
cess. Be sure to leave enough spare time in your real-time task so that
background tasks can execute. For example, a 10-kHz PID loop probably
isn’t going to do you much good if you can’t send it a new set point.
Once you have your process timing requirements, you need to ana-
lyze your code or VIs and to determine how long each VI or collection of
VIs takes to execute. After all your code is optimized and verified to fit
within your timing requirements, you construct your program so that
each time-critical piece of code can execute on time, every time.
Measuring performance
LabVIEW’s built-in VI profiler provides a good first indicator of VI
execution time. But to help you get down to the real nitty-gritty of
how many processor cycles a VI will take, National Instruments has
also given us some tools to read the Pentium’s built-in timestamp func-
tion. This allows you to get nanosecond-scale readings on execution
time. The RDTSC Timing library includes a low-level VI (RDTSC.vi)
that calls a CIN to read the CPU clock, and a utility to convert the result-
ing tick count to seconds (RDTSC Diff To Time.vi). There are also some
examples that measure and display timing jitter. All you have to do is to
insert your code to be tested. Figure 15.3 shows a very simple example.
0 [0..2] Processor
1 [0..2] 2 [0..2]
Speed (MHz)
Initial timestamp Final timestamp
733.00
Time (uS) Code to be
tested
goes here rdtsc.vi
rdtsc.vi
RDTSC Diff To
Time.vi
Figure 15.3 The RDTSC library includes a VI to read the Pentium timestamp
and convert it to microseconds. Use it for direct measurement of jitter and
other small delays in your code.
384 Chapter Fifteen
Have you ever wondered what the real effect of that coercion dot was?
This will tell you. You can find the RDTSC VIs on the Web at www
.ni.com.
An oscilloscope is an essential tool on the hardware test bench, and
it should be one of the first tools you turn to when analyzing real-time
embedded performance as well. You can bring out many of the internal
timing signals of your data acquisition card to an external terminal
block like the SCB-68. A logic analyzer is another handy tool when
most of the signals you’re concerned about are digital. Looking at your
timing signals is a great way to monitor the performance of your soft-
ware and hardware.
We once had a data acquisition problem where we had to sample
a photodetector immediately before and after a laser pulse (see the
timing diagram in Figure 15.4). The laser fired at 30 Hz, and the first
sample needed to be taken 250 µs before the laser pulse and the second
sample 500 µs after the laser pulse. To provide this uneven timing, we
programmed our DAQ card to sample the same channel twice each
time an external scan clock pulsed. Then we set the interchannel delay
on the DAQ card to 750 µs. This meant that whenever the DAQ board
received a trigger it would take the first sample and then wait 750 µs
before taking the second sample. By monitoring our board’s timing sig-
nals we were able to verify exactly when each scan was beginning and
when each sample was being taken.
Monitoring our DAQ card’s timing signals also allowed us to see a
“gotcha” of NI DAQ: When you use an external scan clock, the first
sample is not taken until one interchannel delay after the input trig-
ger. This meant that we were totally missing the event we were trying
to see. But once we saw where our timing problem was, we were able to
adjust our pretrigger input to the external scan clock so that the tim-
ing was just as specified. After the triggering was set correctly and
the board was configured to store the data into a circular buffer, our
A
C
Figure 15.4 Monitor AI CONVERT* to see when each A/D conversion is taking
place. We needed to sample a photodiode (A) 250 µs before a laser pulse (B) and
500 µs after the pulse (C). The DAQ card was configured to use an external scan
clock trigger (D) with an interchannel delay of 750 µs. An extra delay equal to the
interchannel delay (750 µs) was required by NI DAQ to set up the acquisition.
LabVIEW RT 385
DAQ processing application only had to monitor the buffer for new
data and act on it as it came in. You should use hardware triggering
any time you can, even in LabVIEW RT. It will give you the most stable
and repeatable system. It’s also fully verifiable in that you can directly
observe the signals externally.
To view your board’s internal timing signals, you need to use the Route
Signal VI (available in the DAQ Miscellaneous function palette) to bring
them out to a PFI pin on the connector where you can connect to them.
You can wire this VI into your DAQ configuration routines and leave it
there during development. It doesn’t add any overhead to the data acqui-
sition tasks, and you will find yourself relying on these signals whenever
you need to verify timing. Figure 15.5 shows an example where the AI
Read One Scan VI has been modified to include signal routing. Some of
the key pin-outs to monitor on the E series cards are as follows:
Look at the documentation that comes with your DAQ card to verify
your particular pin-outs.
iteration (init:0)
..0
signal name
coupling &
input config PFI 7
scaled data
input limits
Route Signal.vi
task id
device (1)
channels (0)
binary data
buffer size 0:
0
don't allocate AI scan start
[number of signal source error out
error in AMUX boards]
Figure 15.5 AI Read One Scan modified to bring out the data acquisition card’s internal
timing signals. The configuration case has been altered to include the Route Signal VI.
The signal being routed is AI STARTSCAN to PFI 7. Each time AI Read One Scan
executes, PFI 7 will produce a pulse.
386 Chapter Fifteen
Almost any decent bench oscilloscope will let you see these brief tran-
sitions. If you’re lucky and have a digital oscilloscope with a variable-
persistence display, you can probably get a decent measurement of the
software latencies, which will show up as jitter. But we’ll show you an
even better way to get an accurate timestamp of each and every transi-
tion by using something you’re just as likely to have around: an extra
DAQ card with a counter (Figure 15.6). There is an example VI called
Count Buffered Edges. This VI uses one of the DAQ-STC counters
on an E-series DAQ card to measure the period of a digital signal at the
gate of one of your counters.
The counter measures the period of your signal by counting a signal
connected to the counter’s source and storing that count in a buffer each
time it receives a signal at its gate. On the example VI, Count Buffered
Edges, the counter’s source is connected to the internal 20-MHz clock.
This means the count increments every 50 ns. One of the features of
buffered period measurement is that with each new period the counter
is rezeroed, and you don’t have to worry about the counter rolling over.
You can use the Count Buffered Edges VI to measure the perfor-
mance of your real-time data acquisition tasks. Use the Route Signal
VI when you configure your analog input or output to send the appro-
priate timing signal out to the connector. Then connect the output of
that PFI pin to the gate of a counter on your extra DAQ board.
Now you can use this simple tool to “peek under the hood” of your real-
time system and analyze your data acquisition tasks. Count Buffered
SOURCE0
(20 MHz Clock)
5 6 1 2 3 4 5 1 2
GATE0
(External signal)
Count during
LOW interval
Read count,
stored in buffer n=6 n=5
Counter0
Clock pulses per interval
SOURCE0
Clock
OUT0
GATE0
DIG. GND.
Figure 15.6 A DAQ counter counts the 20-MHz internal clock, which
is routed to the SOURCE input. Each time the counter’s GATE is
triggered, the total count is stored in a buffer. The counter is then
rezeroed and counting continues.
LabVIEW RT 387
Edges lets you see just how deterministic your loops are. The histogram
plot keeps a running tally of latencies that you can use for long-term
profiling to verify that your code is executing on time, every time.
Sometimes your measurements may seem very erratic, and this can
be caused by noise on the digital lines. If you have a fast oscilloscope,
you might be able to see noise or glitches that are triggering the coun-
ter. One way to cure the problem is by conditioning your signal with a
one-shot, which is a pulse generator with a programmable pulse width.
Set the pulse width to a value that’s a bit less than the expected inter-
val you’re measuring, and the extra pulses in between will be ignored.
There is a simple VI in vi.lib (vi.lib\DAQ\1EASYIO.llb\Generate
Delayed Pulse.vi) that will let you use one of your counters as a delay
generator. Set the delayed pulse width to be several tens of microsec-
onds, connect the signal you want to buffer to the counter’s gate, and
monitor your cleaned-up signal at the counter’s source.
If, for example, you want to look at AO UPDATE* to see when the
analog output circuitry on your real-time card is being updated, you
connect PFI 5 to counter 0’s GATE. The slightly delayed buffered pulse
will be at counter 0’s OUT. Buffering your signals in this way will make
them easier to see, and it will also eliminate some of the noise that
might be giving you false readings. No extra software overhead is added
to your program other than the time required to initialize the counters.
Figure 15.7 shows the results obtained when a digital line was toggled
Counter Configuration
Latency
device counter 50
1 0 40 Last Buffer
mean
task type
20
buffered semiperiod measurement 100068.78
standard deviation
Buffer Size counts to read 0
2.55
10000 100
-20
Gate Specification
-40
Gate Selection
-50
PFI Line default 0 10 20 30 40 50 60 70 80 90 100
STOP
RTSI Line RTSI 0 Histogram
500000
Misc low 100000
Gate Polarity
10000
positive (low to high) # of data points outside
1000 histogram bounds
above 0
Histogram Configuration 100
below 0
Center Period (usec) + - Range # bins
10
100068.35 150.00 100
1
-100.0 -50.0 0.0 50.0 100.0
Microseconds
Figure 15.7 Screen capture from the Count Buffered Edges VI after toggling a digital line every
100 ms for 20 h. The histogram of latencies shows that there were no latencies greater than 50 µs.
388 Chapter Fifteen
Shared resources
If unexplained timing glitches occur when you run your code, then you
may have the problem of two sections of code competing for a shared
resource. The quickest way to lose deterministic execution of your code
is to have multiple tasks in competition.
A shared resource is a software object that cannot be accessed
simultaneously by multiple threads. Examples of shared software
resources are single-threaded device drivers such as NI DAQ, global
variables, shared subVIs that are not reentrant, synchronization code
(semaphores, queues, occurrences, rendezvous), networking protocols,
file I/O, and LabVIEW’s memory manager.
To protect a shared resource such as NI DAQ calls, LabVIEW places
a mutex around it. A mutex, or mutual exclusion, is a software mecha-
nism to prevent two threads from accessing a shared resource at the
same time. You can think of a mutex as a lock on a door that has only
one key. When your program uses NI DAQ to access a data acquisi-
tion board, it must first check to see if NI DAQ is in use. If NI DAQ is
available, your program gets the “key,” unlocks the door, and enters the
protected area, and the door is locked behind it. No other program can
use the resource until the door is unlocked and the key is returned.
For example, you can write a program in regular LabVIEW that
uses two data acquisition loops running in parallel; maybe one is ana-
log input and the other is digital I/O. Because you’re not concerned
about or monitoring the real-time behavior of these DAQ loops, every-
thing appears to be fine—only the average time of the loops is impor-
tant. But if you do this in a real-time execution system where you
are paying close attention to deterministic timing, you will see how
each DAQ task is interfering with the other as they compete for the
shared resource.
If two tasks competing for a shared resource are at different pri-
orities, you can get a condition known as priority inversion, where
a high-priority task has to wait for a low-priority task to finish. If a
low-priority VI has the key to the shared resource, that VI cannot be
preempted by a higher-priority VI. The high-priority task will have
to wait for the low-priority task to finish. One method used by the
LabVIEW RT 389
Figure 15.8 You can set how many threads are assigned to each LabVIEW execu-
tion system with the Thread Configuration VI. Find it in labview/vi.lib/utility/
sysinfo.llb.
LabVIEW RT 391
continuous
device Acquire and wait for next scan
0
scaled data
channels
scan rate
Figure 15.9 AI Single Scan provides its own sleep function. The VI
shown here would typically be set to time-critical priority in LabVIEW
RT. This is not standard practice in regular LabVIEW.
LabVIEW RT 393
is loaded into the CPU so that the thread is in the same state it was
in before it was removed. This can take a significant amount of time.
Any time you force the operating system to perform a context switch by
calling a subVI in another execution system, you cause delays that will
affect how efficiently your code executes. If the subVI needs to be in a
separate execution system, then consider making it into a top-level VI,
just as you would for a time-critical VI.
Consider what happens when you have a top-level VI set to run in
the standard execution system and a subVI set to run in the DAQ exe-
cution system. Each time that subVI is called, the thread containing
the top-level VI will be removed from the processor and the thread con-
taining the subVI will run. When the subVI is finished, another context
switch occurs as the OS switches back to the top-level VI. If the subVI
is in a loop that runs very frequently, the extra overhead will be easy
to observe. This shows why you should not arbitrarily assign execution
systems to subVIs.
When a subVI is set to execute in the same execution system as the
caller, it will inherit the caller’s priority and run in whatever execu-
tion system it was called from. That is, if you have a top-level VI set
to time-critical priority in the standard execution system and it calls
a subVI set to normal priority and the same execution system as the
caller, there will not be a context switch from time-critical to normal
priority. The subVI will have its priority elevated to the same priority
as the calling VI.
LabVIEW RT 395
Scheduling
Timed structures simplify the way you schedule real-time execution
order. Efficient scheduling and the mechanisms used to enforce it used
to be a complex art form driven by which VI has the highest priority and
the earliest deadline. Now you just place down a Timed Loop, give it a
priority and a period, and off you go. If you need a parallel task at a dif-
ferent loop rate and a higher priority, that’s no problem! Once again Lab-
VIEW makes something incredibly easy that was terribly complicated.
Timed structures
There are three structures with built-in scheduling behavior: Timed
Loops, Timed Loop with Frames, and Timed Sequences. All three
provide a way to schedule real-time program execution. Each Timed
structure runs in its own thread at a priority level just beneath time
critical and above high priority. Timed Loops do not inherit the prior-
ity or execution system of the VI in which they run. This means that a
Timed structure will stop any other code from running until the struc-
ture completes. LabVIEW’s Timed Structure Scheduler runs behind the
scenes and controls execution based on each Timed structure’s schedule
and individual priority. Each Timed Loop’s priority is a positive inte-
ger between 1 and 2,147,480,000. Priorities are set before the Timed
Loop executes, through either the configuration dialog or a terminal
on the Input node. Figure 15.11 shows the configuration dialog for
a Timed Loop. Priority can be adjusted inside the Timed Loop through
the right data node, so you can dynamically adjust performance if
needed. Timed structures at the same priority do not multitask between
each other. The scheduler is preemptive, but not multitasking. If two
structures have the same priority, then dataflow determines which one
executes first. Whichever structure starts execution first will finish
before the other structure can start. Each Timed Loop runs to comple-
tion unless it is preempted by a higher-priority Timed Loop, a VI run-
ning at time-critical priority, or the operating system.
Figure 15.12 illustrates a round robin schedule implemented with
3 timed loops. All 3 loops use the 1 kHz system clock with a period
(dt) of 30 ms. Loop B’s start (t0) is offset by 10 ms from Loop A and
Loop C is offset from Loop A by 20 ms. The timeline in Figure 15.13
shows each loop’s time slice provided each task finishes in its allotted
10 ms. If one of the tasks took longer than 10 ms to complete the Timed
Structure Scheduler would adjust execution based on priority, and the
other loops’ configuration options for late iterations. Any loop with a
higher priority will preempt a lower priority loop. Once the higher pri-
ority loop has finished the preempted loop will get a chance to finish.
Loop A
Loop B
Loop C
Time
Figure 15.13 Time line for the three Timed Loops in Figure 15.12.
All loops have the same period, but are offset in phase.
Late starts by timed loops are governed by the selected actions on late
iterations as shown in Figure 15.11. The default selections will cause a
timed loop with a late start to forgo execution until its next scheduled
timeslot. Deselecting “Discard missed periods” causes the timed loop to
try to make up any missed iterations. This can have unexpected conse-
quences, so use with caution and always verify that your application is
doing what you expect.
Timing sources for Timed structures can be based on your computer’s
internal timer or linked through DAQmx (Figure 15.14) to a hardware-
timed DAQ event with DAQmx Create Timing Source.vi. You can
even chain multiple timing sources together with Build Timing
Figure 15.14 Hardware-timed acquisition and control. Timed Loops can be synchronized to DAQmx events.
398 Chapter Fifteen
Communications
LabVIEW has several technologies to make exchanging data transpar-
ent between distributed real-time targets and a host PC. The easiest to
use is the Network-Published Shared Variable. Figures 15.15 and
15.16 show all the code you need to communicate between a real-time
process and a host PC. Shared variables communicate using National
Instruments’ Publish-Subscribe Protocol (NI-PSP) with a Shared
Variable Engine (SVE) that hosts the shared variables on the net-
work. When you write to a shared variable, LabVIEW sends the data
to the SVE which then publishes (sends) the data to all subscribers.
You can directly link front panel control and indicators to shared vari-
ables (Figure 15.17). This is really easy, but totally violates dataflow.
Shared variables may not be right for every application, especially
if you need to communicate with programming languages outside of
LabVIEW. For those applications consider the Real-Time Communica-
tion Wizard (Tools >> Real-Time Module >> Communication Wizard).
Figure 15.17 Data binding allows you to bind front panel objects to
shared variables without programming.
Bibliography
Application Note 200, Using the Timed Loop to Write Multirate Applications in
LabVIEW, www.ni.com, National Instruments Corporation, 11500 N. Mopac Express-
way, Austin, Tex., 2004.
Hays, Joe: “Advanced Real-time Programming Techniques,” NIWeek 2000, Advanced
Track Session 2D, available from www.natinst.com/niweek.
Li, Yau-Tsun Steven, and Sharad Malik: Performance Analysis of Real-Time Embedded
Software, Kluwer Academic Publishers, Boston, 1998.
Simon, David E.: An Embedded Software Primer, Addison-Wesley Longman, Reading,
Pa., 1999.
This page intentionally left blank
Chapter
LabVIEW FPGA
16
LabVIEW FPGA takes real-time programming to the hardware level.
Although LabVIEW RT is a deterministic real-time environment, it is
still software running on a microprocessor, and any microprocessor has
only a limited number of clock cycles available to share among all the
running processes. LabVIEW RT’s deterministic behavior means that
you can accurately predict when your code will run; but as your appli-
cation grows and you start running more processes, you have to begin
making tradeoffs that affect determinism.
LabVIEW FPGA applications are not constrained by processor or oper-
ating system overhead. Code is written in parallel, is mapped directly
into parallel hardware registers, and runs in parallel. With LabVIEW
FPGA you can write massively parallel hardware-timed digital control
applications with tight closed-loop rates in the tens of megahertz.
What Is an FPGA?
FPGA is the acronym for field-programmable gate array. The tech-
nology has been around for a while, but only recently have the costs
come down and the tools improved enough that FPGAs have become
commonplace. An FPGA is essentially a 2D array of logic gates with
software programmable interconnections. FPGAs range in size from
devices with tens of thousands of gates to devices with millions of gates
and multiple embedded microprocessors. The number of gates is really
just a marketing number. Each FPGA vendor combines those gates to
create its own unique architecture. The basic programming block of
the Xilinx parts used by National Instruments is defined as “slices.”
Each slice contains two storage elements, arithmetic logic gates, large
401
402 Chapter Sixteen
code, but hopefully in the future all NI’s DAQ cards will have user-
programmable logic. A presentation at NIWeek04 pointed out the major
shift in design philosophy between the LabVIEW FPGA programmable
DAQ cards and traditional DAQ cards and drivers (NI DAQ or DAQmx).
System designers of traditional DAQ cards design a flexible hard-
ware personality into the card and engineer a full-featured driver to
expose the hardware as the designer intended. It’s that “as the designer
intended” part that causes trouble. The designer couldn’t anticipate
your application or its exotic analog and digital triggering conditions.
LabVIEW FPGA and the RIO cards give you, the end-user/programmer,
a wide-open piece of hardware and a lightweight driver to handle com-
munications between the card and the PC. Whatever you want to do on
the card is in your hands. This is a revolutionary change.
Plug-in cards
National Instruments currently has seven R-series intelligent DAQ
cards varying in form factor, size of FPGA, and number and type of
404 Chapter Sixteen
I/O lines. All the cards are designed to let you create your own process-
ing algorithms that can run at rates up to 40 MHz. You can configure
digital lines as inputs, outputs, counter/timers, pulse-width modula-
tor (PWM) encoder inputs, or custom communication busses. On cards
with analog capabilities you can have up to eight simultaneous 16-bit
analog inputs at rates up to 200 kHz, and up to eight simultaneous
16-bit analog outputs at rates up to 1 MHz. The more we use LabVIEW
FPGA, the more we’re convinced that all DAQ cards should be user-
programmable. All the trigger lines and PXI local bus lines are exposed
on the RIO hardware, and you may use any triggering scheme, analog
or digital, you desire. Because the input, processing, and control out-
put are all embedded into the FPGA fabric, you will find that the RIO
plug-in cards have exceptional performance for single-point I/O control
applications.
CompactRIO
CompactRIO or cRIO combines a LabVIEW RT controller for advanced
floating-point analysis and control with an intelligent FPGA backplane
for hard real-time timing, synchronization, and control. A cRIO system
LabVIEW FPGA 405
Compact Vision
National Instruments’ Compact Vision system is a rugged industrial vision
system that uses a LabVIEW programmable FPGA for complex timing
and synchronization. Each system has three FireWire camera ports, a
VGA port, an Ethernet port, 15 digital inputs, and 14 digital outputs.
You can program the digital lines with LabVIEW FPGA for quadrature
encoders, generating strobe pulses for the vision system, synchronizing to
external stimuli, or controlling relays and other actuators.
406 Chapter Sixteen
Application Development
There is no single definitive programming method for LabVIEW
FPGA, but there are some significant differences between LabVIEW
on a desktop and LabVIEW code running on an FPGA. Here is a short
list, which we’ll expand on throughout the chapter:
■ Local variables. Some things that are bad on the desktop, such as
local variables, are good on FPGA. A local variable in desktop Lab-
VIEW is inefficient and results in extra copies of the data everywhere
it is used. On FPGA a local variable is a hardware register that can
be efficiently accessed from multiple locations. Race conditions can
still occur if you are not careful, so be sure to have one writer and
many readers.
■ SubVIs. SubVIs are efficient chunks of reusable code on the desktop,
but they can have unexpected consequences on the FPGA. Unless a
subVI is marked reentrant, there will be only one copy of the subVI
mapped into hardware. A single copy of a VI in hardware may be
what you want if you need to Read–Modify–Write a shared piece
of data in multiple loops, but it will also mean that the loops have
a hardware dependency on one another. This can cause unexpected
jitter and may even destroy an expected real-time response.
■ Parallelism. A common pitfall of LabVIEW programmers new to
LabVIEW FPGA is to use the same logic flow they used on the desk-
top and to ignore the physical parallelism exposed by the FPGA.
LabVIEW’s parallel dataflow paradigm maps really well into FPGA
programming because any node that has all its inputs satisfied will
execute. This lets you easily write parallel control loops that actu-
ally run in parallel. And because your code is mapped into hardware
registers, the number of deterministic parallel tasks is only limited
by the size of the FPGA and the efficiency of your code.
■ Flat hierarchy. Even though your application may have a hier-
archical structure to it, it is compiled and mapped onto a “flat”
two-dimensional array of logic without any hierarchy at all. Avoid
excessively complicated logic with many nested Case structures
because this maps very poorly onto the FPGA and consumes too many
resources. Minimizing resource usage is a good thing to practice
because no matter how big the FPGA is, sooner or later you’ll write
a program that is bigger.
Compiling
Compiling for the FPGA can take some time and is the only painful
part of programming with LabVIEW FPGA. When you compile for your
FPGA target, LabVIEW converts your block diagram into VHDL and
LabVIEW FPGA 407
Figure 16.4 Successful Compile Report from LabVIEW FPGA. The number to
watch is the number of slices used. In this case 196 slices out of 5120 were used.
then uses the Windows command line to call Xilinix’s tools. The end
product is a netlist prescribing how the hardware is interconnected.
The compile process can take a long time, many tens of minutes on a
PC with limited RAM. On a dual-Xeon machine with 2-Gbyte RAM an
average compile is 3 min, and the longest has been 20 min. If you’re
developing on a PXI controller, set up a fast PC with a lot of RAM as a
compile server. You’ll save a lot of time.
Figure 16.4 shows a compile report that is returned after the project
has successfully compiled. Here’s a quick definition of terms:
■ BUFGMUX. BUFG is a buffer for a global clock and MUX is a multi-
plexer. As you add derivative clocks you will consume BUFGMUXs.
■ DCM. Digital Clock Manager is used for clock multiplication. In
this project we derived an 80-MHz clock from the 40-MHz onboard
clock.
■ External IOBs. IOB is an input/output block. The XCV1000 on the
7831R has 324 IOBs, 216 of which are used.
■ LOCed IOBs. These are located IOBs, or constrained to a fixed
location.
■ Slices. This is the most important number to watch as you optimize
your code. The fewer the slices used, the more efficient your code is
and the more you can do in a fixed FPGA space.
■ Base clock. This is the fundamental onboard clock. You can derive
faster clocks by using the 40-MHz reference. In Figure 16.4 we com-
piled our application using an 80-MHz clock.
408 Chapter Sixteen
■ Compiled rate. You can compile your code to use clocks of 40, 80,
120, 160, and even 200 MHz, but the faster you go, the more jitter
you introduce. Additionally, the digital I/O of the XCV1000 is slew-
rate-limited at 20 MHz. The safest practice is to use 40 MHz as your
base clock unless you really need some section of code to run faster.
As you develop applications for the FPGA, take time during develop-
ment to compile and notice how many slices your application consumes.
You can even cut and paste the compile report onto the block diagram
or into the history report to keep a running record of the application
size. Excessively convoluted logic consumes valuable resources, can
cause unnecessary latency, and may even cause your code not to com-
pile. To get maximum performance and the best use out of the FPGA,
keep the code on the FPGA simple and clean.
Debugging
LabVIEW has some great debugging tools, but unfortunately they
aren’t available for compiled code on the FPGA. However, you can use
LabVIEW’s graphical debugging tools, including probes and execution
highlighting, when you use the Emulation mode. To use the emulator,
right-click on the FPGA Target in the project folder and select Target
Properties. . . . I/O in Emulation mode can use randomly generated
data, or you can use the actual I/O hardware of your FPGA target.
This is a great way to do some rapid prototyping and debugging since
VIs in Emulation mode do not have to go through the long compilation
process. Of course there is no real-time response in Emulation mode
since your VI is battling every other process on the host computer for
CPU time. To get hard real-time response, you have to compile and run
down on the hardware where it is much harder to peak into a running
process. See Figure 16.5.
Place digital outputs at strategic points in your code, and toggle
the digital line each time that section of code executes. Each loop is a
potential point you may want to look at. Toggling a digital output once
each loop iteration is a great way to see what is happening. Once your
VI is compiled and running, you can connect an oscilloscope to these
digital watch points to get a real-time look into your process. You might
be surprised at what you find out.
Synchronous execution
and the enable chain
LabVIEW is a data-flow-driven programming language, and LabVIEW
FPGA takes special care to enforce dataflow in the generated code.
This is done by transparently adding synchronization logic to code
LabVIEW FPGA 409
on your block diagram. Figure 16.6A shows a few simple logic func-
tions chained together. What you might expect, if you are familiar with
VHDL programming, is for all the logic functions to map to combinato-
rial logic and execute in a single clock cycle (as shown in Figure 16.6B).
But LabVIEW treats each function as a separate entity requiring syn-
chronization at its inputs and outputs, and LabVIEW does not opti-
mize the code across synchronization boundaries. Instead of getting
the code in Figure 16.6B, you get code that functions like that in Figure
16.6C. This is a huge difference in performance; if each function takes
one clock tick, we’ve gone from one tick to four ticks of the clock to exe-
cute the same code. Figure 16.6C doesn’t tell the whole story though,
Figure 16.6(A) What you drew. (B) What you wanted. (C) What you
got. LabVIEW enforces dataflow on the VHDL code it generates.
410 Chapter Sixteen
Figure 16.7 Two loops toggle digital lines as fast as possible. (A)
Loop overhead and the enable chain cause loop A to execute in
3 clock ticks. (B) Single-cycle Timed Loops execute completely
with each clock tick.
LabVIEW FPGA 411
Figure 16.9New FPGA Derived Clock dialog. Configure new derivative clocks based on
the onboard clock for your target.
we could change the top-level clock to use our 80-MHz derived clock if
we needed things to run a little faster. Using a faster top-level clock
allows the code to execute faster, but it also introduces jitter into the
system. If you set a clock rate that is too fast, the VI will not compile and
you will have to select a lower clock and compile again. Figure 16.10
Figure 16.10 Single-cycle Timed Loops can use derived clocks as their tim-
ing source.
LabVIEW FPGA 413
Parallelism
Programming for an FPGA is distinctively different from that for Lab-
VIEW on a desktop. An FPGA executes parallel instructions in par-
allel whereas a microprocessor executes instructions sequentially. It’s
important to take advantage of this parallelism when you develop your
program. A parallel state machine turns out to be an easy way to build
a serial bus controller. Figure 16.11 shows a timing diagram for a sim-
ple serial peripheral interface (SPI) communication sequence. SPI is a
common bus in chip-to-chip communications between microprocessors
and peripheral components. One of the strengths of LabVIEW FPGA
is the ability to interface with low-level devices using communications
busses such as SPI. This lets you build test systems that interface
directly with board-level components. It is fairly easy to design an SPI
bus controller by counting clock edges on a timing diagram. Each com-
ponent will have a diagram in its data sheet, and it’s fairly simple to get
out a ruler and mark out the transitions. Each clock edge represents a
different state. The Chip Select line is set low in state 0 and high in 17.
Data is latched out on the MOSI (master out slave in) line on each fall-
ing edge. The MISO (master in slave out) latches in data on each rising
edge. Figure 16.12 illustrates how parallel execution with a separate
case for each line results in a clean, easy-to-understand diagram.
Pipelining
A powerful extension of parallelism is pipelining. Pipelining breaks a
series of sequential tasks into parallel steps for faster loop rates. This
parallel execution can end up increasing the throughput of a sequential
task. The total time for the code to execute is still the sum of the time
required for each individual task, but the loop rate is faster because
414 Chapter Sixteen
Conclusions
The great thing about programming with LabVIEW FPGA is that it’s
just LabVIEW. You don’t have to learn another language or develop-
ment environment. The hardware interface tool LabVIEW FPGA puts
in your toolbox is powerful. Once you understand the subtleties of
the hardware target, you’ll wonder how you ever did without it. Just
remember to keep your code clean and simple, take advantage of the
LabVIEW FPGA 415
Bibliography
“Thinking Inside the Chip,” NIWeek04, National Instruments Corporation, 11500 N.
Mopac Expressway, Austin, Tex., 2004.
“Virtex-II Platform FPGAs: Complete Data Sheet,” DS031 (v, 3.3), Xilinx, Inc., 2100 Logic
Drive, San Jose, Calif., June 24, 2004.
This page intentionally left blank
Chapter
LabVIEW Embedded
17
In the last edition of this book we showed you how to build an embed-
ded system using LabVIEW for Linux and a PC104 computer. It was
big and power-hungry, but at the time it was the only way to build an
embedded system with LabVIEW. National Instruments has been hard
at work on its own version of LabVIEW for embedded systems, and it’s
so much better than anything you or I could do. LabVIEW can now
target any 32-bit microporcessor (Figure 17.1) To get an inside look at
LabVIEW Embedded, we asked P. J. Tanzillo, National Instruments’
product support engineer for the LabVIEW Embedded platform, to
give us a hand and tell us what this new technology is all about. So,
without further ado, here’s P. J.:
Introduction
LabVIEW is much more than a tool for instrument control and data acqui-
sition. With its extensive built in signal processing and analysis libraries,
LabVIEW has developed into an ideal environment for algorithm develop-
ment and system design. Furthermore, with the addition of the LabVIEW
modules like the LabVIEW Real-Time Module and the LabVIEW FPGA
Module, it has become possible to run these applications not just on a
Windows PC, but on other hardware platforms as well. In this chapter,
we will examine the newest such module called the LabVIEW Embed-
ded Development Module. This latest addition to the LabVIEW family
allows you to develop applications for any 32-bit microprocessor.
History
In the late 1990s, it became clear that Personal Digital Assistants (PDAs)
were gaining mainstream acceptance as a viable computing platform.
National Instruments saw this platform as a potential compliment to their
417
418 Chapter Seventeen
Figure 17.1 LabVIEW targets devices from the desktop to the palmtop, and
beyond. LabVIEW Embedded can target any 32-bit microprocessor. (Photo cour-
tesy of National Instruments)
Therefore, even more effort would have to be made to reduce the code foot-
print of LabVIEW built executables. It became clear that a new approach
would be needed to port LabVIEW to these small, inconsistent targets.
First, LabVIEW must traverse the block diagram and generate ANSI C
code for the top-level VI as well as for every subVI in the VI hierarchy. The
LabVIEW C Code Generator generates simple C primitives wherever
possible. For example, while loops convert to while() statements, and the
add function generates a simple ‘+’. More complex functions, however, can-
not provide such a direct mapping. Therefore, an additional component
of the LabVIEW generated C Code is the LabVIEW C Runtime Library.
This is analogous to the LabVIEW run-time engine on the desktop, and
we will discuss the LabVIEW runtime library in greater detail later in
this chapter.
Finally, in order to have live front panel controls and indicators, the target
and the host need to establish communications. This can again be done
through any protocol that can be understood by both the host and the
target. This can be implemented in one of two possible ways.
LabVIEW Embedded 421
The other debug method is called on-chip debugging. This method does
not call for any additional functions to be added to the generated code,
so debug and release builds are identical. Rather than adding additional
function calls, here the LabVIEW C code generator adds only additional
comments. Each wire, control, and indicator on the block diagram results
in a separate generated comment. At compile time, these comments are
mapped to physical memory locations on the chip, and this information is
stored in a debug database. When LabVIEW needs a value of a specific
wire, indicator, or probe, it looks to the debug database to see what seg-
ment of physical memory that it needs to read. Then, it requests the value
from the JTAG emulator/debugger using that debugger’s provided API.
This approach is considerably more complex, and thus, it requires more
implementation effort. For example, the means of mapping lines of C code
to physical memory locations is compiler dependent and non-trivial. In
addition, if the debugging interface requires lengthy processor halts to
read memory, this method can be rather intrusive to the program’s exe-
cution. However, high quality probes that can efficiently read and write
memory can result in almost completely non-intrusive debugging. In addi-
tion, full debugging information can be included without increasing the
final code footprint.
Hardware requirements:
■ 32-bit processor architecture
■ 256K of application memory (plus whatever is required for your embed-
ded OS)
Once your hardware, toolchain, and BSP are in place, you can begin the
process of porting LabVIEW to your target. This consists of four main
steps, each of which we will discuss in detail. They are:
The rest of this chapter will be spent discussing these four steps in detail.
In order for your target to support some of the more advanced features in
LabVIEW, you will also need to implement some OS specific functions. For
instance, in order for the timing primitives to be supported, you will need
to implement the LVGetTicks() function. Similar functions are defined in
the porting guide that are necessary for further LabVIEW synchroniza-
tion and timing features like Timed Loops and Notifiers. Source files for
the OS specific pieces of the LabVIEW C runtime library can be found in
an operating system specific folder in the C Generation codebase.
Finally, the main entry point for all applications that LabVIEW gener-
ates can be found in LVEmbeddedMain.c. This file contains functions that
perform set up and tear down of common pieces such as occurrences and
FIFOs as well as any hardware specific setup and tear down routines
that are necessary for a given target. The main function in LVEmbed-
dedMain.c initializes all global variables and then calls the top-level VI
in the project. After the top-level VI has completed, a shutdown routine
is completed. These initialization and shutdown routines are defined by
two macros in LVEmbeddedMain.c called LV_PLATFORM_INIT and
LV_PLATFORM_FINI. These macros are defined per target rather than
per OS because different routines may be required for separate hard-
ware platforms that share the same OS. For instance, although they both
run eCos, a PowerPC for engine control and an ARM processor in the
Nintendo Gameboy Advance would require very different hardware ini-
tialization routines.
The output of such a command will be an object file (*.o) that will have
the same file name as the C source file. A similar compile command will
need to be executed for every C source file in your project (as well as the
LabVIEW C runtime library). Once the compile step is completed, you will
need to link all the compiled object files and pre-existing libraries into a
single stand-alone executable file. This is done by running the compiler
424 Chapter Seventeen
command again, this time with appropriate flags set so that linking is
performed. This command will typically look like this:
Once the application has been successfully linked, you will have a cross
compiled executable for your target platform. This is what we will be
referring to as “building” your application. Though the syntax and order
of components of the compiler for your target may vary, these key compo-
nents will need to be in place for the compiler and the linker to succeed.
The LabVIEW Embedded Project Manager is also the place where you
interact with the target and the project. For example, there is a “Build”
button that will cause LabVIEW to generate the C Code for all of the VIs
in the project (as well as for the entire VI hierarchy for each of these VIs)
and compile and link the generated code. The implementation of this and
other actions within the LabVIEW Embedded Project Manager vary from
target to target, and will therefore need to be modified when adding sup-
port for a new target.
Figure 17.4 Build.vi turns your project into an executable. Only one case is shown for clarity.
426 Chapter Seventeen
Target_OnSelect
When exploring the target plug-in VIs for the first time, it makes the most
sense to begin with the Target_OnSelect.vi. This VI consists of a series
of cases in a case structure, each of which corresponds to a target menu
selection from the Embedded Project Manager. In the future, this VI will
be replaced by a target specific xml file that will provide the mapping from
the LabVIEW project to specific implementation VIs.
Common target menu selections like Build, Run, Debug, and Build Options
will likely be present for most targets, but you have the option to add cus-
tom target menu options for your target by modifying the Target_Menu.
vi and the Target_OnSelect.vi. For example, the LabVIEW Embedded
Module for ADI Blackfin Processors includes an option to reset the Black-
fin board. In the future, this information will also be defined in the target
specific xml file.
Code generation
1. The CGen plug-in VI is responsible for traversing the block diagram of
the Top-Level VI and generating ANSI C code for all of the applicable
VIs in the project. This capability is built into LabVIEW Embedded, and
LabVIEW Embedded 427
files that the user has included in the project. In essence, this VI is
responsible for building up the correct compiler command in a large
string and executing this from the command line using the SysExec.
vi from the LabVIEW functions palette. The inputs to this VI are typi-
cally the configuration options for the project, an array of source files,
and an array of file paths that need to be included. The result of a suc-
cessful execution of this VI is a project folder full of object files that will
be used by the linker to produce the executable. To observe the build
process of any target, you will only need to have the ScriptCompiler.vi
open while you are building a project.
4. The ScriptLinker.vi works in much the same way as the Script-
Compiler.vi, only now, the command that is executed is the command
necessary to link all of the object files to produce one stand alone
executable. Just as you monitored the progress of the ScriptCompiler.vi
in the previous section, you can also monitor the execution of the
ScriptLinker.vi in a similar way.
5. The Debug.vi is responsible for initiating a connection between Lab-
VIEW and the running application so that you can have an interactive
LabVIEW debugging experience including probes, breakpoints, and live
front panels. This can be completed in one of two ways—instrumented
debugging (TCP, Serial, CAN, etc.) and on-chip debugging (JTAG). In
either case, the Debug plug-in VI relies on the proper and complete
implementation of the debugging plug-in VIs. Once one or both of these
mechanisms are in place, you only need to begin the debugging session
from the Debug.vi and LabVIEW will handle the background commu-
nications to the application. For instrumented debugging via TCP/IP
four things need to happen:
■ The generated code has debug information included.
■ The compiler directive of UseTCPDebugging=1 is set.
■ The application is run with the correct host IP address as a param-
eter.
■ The niTargetStartTCPDebug is called after the application is running.
Figure 17.5 Elemental I/O provides simple access to analog and digital peripherals.
I/O driver development can differ greatly depending on the type and com-
plexity of your device. For example, digital input and output is typically
done by accessing the general purpose I/O pins on the device, and devel-
opment of a driver can be relatively straightforward. However, a buffered
DMA analog acquisition through an external A/D converter can be much
more challenging to develop. In all cases, however, as with any I/O, there
will typically be some initialization code that must be executed before
the acquisition can be completed. In the traditional LabVIEW I/O pro-
gramming model of Open -> Read/Write -> Close, the initialization code
Figure 17.6Inline C node allows you to add your own C and assembly code directly
on the LabVIEW block diagram.
LabVIEW Embedded 431
obviously resides in the open VI. However, since an elemental I/O node
consists of a single block, the implementation of this becomes slightly less
apparent. To accomplish initialization within the Elemental I/O single
node framework, it is best to declare a static integer in your Inline C node
called “initialized.” This can act as a flag that should be set and reset
accordingly as the acquisition is initialized and completed.
Use global variables instead of local variables. Every time a local vari-
able is accessed, extra code is executed to synchronize it with the front
panel. Code performance can be improved, in many cases, by using a
global variable instead of a local. The global has no extra front panel syn-
chronization code and so executes slightly faster than a local.
Use shift registers instead of loop tunnels for large arrays. When pass-
ing a large array through a loop tunnel, the original value must be copied
into the array location at the beginning of each iteration, which can be
expensive. The shift register does not perform this copy operation, but
make sure to wire in the left shift register to the right if you don’t want
the data values to change (Figure 17.8A).
432 Chapter Seventeen
Avoid Case structures for simple decision making. For simple decision
making in LabVIEW, it is often faster to use the Select function rather
than a Case structure. Since each case in a Case structure can contain its
own block diagram there is significantly more overhead associated with
this structure when compared with a Select function. However, it is some-
times more optimal to use a case structure if one case executes a large
amount of code and the other cases execute very little code. The decision
to use a Select function versus a Case structure should be made on a case
by case basis.
LabVIEW Embedded 433
Figure 17.8 (A) Memory for look up table is allocated once outside
the loop. (B) A single copy of look up table is stored in global vari-
able. (C) Memory for look up table is allocated each loop iteration.
Often the best results can be obtained by using a hybrid of LabVIEW and
C code. The Inline C Node and Call Library Node allow the use of C code
directly within your LabVIEW block diagram. See the Embedded Devel-
opment Module documentation for more information on the use of the
Inline C and Call Library Nodes. By following good embedded program-
ming practices, you can better optimize your code to meet the constraints
of your embedded application. Implementing one or two of these tech-
niques may noticeably improve the performance of your application, but
the best approach is to incorporate a combination of all these techniques.
Figure 17.11 Interrupt driven Timed Loop executes once per interrupt.
The timed loop is a specialized version of the while loop that was added to
LabVIEW 7.1. This structure allows you not only to control the loop rate,
but also set independent priorities for the execution thread of that loop. In
LabVIEW Embedded, this structure simply spawns an OS thread, and the
embedded OS’s scheduler handles scheduling the different threads. This
is relevant to interrupt driven programming because you also have the
ability to assign an external timing source to a timed loop. Furthermore,
you can then programmatically fire a timed loop from an ISR VI. This
allows for a block diagram with multiple parallel timed loops with each
loop’s timing source associated with some interrupt (Figure 17.11).
437
438 Chapter Eighteen
Industrial standards
Standards set by the ISA and other organizations address both the
physical plant—including instruments, tanks, valves, piping, wiring,
and so forth—as well as the man-machine interface (MMI) and any
associated software and documentation. The classic MMI was a silk-
screened control panel filled with controllers and sequencers, digital
and analog display devices, chart recorders, and plenty of knobs and
switches. Nowadays, we can use software-based virtual instruments
to mimic these classic MMI functions. And what better way is there to
create virtual instruments than LabVIEW?
Engineers communicate primarily through drawings, a simple but
often overlooked fact. In process control, drawing standards have been in
place long enough that it is possible to design and build a large process
plant with just a few basic types of drawings, all of which are thoroughly
Process Control Applications 439
specified by ISA standards. By the way, ISA standards are also regis-
tered as ANSI (American National Standards Institute) standards.
Some of these drawings are of particular interest to you, the con-
trol engineer. (See? We’ve already promoted you to a new position, and
you’re only on the third page. Now read on, or you’re fired.)
Piping and instrument diagrams and symbols. The single most impor-
tant drawing for your plant is the piping and instrument diagram
(P&ID). It shows the interconnections of all the vessels, pipes, valves,
pumps, transducers, transmitters, and control loops. A simple P&ID
is shown in Figure 18.1. From such a drawing, you should be able to
understand all the fluid flows and the purpose of every major item
in the plant. Furthermore, the identifiers, or tag names, of every
instrument are shown and are consistent throughout all drawings and
specifications for the plant. The P&ID is the key to a coherent plant
design, and it’s a place to start when you create graphical displays in
LabVIEW.
MAX ETHYLENE
1 (MAX) (100PPM)
TOTAL C2'S
TO 2ND STAGE PCV
FT FI AC (0.4% WT) AC
COMPRESSOR 1329
1313 1313
PT
1328
PC
1328
2 PC
FILTER FILTER
1329
C2 C2=
PCV PT C3= C3=
1328 1329
LCV ETHANE C2
1319 (NEGLECT
ETHYLENE)
1 PROPYLENE ETHYLENE C2=
FT TI TE C3= AT
NEW 1314 1344 1344
FROM E-A3-63 10
K.O. FC V/L
TI TE
DRUM 1314 TE TC
SET 1345 1345 20
F-A3–201 21 1346 1346
LAG L
LC LT LT LC
X V
1318 1318 1319 1319
ORGANIC F- A3-200 NEW
TE TI FC FT
1347 1347 1317 1317
WATER STEAM
LCV
1318
50 PSIG
4
WATER LE LT
1320 1320 E - A3-200A E - A3-200B
F -A3-202 LT LC
1321 1321
TO DEETHANIZER
LCV BOTTOMS
1321
Loop Types:
FCV
1. Simple Indication Loop FC FT 1315
2. Simple Control Loop 1316 1315
Figure 18.1 A piping and instrument diagram is the basis for a good process plant
design.
440 Chapter Eighteen
TABLE 18.1 Abbreviated List for the Generation of ISA Standard Instrument Tags
First letter Second letter
measured or readout or Succeeding letters
initiating variable output function (if required)
A Analysis Alarm Alarm
B Burner, combustion User’s choice User’s choice
C Conductivity Controller Controller
D Density/damper Differential
E Voltage (elect) Primary element
F Flow Ratio/bias Ratio/bias
G Gauging (dimensional) Glass (viewing device)
H Hand (manual) High
I Current (electrical) Indicate Indicate
J Power Scanner
K Time Control station
L Level Light Low
M Moisture/mass Middle/intermediate
N User’s choice User’s choice User’s choice
O User’s choice Orifice, restriction
P Pressure Point (test) connection
Q Quantity Totalize/quantity
R Radiation Record Record
S Speed/frequency Safety/switch Switch
T Temperature Transmitter Transmitter
U Multipoint/variable Multifunction Multifunction
V Vibration Valve, damper, louver Valve, damper, louver
W Weight Well
X Special Special Special
Y Interlock or state Relay/compute Relay/compute
Z Position, dimension Damper or louver drive
SOURCE: From S5.1, Instrumentation Symbols and Identification. Copyright 1984 by Instrument Society of
America. Reprinted by permission.
cluster elements, and frames of Case and Sequence structures that are
dedicated to processing a certain channel. On the panel, tag names make
a convenient, traceable, and unambiguous way to name various objects.
Here are some examples of common tag names with explanations:
The little balloons all over the P&ID contain tag names for each
instrument. Some balloons have lines through or boxes around them
that convey information about the instrument’s location and the
method by which a readout may be obtained. Figure 18.2 shows some
of the more common symbols. The intent is to differentiate between a
field-mounted instrument, such as a valve or mechanical gauge, and
various remotely mounted electronic or computer displays.
Every pipe, connection, valve, actuator, transducer, and function in
the plant has an appropriate symbol, and these are also covered by ISA
standard S5.1. Figure 18.3 shows some examples that you would be
likely to see on the P&ID for any major industrial plant. Your system
may use specialized instruments that are not explicitly covered by the
standard. In that case, you are free to improvise while keeping with
HV
Discrete instrument, field-mounted
552
Electrical control
signal
PY
552 HS
FIR
552
531
Instrument S 3-way solenoid valve
air supply Temperature
Pneumatic line gauge
Diaphragm- FT
IA operated valve
531 TI
530
VENT
Figure 18.3 Some instrument symbols, showing valves, transmitters, and associated
connections. These are right out of the standards documents; your situation may
require some improvising.
the spirit of the standard. None of this is law, you know; it’s just there
to help. You can also use replicas of these symbols on some of your
LabVIEW screens to make it more understandable to the technicians
who build and operate the facility.
Other drawing and design standards. Another of our favorite ISA stan-
dards, S5.4, addresses instrument loop diagrams, which are the
control engineer’s electrical wiring diagram. An example appears in
Figure 18.4. The reasons for the format of a loop diagram become clear
once you have worked in a large plant environment where a signal may
pass through several junction boxes or terminal panels before finally
arriving at the computer or controller input. When a field technician
must install or troubleshoot such a system, having one (or only a few)
channels per page in a consistent format is most appreciated.
Notice that the tag name appears prominently in the title strip,
among other places. This is how the drawings are indexed, because the
tag name is the universal identifier. The loop diagram also tells you
where each item is located, which cables the signal runs through, instru-
ment specifications, calibration values (electrical as well as engineering
units), and computer database or controller setting information.
We’ve found this concise drawing format to be useful in many labora-
tory installations as well. It is easy to follow and easy to maintain.
There are commercial instrument database programs available that
contain built-in forms, drawing tools, and cross-referencing capability.
Figure 18.4 An instrument loop diagram, the control engineer’s guide to wiring. (Reprinted by
permission. Copyright ©1991 by the Instrument Society of America. From S5.4, Instrument Loop
Diagrams.)
If you are involved with the design of a major facility, many other
national standards will come into play, such as the National Electri-
cal Code and the Uniform Mechanical Code. That’s why plants are
designed by multidisciplinary teams with many engineers and design-
ers who are well versed in their respective areas of expertise. With
these standards in hand and a few process control reference books, you
might well move beyond the level of the mere LabVIEW hacker and
into the realm of the registered professional control engineer.
Figure 18.5 Signal flow diagrams for proportional, integral, and derivative algorithms
are the basis for much of today’s practical feedback control. These are just a few exam-
ples of P/PI/PID configuration; there are many more in actual use.
Derivative action is practical only where the process variable
signal has little noise or where suitable filtering or limiting has been
applied.
National Instruments offers a set of PID algorithms available in the
PID Control Toolkit. You can use them to build all kinds of control
schemes; usage will be discussed later in this chapter. PID control can
also be accomplished through the use of external “smart” controllers
and modules. Greg Shinskey’s excellent book Process Control Systems
(1988) discusses the application, design, and tuning of industrial con-
trollers from a practical point of view. He’s our kinda guy.
There are many alternatives to the common PID algorithm so often
used in industrial control. For instance, there are algorithms based on
state variable analysis which rely on a fairly accurate model of the pro-
cess to obtain an optimal control algorithm. Adaptive controllers, which
may or may not be based on a PID algorithm, modify the actions of the
controller in response to changes in the characteristics of the process.
Predictive controllers attempt to predict the trajectory of a process to
minimize overshoot in controller response. Modern fuzzy logic control-
lers are also available in LabVIEW in the PID Toolkit. That package
includes a membership function editor, a rule-base editor, and an infer-
ence engine that you can incorporate into your system. If you have
experience in control theory, there are few limitations to what you can
accomplish in the graphical programming environment. We encourage
you to develop advanced control VIs and make them available to the
rest of us. You might even make some money!
So far, we have been looking at continuous control concepts that
apply to steady-state processes where feedback is applicable. There are
other situations. Sequential control applies where discrete, ordered
events occur over a period of time. Valves that open in a certain order,
parts pickers, robots, and conveyors are processes that are sequential
in nature. Batch processes may be sequential at start-up and shut-
down, but operate in steady state throughout the middle of an opera-
tion. The difficulty with batch operations is that the continuous control
algorithms need to be modified or compromised in some way to handle
the transient conditions during start-up, shutdown, and process upsets.
A special form of batch process, called a recipe operation, uses some
form of specification entry to determine the sequence of events and
steady-state setpoints for each batch. Typical recipe processes are
paint and fuel formulation (and making cookies!), where special blends
of ingredients and processing conditions are required, depending on
the final product.
Early sequential control systems used relay logic, where electrome-
chanical switching devices were combined in such a way as to imple-
ment boolean logic circuits. Other elements such as timers and stepper
switches were added to facilitate time-dependent operations.
Process signals
The signals you will encounter in most process control situations are
low-frequency or dc analog signals and digital on/off signals, both
inputs and outputs. Table 18.2 lists some of the more common ones.
In a laboratory situation, this list would be augmented with lots of
special analytical instruments, making your control system heavy on
data acquisition needs. Actually, most control systems end up that way
because it takes lots of information to accurately control a process.
Industry likes to differentiate between transducers and trans-
mitters. In process control jargon, the simple transducer (like a ther-
mocouple) is called a primary element. The signal conditioner that
connects to a primary element is called a transmitter.
In the United States, the most common analog transmitter and con-
troller signals are 4–20-mA current loops, followed by a variety of volt-
age signals including 1–5, 0–5, and 0–10 V. Current loops are preferred
because they are resistant to ground referencing problems and voltage
drops. Most transmitters have a maximum bandwidth of a few hertz,
and some offer an adjustable time constant which you can use to opti-
mize high-frequency noise rejection. To interface 4–20-mA signals to an
ordinary voltage-sensing input, you will generally add a 250-Ω preci-
sion resistor in parallel with the analog input. (If you’re using National
Instruments’ signal conditioning—particularly SCXI—order its 250-Ω
terminating resistor kits.) The resulting voltage is then 1–5 V. When
you write your data acquisition program, remember to subtract out the
1-V offset before scaling to engineering units.
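Here is that arithmetic spelled out as a small Python sketch (not LabVIEW code); the transmitter range used in the example is made up for illustration.

    def current_loop_to_engineering(volts, eu_min, eu_max):
        # 4-20 mA through a 250-ohm resistor gives 1-5 V. Remove the 1 V
        # offset, then scale the remaining 4 V span to engineering units.
        fraction = (volts - 1.0) / 4.0
        return eu_min + fraction * (eu_max - eu_min)

    # Example: a transmitter ranged 0-300 psig reading 2.6 V is at 120 psig.
    print(current_loop_to_engineering(2.6, 0.0, 300.0))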
On/off signals are generally 24 V dc, 24 V ac, or 120 V ac. We pre-
fer to use low-voltage signals because they are safer for personnel. In
areas where potentially flammable dust or vapors may be present, the
National Electrical Code requires you to eliminate sources of ignition.
Low-voltage signals can help you meet these requirements as well.
The world of process control is going through a major change with
emerging digital field bus standards. These busses are designed to
replace the 4–20-mA analog signal connections that have historically
connected the field device to the distributed control systems with
a digital bus that interconnects several field devices. In addition to
using multidrop digital communications networks, these busses use
very intelligent field devices. They are designed for device interoper-
ability, which means that any device can understand data supplied by
any other device. Control applications are distributed across devices,
each with function blocks that execute a given control algorithm. For
instance, one control algorithm can orchestrate a pressure transmitter,
a dedicated feedback controller, and a valve actuator in a field-based
control loop.
The first of the digital communication protocols, HART, was created
by Rosemount and supported by hundreds of manufacturers. It adds
a high-frequency carrier (such as that used with 1200-baud modems)
which rides on top of the usual 4–20-mA current loop signal. Up to
16 transmitters and/or controllers can reside on one HART bus, which
is simply a twisted pair of wires. HART allows you to exchange mes-
sages with smart field devices, including data such as calibration
information and the values of multiple outputs—things that are quite
impossible with ordinary analog signals.
Perhaps the most important new standard is Foundation Field-
bus (from the Fieldbus Foundation, naturally), which has interna-
tional industry backing. The ISA is also developing a new standard,
SP-50, which will ultimately align with Foundation Fieldbus and other
standards. Because the field devices on these field bus networks are so
intelligent, and because the variables shared by devices are completely
defined, the systems integration effort is greatly reduced. It is thus
much more practical to implement a supervisory system with LabVIEW
and one or more physical connections to the field bus network. You
no longer need drivers for each specific field device. Instead, you can
use drivers for classes of field devices, such as pressure transmitters
or valves. National Instruments is an active player in the Foundation
Fieldbus development process, offering several plug-in boards and NI-
FBUS host software. At this writing, the standard is well established.
Figure 18.6 A distributed control system, or DCS, encompasses many nodes communicat-
ing over networks and many I/O points.
Enter the personal computer. At the other end of the scale from a mega-
DCS is the personal computer, which has made an incredible impact on
plant automation. The DCS world has provided us with software and
hardware integration techniques that have successfully migrated to
desktop machines. A host of manufacturers now offer ready-to-run PC-
based process control and SCADA packages with most of the features
of their larger cousins, but with a much lower price tag and a level of
complexity that’s almost . . . human in scale. LabVIEW with DSC fits
into this category.
A wide range of control problems can be solved by these cost-effective
small systems. In the simplest cases, all you need is a PXI system
running LabVIEW with plug-in I/O boards or maybe some outboard
I/O interface hardware. You can implement all the classical control
schemes— continuous, sequential, batch, and recipe—with the functions
built into LabVIEW and have a good user interface on top of it all. Such
a system is easy to maintain because there is only one programming
language, and only rarely would you need the services of a consultant
or systems house to complete your project. The information in this
chapter can lead you to a realistic solution for these applications.
There are some limitations with any stand-alone PC-based pro-
cess control system. Because one machine is responsible for servicing
real-time control algorithms as well as the user interface with all its
graphics, there can be problems with real-time response. If you need
millisecond response, consider using outboard smart controllers (see
the following section) or LabVIEW RT. The I/O point count is another
factor to consider. Piling on 3000 analog channels is likely to bring
your machine to its knees; you must consider some kind of distributed
processing scheme.
LabVIEW, like the specialized process control packages, permits you
to connect several PCs in a network, much like a DCS. You can config-
ure your system in such a way that the real-time tasks are assumed
by dedicated PCs that serve as I/O control processors, while other PCs
serve as the man-machine interfaces, data recorders, and so forth. All
the machines run LabVIEW and communicate via a local-area network
using supported protocols such as TCP/IP. The result is an expandable
system with the distributed power of a DCS, but at a scale that you
(and perhaps a small staff) can create and manage yourself. The prob-
lem is programming.
*A friend of ours at a major process control system manufacturer reports that Bill
Gates and company have effectively lowered our expectations. Prior to the great rise in
the popularity of Windows in process control, systems were based on VAX/VMS, UNIX,
and very expensive custom software. If a system crashed in a plant operation, our friend’s
company would get a nasty phone call saying, “This had better never happen again!”
Now, he finds that Windows and the complex application software come crashing down
so often that plant operators hardly bat an eye; they just reboot. It’s a sad fact, and we
wonder where this is all going.
Cost. The bottom line is always cost. One cost factor is the need for
system integration services and consultants, which is minimal with
LabVIEW, but mandatory with a DCS. Another factor is the num-
ber of computers or smart controllers that you might need: A stand-
alone PC is probably cheaper than a PC plus a PLC. Also, consider
the long-term costs such as operator training, software upgrades and
modifications, and maintenance. Plan to evaluate several process
control packages— as well as LabVIEW with DSC—before commit-
ting your resources.
You can buy PLCs with a wide range of I/O capacity, program storage
capacity, and CPU speed to suit your particular project. The simplest
PLCs, also called microcontrollers, replace a modest number of relays
and timers, do only discrete (on/off, or boolean) logic operations, and
support up to about 32 I/O points. Midrange PLCs add analog I/O and
communications features and support up to about 1024 I/O points. The
largest PLCs support several thousand I/O points and use the latest
microprocessors for very high performance (with a proportionally high
cost, as you might expect).
You usually program a PLC with a PC running a special-purpose
application. Ladder logic (Figure 18.7) is the most common language
for programming in the United States, though a different concept,
called GRAFCET, is more commonly used in Europe. Some program-
ming applications also support BASIC, Pascal, or C, either intermixed
with ladder logic or as a pure high-level language just as you would use
with any computer system.
There are about 100 PLC manufacturers in the world today. Some of
the major brands are currently supported by OLE for Process Control
(OPC) and LabVIEW drivers. Check the National Instruments Web
site for new releases.
The ISA offers some PLC-related training you may be interested in.
Course number T420, Fundamentals of Programmable Controllers,
gives you a good overview of the concepts required to use PLCs. The
textbook for the class, Programmable Controllers: Theory and Imple-
mentation (Bryan and Bryan 1988), is very good even if you can’t attend
the class. Another course, T425, Programmable Controller Applica-
tions, gets into practical aspects of system design, hardware selection,
and programming in real situations.
Figure 18.8 PLCs like this Siemens model are popular in process control. SinecVIEW
is a LabVIEW driver package that communicates with Siemens PLCs via a serial
interface. (Photo courtesy of National Instruments and CITvzw Belgium.)
Many PLC networks support communications peer to peer, that is,
between individual PLCs without host intervention. A variety of
schemes exist for host computers as well, serving as masters or
slaves on the network.
Once a PLC has been programmed, your LabVIEW program is free
to read data from and write data to the PLC’s registers. A register may
represent a boolean (either zero or one), a set of booleans (perhaps 8 or
16 in one register), an ASCII character, or an integer or floating-point
number. Register access is performed at a surprisingly low level on
most PLCs. Instead of sending a message like you would with a GPIB
instrument (“start sequence 1”), you poke a value into a register that
your ladder logic program interprets as a command. For instance, the
ladder logic might be written such that a 1 in register number 10035
is interpreted as a closed contact in an interlock string that triggers a
sequential operation.
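In a textual language, that register-level style looks something like the following Python sketch. The plc object, its read_register and write_register methods, and the register numbers are all hypothetical; a real driver has its own interface.

    # Register map agreed upon with the ladder-logic programmer (hypothetical):
    START_SEQUENCE_REG = 10035   # host writes 1 to request the sequence
    TANK_LEVEL_REG = 10240       # PLC reports tank level, in hundredths

    def start_sequence(plc):
        # There is no "start sequence 1" message; we simply poke a value that
        # the ladder logic interprets as a closed interlock contact.
        plc.write_register(START_SEQUENCE_REG, 1)

    def read_tank_level(plc):
        return plc.read_register(TANK_LEVEL_REG) / 100.0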
There are a few things to watch out for when you program your PLC
and team it with LabVIEW or any other host computer. First, watch
out for conflicts when writing to registers. If the ladder logic and your
LabVIEW program both write to a register, you have an obvious con-
flict. Instead, all registers should be one-way; that is, either the PLC
writes to them or your LabVIEW program does. Second, you may want
to implement a watchdog program on the PLC that responds to fail-
ures of the host computer. This is especially important when a host
failure might leave the process in a dangerous condition. A watchdog is
a timer that the host must hit, or reset, periodically. If the timer runs to
its limit, the PLC takes a preprogrammed action. For instance, the PLC
may close important valves or reset the host computer in an attempt to
reawaken it from a locked-up state.
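The watchdog itself lives in the PLC's ladder logic, but the host side is simple enough to sketch in Python; the register number, period, and plc helper below are assumptions for illustration only.

    import time

    WATCHDOG_REG = 12000        # hypothetical register the host must "hit"
    WATCHDOG_PERIOD_S = 10.0    # PLC takes its safe action if not hit in time

    def host_heartbeat(plc, stop_event):
        """Host side of a watchdog: reset the timer well before it expires."""
        while not stop_event.is_set():
            plc.write_register(WATCHDOG_REG, 1)     # hit the watchdog
            time.sleep(WATCHDOG_PERIOD_S / 3.0)
    # On the PLC side, ladder logic takes its preprogrammed action (close
    # valves, reset the host) if the register is not hit within the period.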
PLC communications is another area where the Datalogging and
Supervisory Control (DSC) module driver model has advantages over
the much simpler LabVIEW driver model. With DSC, National Instru-
ments supplies a large set of OPC drivers that connect to a wide variety
of industrial I/O devices, including PLCs. Also, many hardware manu-
facturers supply OPC-compliant drivers that you can install and use
with DSC. OPC is a layered, plug-and-play communications interface
standard specified by Microsoft for use with all versions of Windows.
Essentially, it provides a high-performance client-server relationship
between field devices and user applications, including LabVIEW, Excel,
Visual Basic, and many other packages. There’s even an organization—
the OPC Foundation—that promotes the OPC standards. Visit them at
www.opcfoundation.org.
DSC comes with a Servers CD containing hundreds of OPC drivers that you install into your
operating system and configure with a utility program that is external
to LabVIEW. This basic configuration step tells the driver what com-
munications link you’re using (serial, Ethernet, etc.) and how many
and what kind of I/O modules you have installed. Once this low-level
step is performed, you work entirely within the features of the DSC
tool set to configure and use individual tags (I/O points). The Tag Con-
figuration Editor is the place where you create and manage tags.
For a given tag, you assign a tag name, a PLC hardware address, scale
factors, alarm limits, and trending specifications. For very large data-
bases, you may find it easier to manage the information in a spread-
sheet or a more sophisticated database program. In that case, the Tag
Configuration Editor allows you to export and import the entire tag
database as ASCII text. Once the tag database is loaded, the editor
does a consistency check to verify that the chosen OPC server recog-
nizes the hardware addresses that you’ve chosen.
When you install DSC, LabVIEW gains a large number of control and
function subpalettes. Among the functions are VIs to read, write, and
access properties of tags—and it’s really easy to do. Figure 18.9 shows
a few of the basic VIs and how they’re used for simple read/write activi-
ties. In most cases, you just supply a tag name and wire up the input
or output value. The database and OPC drivers take care of the rest.
You can also use G Wizards, which create blocks of G code on the dia-
gram. A pop-up item available on every control and indicator brings
up a dialog through which you assign a tag name and an action. For
instance, if you want an indicator to display the value of an analog tag,
a small While Loop with the Read Tag VI will appear on the diagram,
already wired to the indicator. A large number of options are available
such as alarm actions (color changes and blinking) as well as the fre-
quency of update.
An array of floating-point values is read by the PLC-5 Read Float VI. If the
tank level is greater than the setpoint, a drain pump is turned on by
setting a bit with the PLC-5 Write VI.
Figure 18.12 This example uses the Eurotherm 808/847 single-loop controller driver to update the
setpoint and read the process variable and alarm status.
Man-Machine Interfaces
When it comes to graphical man-machine interfaces, LabVIEW is
among the very best products you can choose. The library of stan-
dard and customizable controls is extensive; but more than that, you
can customize controls to mimic real process instruments, as Figures
18.13 and 18.14 show.
Figure 18.13 Some of the basic process displays that you can make in LabVIEW.
Figure 18.14 Here’s the panel of a subVI that mimics a single-loop controller
faceplate. You can call it from other VIs to operate many controllers since every-
thing is programmable.
When the subVI in Figure 18.14 is called, only the faceplate part of the panel is visible; the other items are just
parameters for the calling VI. This subVI is programmable in the sense
that the tag name, units, and other parameters are passed from the
caller. This fact permits you to use one subVI to operate many control
loops. The VI must be set to Show front panel when called.
The SP (setpoint) slider control is customized via the various pop-up
options and the Control Editor. We first added two extra sliders by
popping up on the control and selecting Add Slider. They serve as high
and low alarm limits. We set the Fill Options for each of these alarm
sliders to Fill to Maximum (upper one) and Fill to Minimum (lower
one). In the Control Editor, we effectively erased the digital indicators
for the two alarm limits by hiding them behind the control’s main digi-
tal display.
The diagram in Figure 18.15 is fairly complex, so we’ll go through
it step by step. It illustrates the use of Local variables for control ini-
tialization. The I/O hardware in this example is, again, the Eurotherm
808 SLC. This VI relies on the controller to perform the PID algorithm,
though you could write a set of subVIs that perform the same task
with the PID Control Toolkit in LabVIEW, in conjunction with any type
of I/O.
1. Tag name and Units are copied from incoming parameters to indi-
cators on the faceplate. This gives the faceplate a custom look—as if
it was written just for the particular channel in use.
2. Previous settings for Auto? (the auto/manual switch) and SP (the
setpoint slider) are written to the user controls on the faceplate by
using local variables. This step, like step 1, must be completed before
the While Loop begins, so they are contained in a Sequence struc-
ture with a wire from one of the items to the border of the While
Loop.
3. The process variable (current temperature) is read from the control-
ler, checked against the alarm limits, and displayed.
4. The current value for the setpoint is compared against the previous
value in a Shift Register. If the value has changed, the setpoint is
sent to the controller. This saves time by avoiding retransmission of
the same value over and over.
5. In a similar fashion, the Auto? switch is checked for change of state.
If it has changed, the new setting is sent to the controller.
6. If the mode is automatic, the Out (output) control is updated by
using a Local variable. The value is read from the controller.
7. If the mode is manual, Out supplies a value that is sent to the con-
troller. These two steps illustrate an acceptable use of read-write
controls that avoids race conditions or other aberrant behavior.
8. The loop runs every second until the user clicks Exit, after which
the final values for Auto? and SP/hi/lo are returned to the calling
VI for use next time this VI is called.
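To summarize the logic, here is the same loop restated as a Python sketch. The ctrl and ui objects and their methods are hypothetical stand-ins for the Eurotherm driver VIs and the front panel objects, not real interfaces.

    import time

    def faceplate_loop(ctrl, ui, tag_name, units, prev_sp, prev_auto):
        ui.show_labels(tag_name, units)            # step 1: label the faceplate
        ui.set_setpoint(prev_sp)                   # step 2: restore previous
        ui.set_auto(prev_auto)                     #         settings
        last_sp, last_auto = prev_sp, prev_auto
        while not ui.exit_pressed():               # step 8: loop until Exit
            pv = ctrl.read_process_variable()      # step 3: read, check alarms,
            ui.show_pv(pv)                         #         and display the PV
            sp = ui.get_setpoint()                 # step 4: send SP only if it
            if sp != last_sp:                      #         has changed
                ctrl.write_setpoint(sp)
                last_sp = sp
            auto = ui.get_auto()                   # step 5: send mode on change
            if auto != last_auto:
                ctrl.write_auto(auto)
                last_auto = auto
            if auto:                               # step 6: automatic: display
                ui.set_output(ctrl.read_output())  #         the controller output
            else:                                  # step 7: manual: send the Out
                ctrl.write_output(ui.get_output()) #         control to the controller
            time.sleep(1.0)
        return ui.get_setpoint(), ui.get_auto()    # final values for the caller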
Figure 18.16 An application that uses the Faceplate Controller subVI. The Shift Register acts
as a database for previous controller values.
When the faceplate subVI returns, the corresponding element of the controller
data array is replaced. The next time this controller is called, data from
the previous call will be available from the Shift Register. Note the use
of Bundle by Name and Unbundle by Name. These functions show
the signal names so you can keep them straight.
This overall procedure of indexing, unbundling, bundling, and replac-
ing an array element is a versatile database management concept that
you can use in configuration management. Sometimes the data struc-
tures are very complex (arrays of clusters of arrays, etc.), but the proce-
dure is the same, and the diagram is very symmetrical when properly
laid out. Little memory management is required, so the operations are
reasonably fast. One final note regarding the use of global memory of
this type. All database updates must be controlled by one VI to serial-
ize the operations. If you access a global variable from many locations,
sooner or later you will encounter a race condition where two callers
attempt to write data at the same time. There is no way of knowing
who will get there first. Each caller got an original copy of the data at
the same time, but the last one to write the data wins. This is true for
both the built-in LabVIEW globals and the one based on Shift Regis-
ters that you build yourself.
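The same discipline in a textual language means funneling every write through one serialized routine. Here is a minimal Python sketch of the idea; it is only an analogy, since LabVIEW globals have no lock of their own.

    import threading

    _lock = threading.Lock()
    _controller_db = {}          # tag name -> dict of settings

    def update_controller(tag, **new_settings):
        """The single point of update. Callers never read-modify-write the
        whole record themselves, so the last-writer-wins race cannot occur."""
        with _lock:
            _controller_db.setdefault(tag, {}).update(new_settings)

    def read_controller(tag):
        with _lock:
            return dict(_controller_db.get(tag, {}))   # hand back a copy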
If you want to save yourself lots of effort, check out the DSC MMI
G Wizard. It lets you interactively create your MMI from a library of
standard display elements such as trends, faceplates, and alarms.
What’s amazing is that you don’t have to do any G programming at
all, if you don’t want to.
Display hierarchy
Naturally, you can mix elements from each of the various types of dis-
plays freely because LabVIEW has no particular restrictions. Many
commercial software packages have preformatted displays that make
setups easy, but somewhat less versatile. You could put several of these
fundamental types of displays on one panel; or better, you may want to
segregate them into individual panels that include only one type of dis-
play per panel. A possible hierarchy of displays is shown in Figure 18.17.
Such a hierarchy relies heavily on global variables for access to infor-
mation about every I/O point and on pop-up windows (VIs that open
when called).
Your displays need to be organized in a manner that is easily under-
stood by the operator and one that is easy to navigate. A problem that
is common to large, integrated control systems is that it takes too long
to find the desired information, especially when an emergency arises.
Therefore, you must work with the operators so that the display hierar-
chy and navigation process make sense. The organization in Figure 18.17
is closely aligned with that of the plant that it controls. Each subdis-
play gives a greater level of detail about, or a different view of, a par-
ticular subsystem. This method is generally accepted by operators and
is a good starting point.
The VI hierarchy can be mapped one for one with the display hierar-
chy by using pop-up windows (subVIs set to Show front panel when
called). Each VI contains a dispatcher loop similar to the one shown
later in this chapter (in Figure 18.29). Buttons on the panel select which
display to bring up. A Case structure contains a separate display subVI
in each frame. The utility VI, Which Button, multiplexes all the but-
tons into a number that selects the appropriate case. The dispatcher
loop runs in parallel with other loops in the calling VI.
Each subVI in turn is structured the same way as the top-level VI,
with a dispatcher loop calling other subVIs as desired. When it’s time
to exit a subVI and return to a higher level, an exit button is pressed,
terminating the subVI execution, closing its panel, and returning con-
trol to the caller. An extension of this exit action is to have more than
one button with which to exit the subVI. A value is returned by the
subVI indicating where the user wants to go next (Figure 18.18). The
caller’s dispatcher is then responsible for figuring out which subVI to
call next.
Making the actual buttons is the fun part. The simplest ones are the
built-in labeled buttons. Or you can paste in a picture that
is a little more descriptive. Another method is to use a transparent
boolean control on top of a graphic item on a mimic display, such as
a tank or reactor. Then all the operator has to do is click on the tank,
and a predefined display pops up. This is a good way to jump quickly
to the display for that part of the process.
Power Windows: Use the VI Server. You can do some really interesting
display tricks (and a lot more) with the VI Server functions, which you
find in the Application Control function menu. They provide ways to
manipulate VIs that go way beyond the regular VI Setup items, such as
Show front panel when called.
The VI Server lets you act on VIs in three basic ways: you can read
and set their properties with Property Nodes, invoke methods such as
Run with Invoke Nodes, and load and call them dynamically with the
Call By Reference node.
Figure 18.19 The VI Server opens a VI, sets the appearance of the panel, runs the VI, and
leaves it running. It’s a kind of remote control, and it works even over a network.
A VI can even close its own front panel by adding a little VI
Server code to its diagram. All you need to do is have it open a VI
reference (to itself), then have a Property Node set the Front Panel Open
property to False.
The Call By Reference node operates in a manner similar to wiring
the designated subVI directly into the diagram, right down to pass-
ing parameters to controls and receiving results back from indicators
on the subVI. The important difference is that the subVI is dynami-
cally loaded and unloaded, and which subVI is loaded is determined by
the path name that you supply. In order to make it possible for you to
dynamically substitute “any” VI into the Call By Reference node, you
must make the connector pane identical in all of the candidate VIs.
Next, you have to tell the VI Server what your standard connector
pane looks like. Beginning with an Open VI Reference function, pop
up on the type specifier VI input and choose Create Control. A
VI Refnum control will appear on the panel. Drag the icon of one of your
standard VIs onto that VI Refnum control, and the connector pane will
appear there. You have now created a class of VIs for the VI Server.
Drop a Call By Reference Node on the diagram, connect it to the
Open VI Reference function, and you’ll see your connector pane appear.
Then all you have to do is wire to the terminals as if the standard VI
were actually placed on the diagram. Figure 18.20 shows the complete
picture. In this example, the selected subVI runs in the background
without its panel open unless it’s configured to Show Front Panel When
Called. You can also add Property Nodes to customize the appearance,
as before.
The VI Server offers additional opportunities for navigation among
display VIs. To speed the apparent response time of your user interface
when changing from one MMI window to another, consider leaving sev-
eral of the VIs running at all times, but have only the currently active
one visible.
Figure 18.20 The Call By Reference node loads a VI from disk and
exchanges parameters with it as if the subVI were wired directly
into the diagram. This can save memory since the subVI is loaded
only when required.
When the user clicks a button to activate another window,
use an Invoke Node to open the desired panel. Its window will pop to
the front as fast as your computer can redraw the screen. A similar
trick can prevent users from getting lost in a sea of windows. If your
display hierarchy is complex and you allow more than one active win-
dow at a time, it’s possible for the user to accidentally hide one behind
the other. It’s very confusing and requires excessive training to keep
the user out of trouble. Instead, try to simplify your display navigation
scheme, or use Invoke Nodes to assist the user.
Figure 18.21 The sample valve is a horizontal slider control with the
valve picture pasted in as the handle. The control has no axis labels,
and the housing is colored transparent. The bottles are boolean indi-
cators, and the pipes are static pictures.
You can place a Pict Ring indicator for status on top of a transparent
boolean control for manual actuation. For a valve with high and low
limit switches, the ring indicator would show open, transit, closed, and
an illegal value where both limit switches are activated.
You can animate pipes, pumps, heaters, and a host of other process
equipment by using picture booleans or Pict Rings. To simulate motion,
create a picture such as a paddle wheel in a drawing program, then
duplicate it and modify each duplicate in some way, such as rotating
the wheel. Then, paste the pictures into the Pict Ring in sequence. Con-
nect the ring indicator to a number that cycles through the appropriate
range of values on a periodic basis. This is the LabVIEW equivalent of
those novelty movies that are drawn in the margins of books.
Perhaps the most impressive and useful display is a process mimic
based on a pictorial representation of your process. A simple line draw-
ing such as a P&ID showing important valves and instruments is easy
to create and remarkably effective. If your plant is better represented
as a photograph or other artwork, by all means use that. You can import
CAD drawings, scanned images, or images from a video frame grabber
to enhance a process display.
Property nodes allow you to dynamically change the size, posi-
tion, color, and visibility of controls and indicators. This adds a kind of
animation capability to LabVIEW. The Blinking attribute is especially
useful for alarm indicators. You can choose the on/off state colors and
the blink rate through the LabVIEW Preferences.
Figure 18.22 Clean up your diagram by combining controls and indicators into arrays or clus-
ters. Other frames in this sequence are the sources and sinks for data. The unbundler subVI
on the right extracts boolean values from an array or cluster. You could also use Unbundle By
Name.
Data Distribution
If you think about MMI display hierarchies, one thing you may won-
der about is how the many subVIs exchange current data. This is a
data distribution problem, and it can be a big one. A complex process
control system may have many I/O subsystems of different types, some
of which are accessed over a network. If you don’t use a coherent plan
of attack, performance (and perhaps reliability) is sure to suffer. In
homemade LabVIEW applications, global variables generally solve the
problem, if used with some caution.
Figure 18.23 Data distribution in a simple LabVIEW process control system. Control VIs write
new settings to an output queue, from which values are written to the hardware by an output
handler VI. An input handler VI reads data and stores it in a global variable for use by multiple
display VIs.
Most commercial process control systems, including DSC, use a real-
time database to make data globally accessible in real time. You can
emulate this in your LabVIEW program at any level of sophistication.
The general concept is to use centralized, asynchronous I/O handler
tasks to perform the actual I/O operations (translation: stand-alone
VIs that talk to the I/O hardware through the use of LabVIEW driv-
ers). Data is exchanged with the I/O handlers via one or more global
variables or queues (Figure 18.23). User-interface VIs and control VIs
all operate in parallel with the I/O handlers.
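Figure 18.23 translates roughly into the following Python sketch. The read_inputs and write_output functions are stand-ins for real drivers, and a dictionary and a queue play the roles of the input global variable and the output queue.

    import queue, time

    output_queue = queue.Queue()   # control VIs put (channel, value) requests here
    input_global = {}              # input handler publishes the latest readings here

    def read_inputs():
        # Stand-in for the real input driver; returns {tag: value}
        return {"LT-1": 50.0, "FT-1": 12.3}

    def write_output(channel, value):
        # Stand-in for the real output driver
        print(channel, "<-", value)

    def input_handler(stop_event, scan_interval=1.0):
        while not stop_event.is_set():
            input_global.update(read_inputs())
            time.sleep(scan_interval)

    def output_handler(stop_event):
        while not stop_event.is_set():
            try:
                channel, value = output_queue.get(timeout=1.0)
            except queue.Empty:
                continue
            write_output(channel, value)

    # A control VI does:  output_queue.put(("FV-301", 42.0))
    # A display VI does:  level = input_global.get("LT-1")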
As you might expect, process control has many of the same configu-
ration needs as a data acquisition system. Each input handler requires
information about its associated input hardware, channel assign-
ments, and so forth. This information comes from an I/O configuration
VI, which supplies the configurations in cluster arrays. If configura-
tion information is needed elsewhere in the hierarchy, it can be passed
directly in global variables, or it can be included in the data cluster
arrays produced by the input handlers.
Here’s an important tip regarding performance. Where possible, avoid
frequent access of string data, particularly in global variables. Strings
require extra memory management, and the associated overhead yields
relatively poor performance when compared to all other data types. Not
that you should never use strings, but try to use them sparingly and only
for infrequent access. For instance, when you open an operator display
panel, read channel names and units from the configuration database
and display them just once, not every cycle of a loop.
An output global variable holds the most recent value for each
output channel. Any control VI can update values in the output global.
The output handler checks for changes in any of the output channels and
updates the specific channels that have changed. You can combine the
handler and its associated global variable into an output scanner, as in
Figure 18.25, which works like an input scanner in reverse.
To test for changes of state on an array of output values, use a pro-
gram similar to the one in Figure 18.26. A Shift Register contains
the values of all channels from the last time the handler was called.
Each channel’s new value is compared with its previous value, and if
a change has occurred, the output is written. Another Shift Register is
used to force all outputs to update when the VI is loaded. You could also
add a force update or initialize boolean control to do the same thing.
Depending upon the complexity of your system, channel configuration
information may need to reach the output handler as well.
Figure 18.26 An output handler that only writes to an output channel when a change of value
is detected. The upper Shift Register stores the previous values for comparison, while the lower
Shift Register is used for initialization. Its value is False when the VI is loaded, forcing all out-
puts to update. Additional initialization features may be required if the program is stopped then
restarted.
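In text form, the change-detection logic of Figure 18.26 amounts to this Python sketch; write_analog_output stands in for the real output driver.

    _previous = None        # plays the role of the upper Shift Register
    _initialized = False    # plays the role of the lower Shift Register

    def output_handler(values, force_update=False):
        """Write only the channels whose values changed since the last call."""
        global _previous, _initialized
        if not _initialized or force_update or _previous is None:
            changed = range(len(values))       # first call: write everything
        else:
            changed = [i for i, v in enumerate(values) if v != _previous[i]]
        for i in changed:
            write_analog_output(i, values[i])
        _previous = list(values)
        _initialized = True

    def write_analog_output(channel, value):
        print("AO", channel, "<-", value)      # stand-in for the real driver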
* Historical note: In early 1991, Gary wrote a letter to the LabVIEW developers describ-
ing the elements and behaviors of a process control database, hoping that they would build
such a beast into LabVIEW. As National Instruments got more involved with the process
control industry, they realized how important such a capability really was. Audrey Har-
vey actually implemented the first prototype LabVIEW SCADA package in LabVIEW 3
sometime in 1993. It eventually became the basis for BridgeVIEW, now available as DSC.
Together, these scanners and globals solve the data distribution problem. The input scanner supplies up-to-
date values for all inputs, while the configuration part of your system
supplies such things as channel names, units, and other general infor-
mation, all available through global variables. Figures 18.27 and 18.28
show how we wrote a typical display/control VI by using the methods just described.
Figure 18.28 Diagram for the display/control VI. The Case structure and interval
timer limit how often the displays and controls are updated. A Sequence logically
groups the controls and indicators, eliminating clutter. The VI loops until the user
clicks a button to jump to another display.
Figure 18.29 The dispatcher loop in a higher-level VI that calls the display VI from the preceding
figures.
With the TCP/IP functions, one VI can act as a server for data that reappears in a Data Client elsewhere on the network. You
can have as many connections, or ports, open simultaneously as you
need. Several TCP/IP examples are included with LabVIEW, and you
can refer to other books that cover the subject in depth (Johnson 1998;
Travis 2000).
The VI Server includes transparent, multiplatform network access.
LabVIEW itself can be the target, or particular VIs can be manipulated
across the network in the same ways that we have already described.
This makes it possible, for instance, to use a Call By Reference node
on a Windows machine to execute a VI on a remote Macintosh run-
ning LabVIEW. An example VI shows how to do exactly that. Remem-
ber that you have to set up the VI Server access permissions in the
LabVIEW Options dialog.
DataSockets are a great cross-platform way to transfer many kinds
of data. A DataSocket connection consists of a client and a server that
may or may not reside on the same physical machine. You specify the
source or destination of the data through the familiar URL designator,
just like you’d use in your Web browser. The native protocol for Data-
Socket connections is DataSocket Transport Protocol (dstp).
To use this protocol, you must run a DataSocket server, an applica-
tion external to LabVIEW that is only available on the Windows plat-
form as of this writing. You can also directly access files, File Transfer
Protocol (ftp) servers, and OPC servers. Here are a few example URLs
that represent valid DataSocket connections:
■ dstp://servername.com/numericdata where numericdata is the
named tag
■ opc:\\machine\National Instruments.OPCModbus\Modbus Demo Box.4:0
■ ftp://ftp.natinst.com/datasocket/ping.wav
■ file:\\machine\mydata\ping.wav
Sequential Control
Every process has some need for sequential control in the form of inter-
locking or time-ordered events. We usually include manual inputs in
this area—virtual switches and buttons that open valves, start motors,
and the like. This is the great bastion of PLCs, but you can do an admi-
rable job in LabVIEW without too much work. We’ve already discussed
methods by which you read, write, and distribute data. The examples
that follow fit between the input and output handlers.
Figure 18.31 Simple interlock logic, comparing a ladder logic network with its LabVIEW
equivalent.
State machines
The state machine architecture is about the most powerful LabVIEW
solution for sequential control problems. A State Machine consists of
a Case structure inside of a While Loop with the Case selector carried
in a Shift Register. Each frame of the state machine’s Case structure
has the ability to transfer control to any other frame on the next itera-
tion or to cause immediate termination of the While Loop. This allows
you to perform operations in any order depending on any number of
conditions—the very essence of sequential control.
Figure 18.33 This state machine implements an operation that terminates when the tank is full
or when a time limit has passed. The Timeout frame of the Case (not shown) takes action if a
time-out occurs. The Upper Limit frame (also not shown) is activated when the upper limit for
tank level is exceeded.
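Rendered in Python instead of G, the state machine of Figure 18.33 behaves roughly as follows; the valve and sensor helpers are hypothetical stand-ins for real I/O.

    import time

    # Hypothetical process I/O stand-ins:
    def open_fill_valve():        print("fill valve open")
    def close_fill_valve():       print("fill valve closed")
    def tank_is_full():           return False
    def upper_limit_exceeded():   return False

    def fill_tank(timeout_s=60.0):
        state = "Start Fill"
        start = time.monotonic()
        while True:
            if state == "Start Fill":
                open_fill_valve()
                state = "Fill"                       # next frame
            elif state == "Fill":
                if upper_limit_exceeded():
                    state = "Upper Limit"            # abnormal-condition frame
                elif time.monotonic() - start > timeout_s:
                    state = "Timeout"                # time-limit frame
                elif tank_is_full():
                    break                            # normal termination
                # otherwise stay in the Fill state ("stay here")
            else:                                    # Timeout or Upper Limit
                close_fill_valve()                   # take corrective action
                break
            time.sleep(0.5)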
Initialization problems
Initialization is important in all control situations and particularly so
in batch processes that spend much of their time in the start-up phase.
When you first load your LabVIEW program, or when you restart for
some reason, the program needs to know the state of each input and
output to prevent a jarring process upset. For instance, the default val-
ues for all your front panel controls may or may not be the right values
to send to the output devices. A good deal of thought is necessary with
regard to this initialization problem. The DSC database includes user-
defined initialization for all I/O points, drastically reducing the amount
of programming you’ll need to do.
When your program starts, a predictable start-up sequence is neces-
sary to avoid output transients. Begin by scanning all the inputs. It’s
certainly a safe operation, and you probably need some input data in
order to set any outputs. Then compute and initialize any output val-
ues. If the values are stored in global variables, the problem is some-
what easier because a special initialization VI may be able to write
the desired settings without going through all the control algorithms.
Finally, call the output handler VI(s) to transfer the settings to the
output devices. Also remember to initialize front panel controls, as dis-
cussed earlier.
Control algorithms may need initialization as well. If you use any
uninitialized Shift Registers to store state information, add initializa-
tion. The method in Figure 18.34 qualifies as another Canonical VI.
The technique relies on the fact that an uninitialized boolean Shift
Register contains False when the VI is freshly loaded or compiled.
(Remember that once the VI has been run, this is no longer the case.)
The Shift Register is tested, and if it’s False, some initialization logic
inside the Case structure is executed. A boolean control called Initial-
ize is included to permit programmatic initialization at any time, such
as during restarts.
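An uninitialized Shift Register has no direct counterpart in a textual language, but the intent, first-call initialization plus an explicit Initialize input, can be sketched in Python like this:

    _first_call_done = False       # plays the role of the boolean Shift Register

    def control_algorithm(inputs, initialize=False):
        global _first_call_done
        if initialize or not _first_call_done:
            reset_internal_state()     # hypothetical: clear sums, history, etc.
            _first_call_done = True    # don't force init after the first time
        # ... normal operation goes here ...

    def reset_internal_state():
        pass                           # stand-in for whatever needs clearing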
You could also send flags to your control algorithms via dedicated
controls or global variables. Such a flag might cause the clearing of
accumulated state, such as an integrator sum, at a well-defined time.
Figure 18.34 The upper Shift Register, perhaps used for a control algorithm, is initialized by
the lower Shift Register at startup or by setting the Initialize control to True.
GrafcetVIEW—a graphical
process control package
There is an international standard for process control programming
called GRAFCET. It’s more popular in Europe, having originated
in France, but is found occasionally in the United States and other
countries. You usually need a special GRAFCET programming pack-
age, much like you need a ladder logic package to program a target
machine, which is usually a PLC. The language is the forerunner of
sequential function charts (SFCs) (IEC standard 848), which are
similar to flowcharts, but optimized for process control and PLCs in
particular (Figure 18.35). Emmanuel Geveaux and Francis Cottet at
LISI/ENSMA in conjunction with Saphir (all in France) have devel-
oped a LabVIEW package called GrafcetVIEW that allows you to do
GRAFCET programming on LabVIEW diagrams. It’s a natural match
because of the graphical nature of both languages.
One enhancement to LabVIEW that the authors had to make was
additional synchronization through the use of semaphores, a classic
software handshaking technique. In many control schemes, you may
need to keep several parallel sequences synchronized, and semaphores
make that practical.
Figure 18.35 GRAFCET notation: graphical representations of a step, an initial step, and a transition.
Figure 18.36 This is one of the GrafcetVIEW demonstration VIs. The panel graphic was pasted
in, and boolean controls and indicators were placed on top. Compare the LabVIEW diagram
with actual GRAFCET programming in Figure 18.35.
Continuous Control
Continuous control generally implies that a steady-state condition
is reached in a process and that feedback stabilizes the operation over
some prolonged period of time. Single-loop controllers, PLCs, and other
programmable devices are well suited to continuous control tasks, or
you can program LabVIEW to perform the low-level feedback algo-
rithms and have it orchestrate the overall control scheme. LabVIEW
has some advantages, particularly in experimental systems, because it’s
so easy to reconfigure. Also, you can handle tricky linearizations and
complex situations that are quite difficult with dedicated controllers.
Not to mention the free user interface.
Most processes use some form of the PID algorithm as the basis for
feedback control. Gary wrote the original PID VIs in the LabVIEW
PID Control Toolkit with the goal that they should be easy to apply
and easy to modify. Every control engineer has personal preferences as
to which flavor of PID algorithm should be used in any particular situ-
ation. You can easily rewrite the supplied PID functions to incorporate
your favorite algorithm. Just because he programmed this particular
set (which he personally trusts) doesn’t mean it’s always the best for
every application. The three algorithms in the package are
■ PID—an interacting positional PID algorithm with derivative action
on the process variable only
■ PID (Gain Schedule)—similar to the PID, but adds a tuning sched-
ule that helps to optimize response in nonlinear processes
■ PID with Autotuning—adds an autotuning wizard feature that
guides you through the loop tuning process
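For readers who want to see the arithmetic, here is a generic positional PID with derivative on the process variable only, sketched in Python. This is not the toolkit's code (the toolkit uses an interacting form and adds features such as nonlinear integral action); it simply spells out the basic calculation, with the proportional gain Kc and the integral and derivative times expressed in minutes.

    def make_pid(kc, ti_min, td_min, out_min=0.0, out_max=100.0):
        """Positional PID; derivative acts on the process variable only."""
        state = {"integral": 0.0, "last_pv": None}

        def pid(setpoint, pv, dt_s):
            dt_min = dt_s / 60.0                     # times are in minutes
            error = setpoint - pv
            if ti_min > 0.0 and dt_min > 0.0:
                state["integral"] += error * dt_min / ti_min
            d_term = 0.0
            if state["last_pv"] is not None and dt_min > 0.0:
                # Derivative on measurement: respond to PV changes, not SP changes
                d_term = -td_min * (pv - state["last_pv"]) / dt_min
            state["last_pv"] = pv
            out = kc * (error + state["integral"] + d_term)
            return max(out_min, min(out_max, out))   # clamp; real code also limits windup

        return pid

    # Example usage:
    # controller = make_pid(kc=4.85, ti_min=0.021, td_min=0.005)
    # mv = controller(setpoint=50.0, pv=48.7, dt_s=0.25)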
Figure 18.37 An on/off controller with hysteresis, which functions much like a thermostat. Add
it to your process control library.
You can use the math and logic functions in LabVIEW to implement
almost any other continuous control technique. For instance, on/off or
bang-bang control operates much like the thermostat in your refrig-
erator. To program a simple on/off control scheme, all you need to do
is compare the setpoint with the process variable and drive the output
accordingly. A comparison function does the trick, although you might
want to add hysteresis to prevent short-cycling of the output, as we did
in the Hysteresis On/Off Controller VI in Figure 18.37. This control
algorithm is similar to the ones in the PID Control Toolkit in that it
uses an uninitialized shift register to remember the previous state of
the output.
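In text form, the hysteresis logic amounts to the following Python sketch, a rough analogue of that VI rather than its actual code.

    def make_onoff_controller(hysteresis_pct, span, reverse=False):
        state = {"output": False}    # plays the role of the uninitialized shift register

        def controller(setpoint, pv):
            band = hysteresis_pct / 100.0 * span
            if state["output"]:
                # Output is on: stay on until PV rises past SP + band
                if pv > setpoint + band:
                    state["output"] = False
            else:
                # Output is off: turn on once PV falls below SP - band
                if pv < setpoint - band:
                    state["output"] = True
            return (not state["output"]) if reverse else state["output"]

        return controller

    # Example: a heater thermostat with 2 percent hysteresis on a 0-100 degC span
    # heat = make_onoff_controller(2.0, 100.0)
    # heater_on = heat(setpoint=25.0, pv=23.4)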
Figure 18.38 With the PID functions, you can map a control strategy (here, cascade/feedforward
surge tank level control) from a textbook diagram to a LabVIEW diagram.
Seems like a reasonable request, doesn’t it? Well, it turns out that
control engineers have been struggling with this exact problem for the
last 60 years with mixed results and a very wide range of possible solu-
tions, depending upon the exact character of the process, the available
instrumentation, and the performance required.
Figure 18.39 This is one way to implement a PID tuning schedule. Properly
tuned, it yields good control performance over a wide range of operating
conditions.
496 Chapter Eighteen
Scaling input and output values. All the functions in the PID Con-
trol Toolkit have a cluster input that allows you to assign setpoint,
process variable, and output ranges. You supply a low and a high value,
and the controller algorithm will normalize that to a zero to 100 per-
cent range or span, which is customarily used for PID calculations.
Here is an example. Assume that you use a multifunction board ana-
log input to acquire a temperature signal with a range of interest span-
ning 0 to 250°C. You should set the sp low value to zero, and the sp high
value to 250 in the PID option cluster. That way, both the setpoint and
process variable will cover the same range. On the output side, perhaps
you want to cover a range of 0 to 10 V. You should assign out low to zero
and out high to 10. That will keep the analog output values within a
sensible range.
Once everything is scaled, the PID tuning parameters will begin to
make sense. For instance, say that you have a temperature transmitter
scaled from −100 to +1200°C. Its span is 1300°C. If the controller has a
proportional band of 10 percent, it means that a measurement error of
130°C relative to the setpoint is just enough to drive the controller out-
put to saturation, if you have scaled your setpoint in a similar manner.
The PID tuning parameters call for a proportional gain value,
which is equal to 100 divided by the proportional band in percent. So
for this example, the proportional gain would be 10.
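The same scaling and gain arithmetic in a few lines of Python, using the numbers from these examples:

    def to_percent_of_span(value, low, high):
        """Normalize an engineering-unit value to the 0-100 percent span
        customarily used for PID calculations."""
        return 100.0 * (value - low) / (high - low)

    # Temperature transmitter ranged -100 to +1200 degC: span is 1300 degC.
    print(to_percent_of_span(30.0, -100.0, 1200.0))    # 10.0 percent of span

    # A proportional band of 10 percent corresponds to a proportional gain of
    # 100 / 10 = 10, so an error of 130 degC (10 percent of span) is enough to
    # drive the controller output to saturation.
    proportional_band = 10.0
    kc = 100.0 / proportional_band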
PID tricks and tips. All of the PID VIs contain a four-sample FIR low-pass
filter on the process variable. It’s there to reduce high-frequency noise,
which is generally a good thing. However, it also induces some additional
phase shift. This extra lag must be included in any process simulation.
Gary sometimes goes into the VI and rips out the filter subVI, and does
his own filtering as part of the data acquisition process.
Nonlinear integral action is used to help reduce integrator windup
and the associated overshoot in many processes. The error term that is
applied to the integrator is scaled according to the following expression:
Error = error / [1 + 10(error² / span²)]
where error is the setpoint minus the process variable and span is
the controller’s setpoint span. Thus, when the deviation or error is very
large, such as during process start-up, the integral action is reduced to
about 10 percent of its normal value.
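That scaling takes one line of code; here it is as a Python sketch.

    def nonlinear_integral_error(error, span):
        """Scale the error fed to the integrator so that very large deviations
        (for example, at start-up) contribute only about a tenth as much."""
        return error / (1.0 + 10.0 * (error / span) ** 2)

    # With the error equal to the full span, the integrator sees about 9 percent of it:
    print(nonlinear_integral_error(100.0, 100.0))   # about 9.09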
Because derivative action is effectively a high-pass filter, high-
frequency noise is emphasized, leading many users to abandon deriva-
tive action altogether. To combat such problems, derivative limiting is
sometimes used. It’s basically a low-pass filter that begins acting at
high frequencies to cancel the more extreme effects of the derivative.
In the PID Toolkit, there is no implicit limit on the derivative action.
However, if you leave the process variable filter in place, there’s a good
chance that it will serve as a nice alternative.
Internal timing means that the PID functions keep track of the elapsed time between each
execution. External timing requires you to supply the actual cycle time
(in seconds) to the PID function VI. If you are using the DAQ library,
the actual scan period for an acquisition operation is returned by the
Waveform Scan VI, for instance, and the value is very precise. Each
PID VI has an optional input called dt. If dt is set to a value less than
or equal to zero seconds (the default), internal timing is used. Positive
values are taken as gospel by the PID algorithm.
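To see why an accurate dt matters, here is a bare-bones positional PID step in Python. It is our own sketch, not the toolkit algorithm, but it accepts the measured cycle time in the same spirit as the dt input:

    def pid_step(setpoint, pv, state, kc, ti_min, td_min, dt_s):
        """One positional PID update. ti_min and td_min are in minutes, as on
        the toolkit panels; dt_s is the measured cycle time in seconds."""
        dt_min = dt_s / 60.0
        error = setpoint - pv
        integral = state["integral"] + error * dt_min
        derivative = (pv - state["last_pv"]) / dt_min    # derivative on measurement
        output = kc * (error + integral / ti_min - td_min * derivative)
        return output, {"integral": integral, "last_pv": pv}

    state = {"integral": 0.0, "last_pv": 2.0}
    out, state = pid_step(2.5, 2.0, state, kc=4.85, ti_min=0.021, td_min=0.005, dt_s=0.25)
    print(out)

Feed it a sloppy dt and the integral and derivative contributions are scaled incorrectly; supplying the true hardware-timed interval is exactly what the dt input is for.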
The DAQ analog I/O example Analog IO Control Loop (hw timed)
shows a data acquisition operation where you can place a
PID loop on one of the input channels driving an analog output. The
trick is to wire actual scan rate (in scans per second) from the DAQ
VI, AI Start, through a reciprocal function to the PID VI. This provides
the PID algorithm with an accurate time interval calibration. The data
acquisition board precisely times each data scan so you can be assured
that the While Loop runs at the specified rate. This example should run
reliably and accurately at nearly 1 kHz, including the display.
Trending
The process control industry calls graphs and charts trend displays.
They are further broken down into real-time trends and historical
trends, depending on the timeliness of the data displayed. Exactly
where the transition occurs, nobody agrees. Typically, a historical trend
displays data quite a long time into the past for a process that runs
continuously. A real-time trend is updated frequently and only displays
a fairly recent time history. Naturally, you can blur this distinction
to any degree through crafty programming. Historical trending also
implies archival storage of data on disk for later review while real-time
trending may not use disk files.
Real-time trends
The obvious way to display a real-time trend is to use a Waveform
Chart indicator. The first problem you will encounter with chart indi-
cators is that historical data is displayed only if the panel containing
the chart is showing at all times. As soon as the panel is closed, the old
data is gone. If the chart is updating slowly, it could take quite a long
time before the operator sees a reasonable historical record. A solution
is to write a program that stores the historical data in arrays and then
write the historical data to the chart with the Chart History item in
a Property node. You should take advantage of strip charts whenever
possible because they are simple and efficient and require no program-
ming on your part.
Figure 18.40 An example of the Circular Buffer real-time trending subVI in action.
Parallel While Loops are used, one to write sinusoidal data for 10 channels, and the
other to read a specified range from a channel for display. Note that the graphed data
can be scrolled back in time.
The data written here is a set of sine waves, one for each
of the 10 channels defined. The buffer is initialized to contain 1000
samples per channel in this example. Every two seconds, a fresh set of
values for all 10 channels is written.
The Data Scroll slider allows you to display a range of data from
the past without disturbing the data recording operation. The Points in Graph Window control sets how many of the buffered samples are displayed at one time.
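The heart of such a trending buffer is just a preallocated array and a write index. A minimal NumPy sketch of the idea (ours, not the Circular Buffer VI itself) looks like this:

    import numpy as np

    class CircularTrendBuffer:
        """Fixed-size per-channel history; the oldest samples are overwritten."""
        def __init__(self, channels, samples):
            self.data = np.zeros((channels, samples))
            self.samples = samples
            self.index = 0      # next write position
            self.count = 0      # number of valid samples so far

        def write(self, values):
            """Append one new reading per channel."""
            self.data[:, self.index] = values
            self.index = (self.index + 1) % self.samples
            self.count = min(self.count + 1, self.samples)

        def read(self, channel, n):
            """Return the last n samples of one channel, oldest first."""
            n = min(n, self.count)
            idx = (self.index - n + np.arange(n)) % self.samples
            return self.data[channel, idx]

    buf = CircularTrendBuffer(channels=10, samples=1000)
    for t in range(50):
        buf.write(np.sin(0.1 * t + np.arange(10)))   # simulated data, one sine per channel
    print(buf.read(channel=1, n=5))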
Historical trends
Memory-resident data is fine for real-time trending where speed is
your major objective. But long-term historical data needs to reside on
disk both because you want a permanent record and because disks
generally have more room. You should begin by making an estimate
of the space required for historical data in your application. Consider
the number of channels, recording rates, and how far back in time the
records need to extend. Also, the data format and content will make a
big difference in volume. Finally, you need to decide what means will be
used to access the data. Let’s look at some of your options.
All of the basic file formats discussed in Chapter 7, “Files,”—datalogs,
ASCII text, and proprietary binary formats—are generally applicable
to historical trending. ASCII text has a distinct speed disadvantage,
however, and is probably not suited to high-performance trending
where you want to read large blocks of data from files for periodic redis-
play. Surprisingly, many commercial PC-based process control applica-
tions do exactly that, and their plotting speed suffers accordingly. Do
you want to wait several minutes to read a few thousand data points?
Then don’t use text files. On the other hand, it’s nice being able to
directly open a historical trending file with your favorite spreadsheet,
so text files are certainly worth considering when performance is not
too demanding.
LabVIEW datalog files are a better choice for random access histori-
cal trending applications because they are fast, compact, and easy to
program. However, you must remember that only LabVIEW can read
such a format, unless you write custom code for the foreign application
or a LabVIEW translator program that writes a more common file for-
mat for export purposes.
The HIST package. A custom binary file format is the optimum solu-
tion for historical trending. By using a more sophisticated storage algo-
rithm such as a circular buffer or linked list on disk, you can directly
access data from any single channel over any time range. Gary wrote
such a package, called HIST, which he used to sell commercially.
The HIST package actually includes two versions: Fast HIST and
Standard HIST. Fast HIST is based on the real-time circular buffer
that we’ve just discussed, but adds the ability to record data to disk in
either binary or tab-delimited text format. This approach works well
for many data acquisition and process control packages. Based on a
single, integrated subVI, Fast HIST has very low overhead and is capa-
ble of recording 100 channels at about 80 Hz to both the memory-based
circular buffer and to disk. Utility VIs are included for reading data,
which is particularly important for the higher-performance binary for-
mat files.
Standard HIST is based on a suite of VIs that set up the files, store
data, and read data using circular buffers on disk rather than in
memory. At start-up time, you determine how many channels are to
be trended and how many samples are to be saved in the circular buf-
fers on disk. Sampling rates are variable on a per channel basis. The
use of circular buffers permits infinite record lengths without worry
of overflowing your disk. However, this also means that old data will
eventually be overwritten. Through judicious choice of buffer length
and sampling parameters and periodic dumping of data to other files,
you can effectively trend forever. (See Figure 18.41.)
Data compression is another novel feature of Standard HIST. There
are two compression parameters—fencepost and deadband—for
each channel (Figure 18.42). Fencepost is a guaranteed maximum
update time. Deadband is the amount by which a channel's value must change, relative to the last stored value, before a new point is recorded.
Figure 18.42 Illustration of the action of deadband-fencepost data
compression. Values are stored at guaranteed intervals deter-
mined by the fencepost setting, which is four time units in this
example. Also, a value is stored when it deviates from the last
stored value by greater than +/-deadband (measured in engineer-
ing units).
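The compression rule is easy to state in code. Here is a sketch (ours; HIST itself is a LabVIEW library) that keeps a point when it moves outside the deadband or when the fencepost interval expires:

    def compress(samples, deadband, fencepost):
        """Return (index, value) pairs, storing a point when the value moves
        more than +/-deadband from the last stored value, or when fencepost
        samples have elapsed since the last stored point."""
        stored = [(0, samples[0])]
        last_i, last_v = 0, samples[0]
        for i, v in enumerate(samples[1:], start=1):
            if abs(v - last_v) > deadband or (i - last_i) >= fencepost:
                stored.append((i, v))
                last_i, last_v = i, v
        return stored

    data = [10.0, 10.1, 10.05, 10.4, 10.41, 10.42, 10.43, 10.44]
    print(compress(data, deadband=0.2, fencepost=4))
    # -> [(0, 10.0), (3, 10.4), (7, 10.44)]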
Figure 18.43 The SPC Toolkit demo VI displays several of the built-in chart types and statistical
analysis techniques included in the package.
Alarms
A vital function of any process control system is to alert the opera-
tor when important parameters have deviated outside specified lim-
its. Any signal can be a source of an alarm condition, whether it’s a
measurement from a transducer, a calculated value such as an SQC
control limit, or the status of an output. Alarms are generally classified
Figure 18.45 A change of state can be detected by comparing a boolean’s present value with its
previous value stored in a Shift Register. Three different comparisons are shown here. This would
make a reasonable subVI for general use.
Global Alarm Queue VI. One way to handle distributed alarm generation
is to store alarm messages in a global queue. The approach is similar
to the one we used with an output handler VI. Multiple VIs can deposit
alarm messages in the queue for later retrieval by the alarm handler.
Figure 18.47 A global queue stores alarm messages. A message consists of a cluster containing a
string and a numeric. In Enqueue mode, elements are appended to the array carried in the Shift
Register. In Dequeue mode, elements are removed one at a time.
A queue guarantees that the oldest messages are handled first and
that there is no chance that a message will be missed because of a tim-
ing error. The alarm message queue in Figure 18.47 was lifted directly
from the Global Queue utility VI that was supplied with older versions
of LabVIEW. All we did is change the original numeric inputs and out-
puts to clusters. The clusters contain a message string and a numeric
that tells the alarm handler what to do with the message. You could
add other items as necessary. This is an unbounded queue which grows
without limits.
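In a textual language the same idea is a thread-safe FIFO shared by many producers and one consumer. A sketch using Python's standard queue module (our stand-in for the global-variable-based queue VI):

    import queue

    alarm_queue = queue.Queue()          # unbounded, like the shift-register version

    def post_alarm(message, dest_code):
        """Called from any acquisition or I/O handler loop."""
        alarm_queue.put((message, dest_code))

    def next_alarm():
        """Called by the alarm handler; returns None when the queue is empty."""
        try:
            return alarm_queue.get_nowait()
        except queue.Empty:
            return None

    post_alarm("TT-101 HIGH  253.2 degC", dest_code=0b011)
    post_alarm("PT-205 LOW     0.8 bar",  dest_code=0b001)
    print(next_alarm())
    print(next_alarm())
    print(next_alarm())                  # None; the queue has been drained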
Alarms can be generated anywhere in your VI hierarchy, but the I/O
handlers may be the best places to do so because they have full access
to most of the signals that you would want to alarm. You can combine
the output of an alarm detector with information from your configura-
tion database to produce a suitable alarm message based on the alarm
condition. For instance, one channel may only need a high-limit alarm
while another needs both high- and low-limit alarms. And the contents
of the message will probably be different for each case. All of these
dependencies can be carried along in the configuration. Once you have
formatted the message, deposit it in the global alarm queue.
Alarm Handler VI. Once the alarm messages are queued up, an alarm
handler can dequeue them asynchronously and then report or distribute
them as required. The alarm handler in Figure 18.48 performs two such
actions: (1) It reports each alarm by one of four means as determined
by the bits set in the Destination Code number that’s part of the mes-
sage cluster; and (2) it appends message strings to the Current Messages string indicator, which the Keep N Lines subVI trims to the most recent few lines.
Figure 18.48 An alarm handler VI. It reads messages from the global alarm queue and reports them
according to the bits set in the Destination code. Messages are appended to the Current Messages
string. The subVI, Keep N Lines, keeps several of the latest messages and throws away the older ones
to conserve space in the indicator for current messages.
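The dispatch logic amounts to testing bits and routing the message. Here is a rough Python equivalent (the bit assignments and file name are ours, purely for illustration):

    import time

    def keep_n_lines(text, n):
        """Retain only the last n lines of a growing message string."""
        return "\n".join(text.splitlines()[-n:])

    def handle_alarms(messages, current_text, n_keep=5):
        """Dispatch each (message, dest_code) pair according to the bits set in
        dest_code. Bit 0: on-screen text, bit 1: log file, bit 2: console."""
        for msg, dest in messages:
            stamped = time.strftime("%Y-%m-%d %H:%M:%S ") + msg
            if dest & 0b001:
                current_text = keep_n_lines((current_text + "\n" + stamped).strip("\n"), n_keep)
            if dest & 0b010:
                with open("alarm_log.txt", "a") as log:
                    log.write(stamped + "\n")
            if dest & 0b100:
                print("ALARM:", stamped)   # stand-in for a dialog box or printer
        return current_text

    text = handle_alarms([("TT-101 HIGH 253.2 degC", 0b101)], current_text="")
    print(text)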
The fun part of this whole alarm business is notifying the operator. You
can use all kinds of LabVIEW indicators, log messages to files, make
sounds, or use external annunciator hardware. Human factors special-
ists report that a consistent and well-thought-out approach to alarm
presentation is vital to the safe operation of modern control systems.
Everything from the choice of color to the wording of messages to the
physical location of the alarm readouts deserves your attention early
in the design phase. When automatic controls fail to respond properly,
it’s up to the operator to take over, and he or she needs to be alerted in
a reliable fashion.
Boolean indicators are simple and effective alarm annunciators.
Besides the built-in versions, you can paste in graphics for the True
and/or False cases for any of the boolean indicators. Attention-getting
colors or shapes, icons, and descriptive text are all valuable ideas for
alarm presentation.
You can log alarm messages to a file to provide a permanent record.
Each time an alarm is generated, the alarm handler can call a subVI
that adds the time and date to the message, then appends that string
to a preexisting text file. Useful items to include in the message are
the tag name, the present value, the nature of the alarm, and
whether the alarm has just occurred or has been cleared. This same file
can also be used to log other operational data and important system
events, such as cycle start/stop times, mode changes, and so forth. DSC
stores alarms and other events in an event log file. SubVIs are avail-
able to access and display information stored in the event log.
The same information that goes to a file can also be sent directly to
a printer. This is a really good use for all those old dot-matrix serial
printers you have lying around. Since the printer is just a serial instru-
ment, it’s a simple matter to use the Serial Port Write VI to send it
an ASCII string. If you want to get fancy, look in your printer’s manual
and find out about the special escape codes that control the style of the
output. You could write a driver VI that formats each line to emphasize
certain parts of the message. As a bonus, dot-matrix printers also serve
as an audible alarm annunciator if located near the operator’s station.
When the control room printer starts making lots of noise, you know
you’re in for some excitement.
Audible alarms can be helpful or an outright nuisance. Traditional
control rooms and DCSs had a snotty-sounding buzzer for some alarms,
and maybe a big bell or Klaxon for real emergencies. If the system
engineer programs too many alarms to trigger the buzzer, it quickly
becomes a sore point with the operators. However, sound does have its
place, especially in situations where the operator can’t see the display.
You could hook up a buzzer or something to a digital output device or
make use of the more advanced sound recording and playback capabili-
ties of your computer (see Chapter 20, “Data Visualization, Imaging,
and Sound”). LabVIEW has a set of sound VIs that work on all plat-
forms and serve as annunciators.
Commercial alarm annunciator panels are popular in industry
because they are easy to understand and use and are modestly priced.
You can configure these units with a variety of colored indicators that
include highly visible labels. They are rugged and meant for use on the
factory floor. Hathaway/Beta Corporation makes several models, rang-
ing from a simple collection of lamps to digitally programmed units.
That covers sight and sound; what about our other senses? Use your
imagination. LabVIEW has the ability to control most any actuator.
Maybe you could spray some odoriferous compound into the air or drib-
ble something interesting into the supervisor’s coffee.
Bibliography
Boyer, S. A.: SCADA: Supervisory Control and Data Acquisition, ISA Press, Raleigh, N.C.,
1993.
Bryan, Luis A., and E. A. Bryan: Programmable Controllers: Theory and Implementation,
Industrial Text Company, Atlanta, Ga., 1988.
Corripio, Armando B.: Tuning of Industrial Control Systems, ISA Press, Raleigh, N.C.,
1990.
Hughes, Thomas A.: Measurement and Control Basics, ISA Press, Raleigh, N.C., 1988.
Johnson, Gary (Ed.): LabVIEW Power Programming, McGraw-Hill, New York, 1998.
McMillan, Gregory K.: Advanced Temperature Control, ISA Press, Raleigh, N.C., 1995.
Montgomery, Douglas C.: Introduction to Statistical Quality Control, Wiley, New York,
1992.
Shinskey, F. G.: Process Control Systems, McGraw-Hill, New York, 1988.
Travis, Jeffrey: Internet Applications in LabVIEW, Prentice-Hall, Upper Saddle River,
N.J., 2000.
Wheeler, Donald J., and D. S. Chambers: Understanding Statistical Process Control,
2d ed., SPC Press, Knoxville, Tenn., 1992.
Chapter 19
Physics Applications
Physics is Phun, they told us in Physics 101, and, by golly, they were
right! Once we got started at Lawrence Livermore National Laboratory
(LLNL), where there are plenty of physics experiments going on,
we found out how interesting the business of instrumenting such
an experiment can be. One problem we discovered is just how little
material is available in the way of instructional guides for the budding
diagnostic engineer. Unfortunately, there isn’t enough space to do a
complete brain dump in this chapter. What we will pass along are a
few references: Bologna and Vincelli (1983) and Mass and Brueckner
(1965). In particular, we’ve gotten a lot of good tips and application
notes from the makers of specialized instruments (LeCroy Corporation
2000). Like National Instruments, they’re all in the business of selling
equipment, and the more they educate their customers, the more
equipment they are likely to sell. So, start by collecting catalogs and
look for goodies like sample applications inside. Then, get to know your
local sales representatives, and ask them how to use their products.
Having an experienced experimental physicist or engineer on your
project is a big help, too.
We’re going to treat the subject of physics in its broadest sense for the
purpose of discussing LabVIEW programming techniques. The common
threads among these unusual applications are that they use uncon-
ventional sensors, signal conditioning, and data acquisition equipment,
and often involve very large data sets. Even if you’re not involved in
physics research, you are sure to find some interesting ideas in this
chapter.
Remember that the whole reason for investing in automated data
acquisition and control is to improve the quality of the experiment. You
can do this by improving the quality and accuracy of the recorded data.
Special Hardware
In stark contrast to ordinary industrial situations, physics experiments
are, by their nature, involved with exotic measurement techniques and
apparatus. We feel lucky to come across a plain, old low-frequency pres-
sure transducer or thermocouple. More often, we’re asked to measure
microamps of current at high frequencies, riding on a 35-kV dc potential.
Needless to say, some fairly exotic signal conditioning is required. Also,
some specialized data acquisition hardware, much of which is rarely
seen outside of the physics lab, must become part of the researcher’s
repertoire. Thankfully, LabVIEW is flexible enough to accommodate
these unusual instrumentation needs.
Signal conditioning
High-voltage, high-current, and high-frequency measurements require
specialized signal conditioning and acquisition equipment. The sources
of the signals, though wide and varied, are important only as far as their
electrical characteristics are concerned. Interfacing the instrument to
the data acquisition equipment is a critical design step. Special ampli-
fiers, attenuators, delay generators, matching networks, and overload
protection are important parts of the physics diagnostician’s arsenal.
Vout = Vin × R2 / (R1 + R2); compensated when R1C1 = R2C2 (C1 represents stray capacitance).
Figure 19.1 General application of a high-voltage probe. Frequency compensation is
only needed for ac measurements. Always make sure that your amplifier doesn’t load
down the voltage divider.
safety devices that shut down and crowbar (short-circuit) the source of
high voltage. Cables emerging from high-voltage equipment must be
properly grounded to prevent accidental energization. At the input to
your signal conditioning equipment, add overvoltage protection devices
such as current-limiting resistors, zener diodes, Transorbs, metal-oxide
varistors (MOVs), and so forth. Protect all inputs, because improper
connections and transients have a way of creeping in and zapping your
expensive equipment . . . and it always happens two minutes before a
scheduled experiment.
Figure 19.3 This example shows an isolation amplifier with a high side
current shunt. The common-mode voltage is that which appears across
the system under test, and may exceed the capability of nonisolated
amplifiers.
and explosions. The detector may sense ion currents or some electri-
cal event in a direct manner, or it may use a multistep process, such
as the conversion of particle energy to light, then light to an electri-
cal signal through the use of a photomultiplier or photodiode. Making
quantitative measurements on fast, dynamic phenomena is quite chal-
lenging, even with the latest equipment, because there are so many
second-order effects that you must consider. High-frequency losses due
to stray capacitance, cable dispersion, and reflections from improperly
matched transmission lines can severely distort a critical waveform.
It’s a rather complex subject. If you’re not well versed in the area of
pulsed diagnostics, your best bet is to find someone who is.
There are a number of instruments and ancillary devices that you
will often see in high-frequency systems. For data acquisition, you have
a choice of acquiring the entire waveform or just measuring some par-
ticular characteristic in real time. Waveform acquisition implies the
use of a fast transient digitizer or digitizing oscilloscope, for which
you can no doubt find a LabVIEW driver to upload the data. Later in
this chapter, we’ll look at waveform acquisition in detail. Some experi-
ments, such as those involving particle drift chambers, depend only on
time interval or pulse coincidence measurements. Then you can use
a time interval meter, or time-to-digital converter (TDC), a device
that directly measures the time between events. Another specialized
instrument is the boxcar averager, or gated integrator. This is an
analog instrument that averages the signal over a short gate interval
and then optionally averages the measurements from many gates. For
periodic signals, you can use a boxcar averager with a low-speed ADC
to reconstruct very high frequency waveforms.
CAMAC
CAMAC (Computer Automated Measurement and Control) is an old,
reliable, and somewhat outdated standard for data acquisition in the
world of high-energy physics research. It remains a player in research
labs all over the world. LabVIEW has instrument drivers available for
use with a variety of CAMAC instruments and controllers, particularly
those from Kinetic Systems and LeCroy. Because other hardware plat-
forms have replaced CAMAC, we’re no longer going to say much about
it. If you want to read more, you can probably find a previous edition of
this book in which there’s a long CAMAC section.
functions that have been implemented over the years that aren’t always
commonly available.
VXI. VXI is taking over the title of workhorse data acquisition interface
for physics applications, particularly when high-performance transient
digitizers are required. It’s well planned and supported by dozens of
major manufacturers, and offers many of the basic functions you need.
And, of course, there is excellent LabVIEW support. Drivers are avail-
able for many instruments and PCs can be installed right in the crate.
NIM. Modules are either full- or half-height and plug into a powered NIM
bin with limited backplane interconnections. Because the CAMAC for-
mat was derived from the earlier NIM standard, NIM modules can
plug into a CAMAC crate with the use of a simple adapter.
Many modern instruments are still based on NIM modules, partic-
ularly nuclear particle detectors, pulse height analyzers, and boxcar
averagers such as the Stanford Research SR250. We still use them in
the lab because there are so many nice functions available in this com-
pact format, such as amplifiers, trigger discriminators, trigger fan-outs,
and clock generators.
Step-and-measure experiments
When you have a system that generates a static field or a steady-state
beam, you probably will want to map its intensity in one, two, or three
dimensions and maybe over time as well. We call these step-and-
measure experiments because they generally involve a cyclic procedure
that moves a sensor to a known position, makes a measurement, moves,
measures, and so forth, until the region of interest is entirely mapped.
Some type of motion control hardware is required, along with a
suitable probe or sensor to detect the phenomenon of interest.
Motion control systems. There are many actuators that you can use to
move things around under computer control—the actuators that make
robotics and numerically controlled machines possible.
For simple two-position operations, a pneumatic cylinder can be con-
trolled by a solenoid valve that you turn on and off from a digital out-
put port. Its piston moves at a velocity determined by the driving air
pressure and the load. Electromagnetic solenoids can also move small
objects back and forth through a limited range.
But more important for your step-and-measure experiments is the
ability to move from one location to another with high precision. Two
kinds of motors are commonly used in these applications, stepper
motors and servo motors. Think of a stepper motor as a kind of digital positioner: each pulse applied to its windings advances the shaft by one small, fixed increment.
Figure 19.4 A typical motion control system consists of a computer, a controller (internal or external), an
amplifier or power supply, and a motor, perhaps with an encoder or tachometer for feedback.
(The diagram chains the FlexMotion VIs: Initialize Controller.flx, Begin Program Storage.flx, Load Target Position.flx, Load Acceleration/Deceleration.flx, Load Velocity.flx, Start Motion.flx, and End Program Storage.flx.)
Figure 19.5 This example stores a motion sequence for later execution on a FlexMotion board. The
stored program can be quite complex, and it can run without intervention from LabVIEW.
Motion application: An ion beam intensity mapper. One lab that Gary
worked in had a commercially made ion beam gun that we used to
test the sputter rate of various materials. We had evidence that the
beam had a nonuniform intensity cross section (the xy plane), and that
the beam diverged (along the z axis) in some unpredictable fashion. In
order to obtain quality data from our specimens, we needed to charac-
terize the beam intensity in x, y, and z. One way to do this is to place
sample coupons (thin sheets of metal or glass) at various locations in
the beam and weigh them before and after a timed exposure to obtain
relative beam intensity values. However, this is a tedious and time-
consuming process that yields only low-resolution spatial data. Pre-
liminary tests indicated that the beam was steady and repeatable, so
high-speed motion and data acquisition was not a requirement.
Our solution was to use a plate covered with 61 electrically isolated
metal targets (Figure 19.6). The plate moves along the z axis, pow-
ered by a stepper motor that drives a fine-pitch lead screw. The stepper
motor is powered by a Kinetic Systems 3361 Stepper Motor Controller
CAMAC module. Since the targets are arranged in an xy grid, we can
Figure 19.6 The ion beam intensity mapper hardware. The plate holding 61 isolated
probes moves along the axis of the ion beam. Current at each probe is measured at
various positions to reconstruct a 3D picture of the beam intensity.
obtain the desired xyz mapping by moving the plate and taking data
at each location. Each target is connected to a negative bias supply
through a 100-Ω current-sensing resistor, so the voltage across each
resistor is 100 mV/mA of beam current. Current flows because the tar-
gets are at a potential that is negative with respect to the (positive) ion
gun, which is grounded. A National Instruments AMUX-64T samples
and multiplexes the resulting voltages into an NB-MIO-16 multifunc-
tion board. The bias voltage was limited to less than 10 V because that’s
the common-mode voltage limit of the board. We would have liked to
increase that to perhaps 30–50 V to collect more ions, but that would
mean buying 61 isolation amplifiers.
The front panel for this experiment is shown in Figure 19.7. Two
modes of scanning are supported: unidirectional or bidirectional (out
and back), selectable by a boolean control (Scan Mode) with pictures
pasted in that represent these motions. A horizontal fill indicator shows
the probe position as the scan progresses to cover the desired limits.
Examine the diagram of the main VI in Figure 19.8. An overall While
Loop keeps track of the probe location in a shift register, which starts
at zero, increases to a limit set by Total Range, then steps back to
zero if a bidirectional scan is selected. The VI stops when the scan is
finished, then returns the probe to its starting location. Arithmetic in
the upper half of the loop generates this stepped ramp.
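The position sequence itself is a few lines in any language. A Python sketch (ours), using the same control names as the panel:

    def scan_positions(total_range, step_size, bidirectional):
        """Return the probe positions: 0 up to total_range in step_size increments,
        then back down to 0 if a bidirectional scan is selected."""
        steps = int(round(total_range / step_size))
        forward = [i * step_size for i in range(steps + 1)]
        if bidirectional:
            return forward + forward[-2::-1]   # come back without repeating the end point
        return forward

    print(scan_positions(total_range=2.0, step_size=0.5, bidirectional=True))
    # -> [0.0, 0.5, 1.0, 1.5, 2.0, 1.5, 1.0, 0.5, 0.0]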
Figure 19.7 Front panel of the ion beam intensity scan VI. The
user runs the VI after setting all the parameters of the scan. A
horizontal fill indicator shows the probe position.
Figure 19.8 Diagram of the ion beam intensity scan VI. The While Loop executes once for each step of the
positional ramp. The sequence moves the probe and acquires data.
The Sequence structure inside the main loop has three frames.
Frame zero commands the motor controller to move to the new posi-
tion. Frame one polls the motor controller, awaiting the done flag, which
indicates that the move is complete. If the VI encounters a limit switch,
it tells the user with a dialog box and execution is aborted. Frame two
acquires data by calling a subVI that scans the 64 input channels at
high speed and averages the number of scans determined by a front
panel control. Finally, the position and the data are appended to a
data file in tab-delimited text format for later analysis. As always, the
Figure 19.9 Basic electrical connections for a Langmuir probe experiment. The probe
voltage is supplied by an isolation amplifier, programmable power supply, or function
generator with a large output voltage swing.
Figure 19.10 A simple V-I scan experiment. This is another way of generating low-speed,
recurrent ramps, using a precalculated array of output voltages. Values are sent to a digi-
tal to analog converter (DAC) repeatedly, and the response of the system under test is
measured and recorded after each output update. The current versus voltage graph acts
like a real-time xy chart recorder.
to write the data to a file. Outside the For Loops, you would open a suit-
able file and probably write some kind of header information. Remem-
ber to close the file when the VI is through.
In faster experiments, LabVIEW may not have enough time to
directly control the stimulus and response sequence. In that case, you
can use a couple of different approaches, based on plug-in data acquisi-
tion boards or external, programmable equipment.
The data acquisition library supports buffered waveform generation
as well as buffered waveform acquisition on plug-in boards with direct
memory access (DMA). Essentially, you create an array of data repre-
senting the waveform, then tell the DMA controller to write the array to
the DAC, one sample at a time, at a predetermined rate. Meanwhile, you
can run a data acquisition operation also using DMA that is synchro-
nized with the waveform generation. A set of examples is available in the LabVIEW DAQ example library.
The VI in Figure 19.11 uses the first method where DAC updates trig-
ger ADC scans. The controls allow you to specify the ramp waveform’s
start and stop voltages, number of points, and the DAC update rate.
For an AT-MIO-16F-5, hook OUT5 (pin 49, DAC update clock) to OUT2 (pin 46, which clocks each scan when doing interval scanning).
Figure 19.11 Using the Data Acquisition library and an AT-MIO-16F-5 board, this VI simultane-
ously generates a ramp waveform and acquires data from one or more analog inputs. Only one
external jumper connection is required; note that the pin number and signal name vary slightly
between models of DAQ boards.
Transient digitizers
A whole class of instruments, called transient digitizers, has been
developed over the years to accommodate pulsed signals. They are
essentially high-speed ADCs with memory and triggering subsystems,
and are often sold in modular form, such as CAMAC and VXI and more
recently as plug-in boards. Transient digitizers are like digital oscil-
loscopes without the display, saving you money when you need more
than just a few channels. In fact, the digital oscilloscope as we know it
today is somewhat of a latecomer. There were modular digitizers, sup-
ported by computers and with analog CRTs for displays, back in the
1960s. Before that, we used analog oscilloscopes with Polaroid cameras,
and digitizing tablets to convert the image to ones and zeros. Anyone
pine for the “good ol’ days”? The fact is, digitizers make lots of sense
today because we have virtual instruments, courtesy of LabVIEW. The
digitizer is just so much hardware, but VIs make it into an oscilloscope,
or a spectrum analyzer, or the world’s fastest strip-chart recorder.
If you can’t find a transient recorder that meets your requirements,
check out the digital oscilloscopes available from the major manufac-
turers. They’ve got amazing specifications, capability, and excellent
value these days. You might also consider plug-in oscilloscope boards
which have the basic functionality of a ’scope but without the user
interface. National Instruments offers a line of plug-in digital oscil-
loscope boards in PCI, PXI, and PCMCIA formats. At this writing,
boards are available with sampling rates up to 100 MHz at 8-bit reso-
lution and up to 16 Msamples of onboard memory, featuring analog
as well as digital triggering. Some boards are programmed through
NI-DAQ, like any other plug-in board, while others are handled by
the NI-SCOPE driver, which is IVI-based. In either case, example VIs
make it easy to get started and integrate these faster boards into your
application.
There are several other major manufacturers of oscilloscope boards that
supply LabVIEW drivers. Gage Applied Sciences offers a very wide
range of models, in ISA and PCI bus format for Windows machines.
Data storage and sampling. An important choice you have to make when
picking a transient recorder is how much memory you will need and
how that memory should be organized. Single-shot transient events
with a well-characterized waveshape are pretty easy to handle. Just
multiply the expected recording time by the required number of sam-
ples per second. With any luck, someone makes a digitizer with enough
memory to do the job. Another option is streaming data to your com-
puter’s memory, which is possible with plug-in oscilloscope boards. For
instance, a good PCI bus board can theoretically move about 60 million
1-byte samples per second to memory, assuming uninterrupted DMA
transfers. Similarly, VXI systems can be configured with gigabytes of
shared memory tightly coupled to digitizer modules. It appears that
the possibilities with these systems are limited only by your budget.
Far more interesting than simple transient events is the observation
of a long tail pulse, such as atomic fluorescence decay. In this case, the
intensity appears very suddenly, perhaps in tens of nanoseconds, decays
rapidly for a few microseconds, then settles into a quasi-exponential
Physics Applications 535
decay that tails out for the best part of a second. If you want to see the
detail of the initial pulse, you need to sample at many megahertz. But
maintaining this rate over a full second implies the storage of millions
of samples, with much greater temporal resolution than is really neces-
sary. For this reason, hardware designers have come up with digitiz-
ers that offer multiple time bases. While taking data, the digitizer
can vary the sampling rate according to a programmed schedule. For
instance, you could use a LeCroy 6810 (which has dual time bases)
to sample at 5 MHz for 100 µs, then have it switch to 50 kHz for the
remainder of the data record. This conserves memory and disk space
and speeds analysis.
Another common experiment involves the recording of many rapid-fire
pulses. For instance, the shot-to-shot variation in the output of a pulsed
laser might tell you something significant about the stability of the
laser’s power supplies, flash lamps, and so forth. If the pulse rate is too
high, you probably won’t have time to upload the data to your computer
and recycle the digitizer between pulses. Instead, you can use a digitizer
that has segmented memory. The Tektronix RTD720A is such a beast.
It can store up to 1024 events separated by as little as 5 µs, limited only
by the amount of memory installed. After this rapid-fire sequence, you
can upload the data at your leisure via GPIB. Other instruments with
segmented memory capability are various HP, Tektronix, and LeCroy
digital oscilloscopes, and the LeCroy 6810. Figure 19.12 shows how it’s
done with a National Instruments NI-5112 digital oscilloscope board,
which supports multiple-record mode, whereby a series of waveforms
is stored in onboard memory. The board takes 500 ns to rearm between
acquisitions, so you won’t lose too much data. This advanced capability
is only available through the NI-SCOPE driver.
Figure 19.12 With the right oscilloscope board and the NI-SCOPE driver, you can
rapidly acquire multiple waveform records in onboard memory. This bit of code
reads out the data into a 2D array.
generally quite accurate, the same cannot be said for high-speed ADC
scale factors. We like to warm up the rack of digitizers, then apply
known calibration signals, such as a series of precision dc voltages,
and store the measured values. For greater accuracy at high frequen-
cies, you should apply a pulse waveform of known amplitude that
simulates the actual signal as closely as possible. If you want to get
fancy, add an input switching system that directs all the input chan-
nels to a calibration source. Then LabVIEW can do an automatic cali-
bration before and after each experiment. You can either store the
calibration data and apply it to the data later or apply the pre-run
calibration corrections on the fly. Some plug-in digitizers, like the
NI-5112, have onboard calibrators that simplify this task. One pre-
caution: If you have several digitizers with 50-Ω inputs, make sure
they don’t overload the output of your calibrator. It’s an embarrass-
ing, but common, mistake.
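Applying the stored calibration afterward is just a per-channel gain-and-offset correction. A sketch (ours), assuming you recorded the digitizer's readings at two known calibrator voltages:

    import numpy as np

    def two_point_cal(reading_lo, reading_hi, actual_lo, actual_hi):
        """Return (gain, offset) such that actual = gain * reading + offset."""
        gain = (actual_hi - actual_lo) / (reading_hi - reading_lo)
        offset = actual_lo - gain * reading_lo
        return gain, offset

    # Example: the digitizer reported 0.012 V and 4.967 V for 0 V and 5 V inputs.
    gain, offset = two_point_cal(0.012, 4.967, 0.0, 5.0)
    raw = np.array([0.012, 1.25, 2.5, 4.967])
    print(gain * raw + offset)            # corrected readings in true volts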
What’s all this triggering stuff, anyhow? The experiment itself may be
the source of the main trigger event, or the trigger may be externally
generated. As an example of an externally triggered experiment, you
might trigger a high-voltage pulse generator to produce a plasma.
Experiments that run on an internally generated time base and experi-
ments that produce random events, such as nuclear decay, generally
are the source of the trigger event. The instruments must be armed
and ready to acquire the data when the next event occurs. All of these
situations generate what we might classify as a first-level, or primary,
trigger (Bologna and Vincelli 1983).
First-level triggers are distributed to all of your pulsed diagnostic
instruments to promote accurate synchronization. Some utility hard-
ware has been devised over the years to make the job easier. First, you
probably need a trigger fan-out, which has a single input and multiple
outputs to distribute a single trigger pulse to several instruments, such
as a bank of digitizers. A fanout module may offer selectable gating
(enabling) and inverting of each output. A commercial trigger fanout
CAMAC module is the LeCroy 4418 with 16 channels. Second, you
often need trigger delays. Many times, you need to generate a rapid
sequence of events to accomplish an experiment. Modern digital delays
are very easy to use. Just supply a TTL or ECL-level trigger pulse, and
after a programmed delay, an output pulse of programmable width and
polarity occurs. Examples of commercial trigger delays are the Stan-
ford Research Systems DG535 and the LeCroy 4222. Third, you may
need a trigger discriminator to clean up the raw trigger pulse from
your experimental apparatus. It works much like the trigger controls
on an oscilloscope, permitting you to select a desired slope and level on
the incoming waveform.
A fine point regarding triggering is timing uncertainty, variously
known as trigger skew or jitter. Sometimes you need to be certain
as to the exact timing relationship between the trigger event and the
sampling clock of the ADC. A situation where this is important is wave-
form averaging. If the captured waveforms shift excessively in time,
then the averaged result will not accurately represent the true wave-
shape. Most transient digitizers have a sampling clock that runs all
the time. When an external trigger occurs, the current sample number is latched; since the trigger is asynchronous to that clock, the recorded trigger position can be uncertain by up to one sample period.
Figure 19.13 Typical application of a trigger discriminator, fanout, and delays used in
a pulsed laser experiment.
Spurious triggers cause you to record many nonevents. If this occurs very often,
you end up with a disk full of empty baselines and just a few records
of good data. That’s okay, except for experiments where each event con-
sists of 85 megabytes of data. The poor person analyzing the data (you?)
is stuck plowing through all this useless information.
The solution is to use a hardware- or software-based, second-level
trigger that examines some part of the data before storage. For instance,
if you have 150 digitizers, it might make sense to write a subVI that
loads the data from one or two of them and checks for the presence of
some critical feature—perhaps a critical voltage level or pulse shape
can be detected. Then you can decide whether to load and store all the
data or just clear the digitizer memories and rearm.
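In code, the second-level test is nothing more than a quick feature check on one or two channels before you commit to the full readout. A sketch (ours; read_channel, read_all, and store stand in for your digitizer driver and storage routines):

    import numpy as np

    def looks_like_an_event(waveform, threshold, min_samples):
        """Crude second-level trigger: did the signal exceed the threshold
        for at least min_samples samples?"""
        return np.count_nonzero(np.abs(waveform) > threshold) >= min_samples

    def maybe_store(read_channel, read_all, store):
        """Read one inexpensive channel first; only read out and store everything
        if the quick check passes, otherwise just rearm."""
        quick_look = read_channel(0)
        if looks_like_an_event(quick_look, threshold=0.5, min_samples=10):
            store(read_all())
            return True
        return False

    # Tiny self-test with simulated data in place of real driver calls:
    fake = lambda ch=0: np.concatenate([np.zeros(100), 0.8 * np.ones(20)])
    print(maybe_store(fake, lambda: {"ch0": fake()}, store=lambda data: None))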
If there is sufficient time between events, the operator might be able
to accept or reject the data through visual inspection. By all means, try
to automate the process. However, often there is barely time to do any
checking at all. The evaluation plus recycle time may cause you to miss
the actual event. If there is insufficient time for a software pattern rec-
ognition program to work, you can either store all the data (good and
bad) or come up with a hardware solution or perhaps a solution that
uses fast, dedicated microprocessors to do the pattern recognition.
Many modern digital oscilloscopes have a trigger exclusion feature
that may be of value in physics experiments. You define the shape of a
good pulse in terms of amplitude and time, typically with some tolerance
on each parameter. After arming, the ’scope samples incoming wave-
forms and stores any out-of-tolerance pulses. This is good for experi-
ments where most of the data is highly repetitive and nearly identical,
but an occasional different event of some significance occurs.
Figure 19.14 Averaging waveforms acquired from a digitizer that has segmented memories.
Be really careful with array indexing; this example shows that LabVIEW has a consistent
pattern with regards to indexes. The first index is always on top, both on the panel and on
the diagram.
Figure 19.15 This shows an ensemble averager VI. It accumulates (sums) the data from mul-
tiple waveforms, then calculates the average on command. It uses uninitialized shift registers
to store the data between calls.
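The uninitialized-shift-register trick corresponds to ordinary persistent state in a textual language. Here is the same averager sketched in Python (ours, not a translation of the VI):

    import numpy as np

    class EnsembleAverager:
        """Accumulate (sum) successive waveforms; return the average on demand.
        The instance attributes play the role of the uninitialized shift registers."""
        def __init__(self):
            self.total = None
            self.count = 0

        def accumulate(self, waveform):
            waveform = np.asarray(waveform, dtype=float)
            self.total = waveform if self.total is None else self.total + waveform
            self.count += 1

        def average(self):
            return self.total / self.count

    avg = EnsembleAverager()
    for segment in range(32):                       # e.g., 32 digitizer segments
        noisy = np.sin(np.linspace(0, 2 * np.pi, 50)) + np.random.normal(0, 0.3, 50)
        avg.accumulate(noisy)
    print(avg.average()[:5])                        # noise is reduced by about sqrt(32)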
Figure 19.16 Using the ensemble averager drastically simplifies the diagram and cuts memory
usage while yielding the same results as before.
Figure 19.18 The Signal Recovery by BP Filter VI demonstrates a way of measuring ampli-
tude in a selected frequency band. For this screen shot, I swept the input frequency around the
100-Hz region while running the VI.
signal reliably. To see how bad the problem can be, try the Pulse Demo
VI from the Analysis examples. Turn the noise way up, and watch the
pulse disappear from view. Note that the addition of some signal con-
ditioning filters and amplifiers, plus proper grounding and shielding,
may remove some of the interference from your signal and is always a
worthwhile investment.
In experiments that operate with a constant carrier or chopping fre-
quency, the signal amplitude can be recovered with lock-in amplifiers,
which are also called synchronous amplifiers or phase-sensitive
detectors (PSDs). The basic principle revolves around the fact that
you need only observe (demodulate) the range of frequencies within
a narrow band around the carrier frequency. To do this, the lock-in
requires a reference signal—a continuous sine or square wave that is
perfectly synchronized with the excitation frequency for the experi-
ment. The lock-in mixes (multiplies) the signal with the reference,
resulting in an output containing the sum and difference frequen-
cies. Since the two frequencies are equal, the difference is zero, or dc.
By low-pass-filtering the mixer output, you can reject all out-of-band
signals. The bandwidth, and hence the response time, of the lock-in
is determined by the bandwidth of the low-pass filter. For instance,
a 1-Hz lowpass filter applied to our example with the 10-kHz signal
results in (approximately) a 1-Hz bandpass centered at 10 kHz. That’s
a very sharp filter, indeed, and the resulting signal-to-noise ratio can
be shown to be superior to any filter (Meade 1983).
Lock-ins can be implemented by analog or digital (DSP) techniques.
It’s easiest to use a commercial lock-in with a LabVIEW driver, avail-
able from Stanford Research or EG&G Princeton Applied Research.
If you want to see a fairly complete implementation of a lock-in in
LabVIEW, check out the example in the lock-in library. For a simple
no-hardware demonstration, we created a two-phase lock-in with
an internal sinusoidal reference generator. This example VI, Two-
phase Lock-in Demo, simulates a noisy 8-kHz sine wave, and you
can change the noise level to see the effect on the resulting signal-to-
noise ratio.
The DAQ 2-Phase Lock-in VI is a real, working, two-phase DSP
lock-in where the signal and reference are acquired by a plug-in board.
The reference signal can be a sine or square wave, and the amplitude
is unimportant. SubVIs clean up the reference signal and return a
quadrature pair of constant-amplitude sine waves. To generate these
quadrature sine waves, we had to implement a phase-locked loop (PLL)
in LabVIEW. We used a calibrated IIR low-pass filter as a phase shift
element. When the filter’s cutoff frequency is properly adjusted, the
input and output of the filter are exactly 90 degrees out of phase. This
condition is monitored by a phase comparator, which feeds an error
signal back to the low-pass filter to keep the loop in lock.
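For reference, the core of the demodulation takes only a few lines when the quadrature references are generated numerically. A NumPy sketch of a two-phase lock-in (ours, not the shipping example; the moving-average low-pass stands in for a proper filter):

    import numpy as np

    fs, f_ref = 100_000.0, 8_000.0        # sample rate and reference frequency, Hz
    t = np.arange(50_000) / fs
    signal = 0.2 * np.sin(2 * np.pi * f_ref * t + 0.6) + np.random.normal(0, 1.0, t.size)

    # Mix with quadrature references, then low-pass (a simple moving average here).
    x = signal * np.sin(2 * np.pi * f_ref * t)
    y = signal * np.cos(2 * np.pi * f_ref * t)
    window = np.ones(5000) / 5000.0       # roughly a 50-ms averaging window
    x_dc = np.convolve(x, window, mode="valid").mean()
    y_dc = np.convolve(y, window, mode="valid").mean()

    amplitude = 2.0 * np.hypot(x_dc, y_dc)    # recovered peak amplitude, about 0.2
    phase = np.arctan2(y_dc, x_dc)            # recovered phase, about 0.6 rad
    print(amplitude, phase)

Even with the noise several times larger than the signal, the recovered amplitude and phase come out close to the true values, which is the whole point of synchronous detection.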
546 Chapter Nineteen
buffering of data, such as the HP3852. Either way, the hardware continues
collecting and storing data to a new buffer even while the computer is
busy analyzing the previous buffer.
If you can’t afford to throw away any of your data, then data reduc-
tion on the fly might help reduce the amount of disk space required.
It will also save time in the post-experiment data reduction phase. Is
your data of a simple, statistical nature? Then maybe all you really
need to save are those few statistical values, rather than the entire
data arrays. Maybe there are other characteristics—pulse parameters,
amplitude at a specific frequency, or whatever—that you can save in
lieu of raw data. Can many buffers of data be averaged? That reduces
the stored volume in proportion to the number averaged. If nothing
else, you might get away with storing the reduced data for every wave-
form, plus the raw data for every Nth waveform. Decimation is another
alternative if you can pick a decimation algorithm that keeps the criti-
cal information, such as the min/max values over a time period. As you
can see, it’s worth putting some thought into real-time analysis. Again,
there may be performance limitations that put some constraints on how
much real-time work you can do. Be sure to test and benchmark before
committing yourself to a certain processing scheme.
Tip 1: Use cyclic processing rather than trying to load all channels at
once. Once you see this concept, it becomes obvious. As an example,
here is a trivial restructuring of a data acquisi-
tion program. In Figure 19.19A, 10 waveforms are acquired in a For
Loop, which builds a 10-by-N 2D numeric array. After collection, the
2D array is passed to a subVI for storage on disk. The 2D array is
likely to be a memory burner. Figure 19.19B shows a simple, but effec-
tive change. By putting the data storage subVI in the loop, only one
1D array of data needs to be created, and its data space in memory
is reused by each waveform. It’s an instant factor-of-10 reduction in
memory usage.
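The same restructuring looks like this in a textual language: process or store each record inside the loop instead of accumulating everything first (acquire_one here is a stand-in for your acquisition subVI):

    import numpy as np

    def acquire_one(n=100_000):
        """Stand-in for acquiring one waveform from the hardware."""
        return np.random.normal(size=n)

    # (A) Memory burner: all 10 waveforms are resident at once as a 2D array.
    all_data = np.array([acquire_one() for _ in range(10)])
    np.save("all_data.npy", all_data)

    # (B) Cyclic processing: only one waveform is live at any moment.
    with open("cyclic.bin", "wb") as f:
        for _ in range(10):
            acquire_one().tofile(f)       # store it, then let it go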
Of course, you can’t always use cyclic processing. If you need to per-
form an operation on two or more of the waveforms, they have to be
concurrently memory-resident. But keep trying: If you only need to
have two waveforms in memory, then just keep two, and not the whole
bunch. Perhaps you can use shift registers for temporary storage and
some logic to decide which waveforms to keep temporarily.
Tip 2: Break the data into smaller chunks. Just because the data source
contains a zillion bytes of data, you don’t necessarily have to load it
all at once. Many instruments permit you to read selected portions of
memory, so you can sequence through it in smaller chunks that are
easier to handle. The LeCroy 6810 CAMAC digitizer and most digital
oscilloscopes have such a feature, which is supported by the LabVIEW
driver. If you’re using a plug-in board, the DAQ VI AI Read permits
you to read any number of samples from any location in the acquisition
buffer. Regardless of the data source, it’s relatively simple to write a
looping program that loads these subdivided buffers and appends them
to a binary file one at a time. Similarly, you can stream data to disk and
then read it back in for analysis in manageable chunks.
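A sketch of the chunked transfer (ours; read_block stands in for the instrument-driver or AI Read call that accepts a starting index and a sample count):

    def transfer_in_chunks(read_block, total_samples, chunk, out_path):
        """Move a large acquisition to disk one manageable block at a time."""
        with open(out_path, "ab") as f:
            for start in range(0, total_samples, chunk):
                n = min(chunk, total_samples - start)
                f.write(bytes(read_block(start, n)))

    # Self-test with a fake 1-Mbyte "digitizer memory" in place of the driver call:
    memory = bytearray(range(256)) * 4096
    transfer_in_chunks(lambda start, n: memory[start:start + n],
                       len(memory), 65536, "dump.bin")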
You have some control over the LabVIEW memory manager through
the Options dialog, in the Performance and Disk items. The first
option is Deallocate memory as soon as possible. When a subVI finishes
execution, this option determines whether or not its local buffers will
be deallocated immediately. If they are deallocated, you gain some free
memory, but performance may suffer because the memory will have to
be reallocated the next time the subVI is called. On the Macintosh, there
is an option called Compact memory during execution. When enabled,
LabVIEW tells the Macintosh memory manager to defragment memory
every 30 seconds. Defragmented memory enhances speed when allocat-
ing new buffers. However, this compacting operation usually induces
a lull in performance every 30 seconds which may be objectionable in
real-time applications.
If your application uses much more memory than you think it should,
it’s probably because of array duplication. Try to sift through these
rules (study the application note for more) and figure out where you
might modify your program to conserve memory.
Tip 6: Use memory diagnostics. There are some ways to find out how
much memory your VIs are actually using. Please note that these tech-
niques are not absolutely accurate. The reasons for the inaccuracy are
generally related to the intrusiveness of the diagnostic code and the
complexity of the memory management system. But as a relative mea-
sure, they are quite valuable.
Begin by using the VI Properties dialog in the File menu to find out
how much memory is allocated to your VI’s panel, diagram, code, and
data segments. Of primary interest is the data segment, because that’s
where your buffers appear. Immediately after compiling a VI, the data
segment is at its minimum. Run your VI, and open the VI Properties dia-
log again to see how much data is now allocated. You can try some simple
tests to see how this works. Using an array control, empty the array and
check the data size. Next, set the array index to a large number (such as
10,000), enter a value, and check the data size again. You should see it
grow by 10,000 times the number of bytes in the data type.
The most extensive memory statistics are available through the
Profiling feature of LabVIEW. Select Profile VIs from the Tools >>
Advanced menu, and start profiling with the Profile Memory Usage
and Memory Usage items selected (Figure 19.21). Run your VIs, take
Physics Applications 553
Figure 19.21 The Profiling window gives you timing and memory usage statistics on your application. The first
VI in the list is the biggest memory user; that’s the place to concentrate your efforts.
Bibliography
Accelerator Technology Division AT-5: Low-Level RF LabVIEW Control Software User’s
Manual, Los Alamos National Laboratory, LA-12409-M, 1992. (Available from National
Technical Information Service, Springfield, Va.)
Bogdanoff, D. W., et al.: “Reactivation and Upgrade of the NASA Ames 16 Inch Shock
Tunnel: Status Report,” American Institute of Aeronautics and Astronautics, AIAA 92-0327.
Bologna, G., and M. I. Vincelli (Eds.): “Data Acquisition in High-Energy Physics,” Proc.
International School of Physics, North-Holland, Amsterdam, 1983.
Mass, H. S. W., and Keith A. Brueckner (Eds.): Plasma Diagnostic Techniques, Academic
Press, New York, 1965.
Meade, M. L.: Lock-in Amplifiers: Principles and Applications, Peter Peregrinus, Ltd.,
England, 1983. (Out of print.)
2000 LeCroy Research Instrumentation Catalog, LeCroy Corporation, New York, 2000.
(www.lecroy.com.)
Chapter 20
Data Visualization, Imaging, and Sound
Graphing
The simplest way to display lots of data is in the form of a graph.
We’ve been drawing graphs for about 200 years with pens and paper,
but LabVIEW makes graphs faster and more accurate. Most impor-
tant, they become an integral part of your data acquisition and analy-
sis system.
Part of your responsibility as a programmer and data analyst is
remembering that a well-designed graph is intuitive in its meaning
and concise in its presentation. In his book The Visual Display of Quan-
titative Information (1983), Edward Tufte explains the fundamentals
of graphical excellence and integrity and preaches the avoidance
of graphical excess that tends to hide the data. We’ve been appalled
by the way modern presentation packages push glitzy graphics for
their own sake. Have you ever looked at one of those 256-color three-
dimensional vibrating charts and tried to actually see the information?
Here are your major objectives in plotting data in graphical format:
■ Induce the viewer to think about the substance rather than the
methodology, graphic design, the technology of graphic production,
or something else.
■ Avoid distorting what the data has to say (for instance, by truncating
the scales, or plotting linear data on logarithmic scales).
The worst thing you can do is overdecorate a graph with what Tufte
calls chartjunk: graphical elements that may catch the eye but tend to
hide the data. You don't want to end up with a graphical puzzle. Simply
stated, less is more (Figure 20.1). Though this is more of a problem in
presentation applications, there are some things to avoid when you set
up LabVIEW graphs:
■ High-density grid lines—they cause excessive clutter.
■ Oversized symbols for data points, particularly when the symbols
tend to overlap.
■ Colors for lines and symbols that contrast poorly with the back-
ground.
■ Color combinations that a color-blind user can’t interpret (know thy
users!).
■ Too many curves on one graph.
■ Insufficient numeric resolution on axis scales. Sometimes scientific
or engineering notation helps (but sometimes it hinders—the num-
ber 10 is easier to read than 1.0E1).
■ Extra bounding boxes, borders, and outrageous colors. Use the
coloring tool, the transparent (T) color, and the control editor to hide
them and simplify your graphs.
Figure 20.1 Sometimes, less is more when presenting data in graphs. Use the coloring tool, the
control editor, and the various pop-up menus to customize your displays. Axis values are only
required for quantitative graphing, and controls and legends should not be displayed unless
needed; maybe all you really need is the data itself.
Figure 20.2 The Waveform Graph is polymorphic. This example shows some of the basic data types that
it accepts for the making of single-variable plots.
Figure 20.3 Three ways of making multiple plots with a Waveform Graph when the number of data points in
each plot is equal. Note that the Build Array function can have any number of inputs. Also, the 2D array could
be created by nested loops.
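If you find text-based types easier to visualize, here is a rough Python sketch of the same single-plot and multiplot shapes shown in Figures 20.2 and 20.3 (illustrative only; LabVIEW itself uses arrays, clusters, and the waveform data type):

# Textual analogs of the Waveform Graph data types (illustrative only).
import math

# A plain 1D array of Y values: X starts at 0 with a spacing of 1.
y = [math.sin(2 * math.pi * i / 128) for i in range(128)]

# A cluster of Xo, delta-X, and the Y array sets the time axis explicitly.
single_plot = {"x0": 5.0, "dx": 10.0, "y": y}

# Multiplot as a 2D array: every row (plot) must have the same length.
two_plots_2d = [y, [0.5 * v for v in y]]

# Multiplot as an array of clusters: plots may have different lengths.
two_plots_clusters = [
    {"x0": 0.0, "dx": 1.0, "y": y},
    {"x0": 0.0, "dx": 127.0, "y": [0.2, 0.2]},  # a two-point line, as in Figure 20.5
]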
In Figure 20.5, the mean-value line needs only two data points to define
the second plot. The trick is to use some math to force those two data points
to come out at the left and right edges of the plot.
So far, we’ve been looking at graphs with simple, linear x axes which
are nice for most time-based applications, but not acceptable for para-
metric plots where one variable is plotted as a function of another.
For that, use the XY Graph. Figure 20.6 shows a way of building one of
the data structures that defines a multiplot xy graph. A plot is defined
Figure 20.4 Making multiple plots where each plot has a different number of samples. Build Cluster Array
performs the same function as the combination in the dashed rectangle. Data could consist of waveforms
instead of the 1D arrays used here.
Figure 20.5 Yet another data structure for multiple plots accepted by the
Waveform Graph. In this example, the sine wave is plotted with a line
representing the mean value. This second plot has only two data points.
Figure 20.6 One of the data structures for an XY Graph is shown here; the example plots a spiral
(X = i*sin(i), Y = i*cos(i)) and a circle (X = k*sin(i), Y = k*cos(i)). It probably makes sense to use this
structure when you gather (x, y) data-point pairs one at a time. This is the way it had to be done in
LabVIEW 2.
Figure 20.7 Similar to the last XY Graph example (Figure 20.6), but
using the other available data structure. This one is convenient when
you have complete arrays of x and y data.
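In the same illustrative spirit, the two XY Graph structures of Figures 20.6 and 20.7 might be sketched in Python as follows (LabVIEW uses clusters and arrays, of course, not tuples and lists):

# Two ways to describe an XY Graph plot (textual sketch only).
import math

n = 150
xs = [i * math.sin(i) for i in range(n)]   # the spiral from the example
ys = [i * math.cos(i) for i in range(n)]

# Figure 20.6 style: one (x, y) cluster per data point, convenient when
# points are gathered one at a time.
plot_as_points = list(zip(xs, ys))

# Figure 20.7 style: one cluster holding the complete X array and Y array,
# convenient when whole arrays already exist.
plot_as_arrays = (xs, ys)

# A multiplot XY Graph takes an array of plots in either form.
multiplot = [plot_as_points]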
The data history could also be managed using the Circular Buffer VI,
described in Chapter 18, “Process Control Applications.”
Bivariate data
Beyond the ordinary Cartesian data we are so familiar with, there is
also bivariate data, which is described by a function of two variables,
such as z = f(x, y). Typical examples you might encounter are a temperature
map of a surface, the pixel intensities of an image, or a set of spectra
acquired over time.
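As a small illustrative sketch (our own example, in Python), bivariate data is naturally stored as a 2D array where element [i][j] holds z evaluated at (x[i], y[j]):

# Build a 2D array z[i][j] = f(x[i], y[j]), the natural storage format
# for bivariate data (illustrative only).
import math

def f(x, y):
    # Any function of two variables; here, a decaying ripple.
    return math.exp(-(x * x + y * y) / 10.0) * math.cos(x) * math.sin(y)

x_values = [i * 0.5 for i in range(20)]
y_values = [j * 0.5 for j in range(20)]
z = [[f(x, y) for y in y_values] for x in x_values]
# z is now ready for an intensity plot or a 3D surface graph.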
Multivariate data
Beyond bivariate data is the world of multivariate data. Statisticians
often work in a veritable sea of data that is challenging to present because
they can rarely separate just one or two variables for display. For instance,
a simple demographic study may have several variables: location, age,
marital status, and alcohol usage. All of the variables are important, yet
it is very difficult to display them all at once in an effective manner.
The first thing you should do with multivariate data is analyze it
before attempting a formal graphical presentation. In the
demographic study example, maybe the location variable turns out to
be irrelevant; it can be discarded without loss of information (although
you would certainly want to tell the reader that location was evalu-
ated). Ordinary xy graphs can help you in the analysis process, perhaps
by making quick plots of one or two variables at a time to find the
important relationships.
Some interesting multivariate display techniques have been devised
(Wang 1978). One method that you see all the time is a map. Maps can
be colored, shaded, distorted, and annotated to show the distribution of
almost anything over an area. Can you draw a map in LabVIEW? If you
have a map digitized as xy coordinates, then the map is just an xy plot.
Or, you can paste a predrawn map into your LabVIEW panel and over-
lay it with a graph, as we did in Figure 20.10. In this example, we cre-
ated a cluster array containing the names and locations of three cities
in California with coordinates that corresponded to the map. When the
VI runs, the horizontal and vertical coordinates create a scatter plot.
This might be useful for indicating some specific activity in a city (like
an earthquake). Or, you could add cursors to read out locations, then
write a program that searches through the array to find the nearest
city. Note that a map need not represent the Earth; it could just as well
represent the layout of a sensor array or the surface of a specimen.
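The nearest-city search mentioned above is just a minimum-distance scan over the cluster array. A hypothetical Python sketch (the names and coordinates here are invented for illustration):

# Find the city closest to a cursor position by scanning the array
# (hypothetical data; the example VI stores similar name/x/y clusters).
cities = [
    {"name": "San Francisco", "x": 12.0, "y": 64.0},
    {"name": "Sacramento",    "x": 25.0, "y": 70.0},
    {"name": "Los Angeles",   "x": 55.0, "y": 15.0},
]

def nearest_city(cursor_x, cursor_y):
    # Squared distance is enough for comparison; no square root needed.
    return min(cities,
               key=lambda c: (c["x"] - cursor_x) ** 2 + (c["y"] - cursor_y) ** 2)

print(nearest_city(20.0, 68.0)["name"])   # prints "Sacramento"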
The hardest part of this example was aligning the graph with the map.
The map that we pasted into LabVIEW was already scaled to a reason-
able size. We placed the xy graph on top of it and carefully sized the
graph so that the axes were nicely aligned with the edges of the map.
Through trial and error, we typed horizontal and vertical coordinates
into the cluster, running the VI after each change to observe the actual
city location.
City names are displayed on the graph with cursor names. On the dia-
gram, you can see a Property Node for the graph with many elements—all
cursor-related—displayed. You can programmatically set the cursor
names, as well as setting the cursor positions, marker styles, colors, and
so forth. This trick is very useful for displaying strings on graphs.
Figure 20.12 A simplified Chernoff Face VI, with only three variables: face width,
eye width, and mouth angle. This VI calls two subVIs that we wrote to draw some
simple objects using the Picture Control Toolkit.
Figure 20.13 This Draw Centered Oval VI does some simple cal-
culations to locate an oval on a desired center point, then calls
the Draw Oval Picture VI. Note the consistent use of Picture in–
Picture out to maintain dataflow.
Because the Picture functions are all very low-level operations, you will
generally have to write your own subVIs that perform a practical opera-
tion. In this example, we wrote one that draws a centered oval (for the face
outline and the eyes) and another that draws a simple mouth (see Fig-
ure 20.13). All the Picture functions use Picture in–Picture out to support
dataflow programming, so the diagram doesn’t need any sequence struc-
tures. Each VI draws additional items into the incoming picture—much
like string concatenation. Don’t be surprised if your higher-level subVIs
become very complicated. Drawing arbitrary objects is not simple. In fact,
this exercise took us back to the bad old days when computer graphics
(on a mainframe) were performed at this level, using Fortran, no less.
You could proceed with this train of development and implement a
good portion of the actual Chernoff Face specification, or you could take
a different tack. Using the flexibility of the Picture VIs, you could cre-
ate some other objects, such as polygons, and vary their morphologies,
positions, colors, or quantities in response to your multivariate data
parameters. Be creative, and have some fun.
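The Picture in–Picture out convention behaves much like string concatenation: each drawing routine appends its operations to whatever picture it receives and passes the result along, so ordinary dataflow replaces sequence structures. A minimal sketch of the same pattern (our own pseudocode-style Python, not the actual Picture Control Toolkit API):

# The "picture in, picture out" pattern: each routine appends its own
# drawing operations and returns a new picture, so plain dataflow ordering
# replaces sequence structures. (Sketch only, not the real Picture API.)

def draw_centered_oval(picture, cx, cy, width, height):
    # Convert a center point and size into a bounding rectangle,
    # then append the oval to the incoming picture.
    left, top = cx - width / 2, cy - height / 2
    return picture + [("oval", left, top, width, height)]

def draw_mouth(picture, cx, cy, width, angle):
    return picture + [("arc", cx, cy, width, angle)]

face = []                                          # an empty picture
face = draw_centered_oval(face, 50, 50, 40, 60)    # face outline
face = draw_centered_oval(face, 40, 40, 8, 5)      # left eye
face = draw_centered_oval(face, 60, 40, 8, 5)      # right eye
face = draw_mouth(face, 50, 70, 20, 30)            # mouth angle from the data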
3D Graphs
Occasionally, data sets come in a form that is best displayed in three
dimensions, typically as wire-frame or surface plots. When you
encounter such data, the first thing you should do is try to figure out
how to plot it in the conventional 2D manner. That will save you a good
deal of effort and will generally result in a more readable graphical
display. You should be aware that 3D graphing is much slower than 2D
because of the extra computations required (especially for large data
sets), so it’s generally limited to non-real-time applications. Many data
analysis applications have extremely flexible 3D graphing capability.
We prefer to analyze our more complicated data with MATLAB, HiQ,
IDL, or Igor.
If you need to display 3D data in LabVIEW, you are limited to the
Windows version, which includes 3D Graph ActiveX objects that you
drop on the front panel. A set of 3D Graph VIs manipulates that object
through Invoke nodes and Property nodes. As with any 3D plotting
problem, the real challenge is formatting your data in a manner accept-
able to one of the available plot types.
Figure 20.14 shows the 3D Surface Graph indicator displaying
a waterfall plot and a contour plot. Data in this example consists of
an array of sine waves with varying amplitude. In the real world, you
might be acquiring waveforms from DAQ hardware or an oscilloscope.
There are many possible plot styles, and a great deal of extra custom-
ization is possible through the use of the various 3D graph property
VIs. A typical surface plot (plot style cwSurface) is shown with 10 per-
cent transparency selected to allow the axes to peek through. You can
also have the graph show projections onto any of the three planes. With
the plot style set to cwSurfaceContour and xy projection enabled, a
shaded contour plot is produced.
Other available 3D display types are the 3D Parametric Graph
and 3D Curve Graph. A parametric graph is most useful for geomet-
ric solid objects where 2D arrays (matrices) are supplied for x, y, and z
data. If the data you are displaying looks more like a trajectory through
space, then the 3D Curve Graph is appropriate. It accepts 1D arrays
(vectors) for each axis.
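For reference, the data set behind Figure 20.14, an array of sine waves with random amplitudes, is easy to generate, and its shape (a 2D array of z values) is exactly what a surface plot expects. A brief illustrative sketch:

# Generate the kind of 2D data a surface or waterfall plot expects:
# each row is one sine wave with a random amplitude (illustrative only).
import math
import random

waves, samples = 20, 200
surface = []
for _ in range(waves):
    amplitude = random.uniform(0.2, 1.0)        # one amplitude per trace
    surface.append([amplitude * math.sin(2 * math.pi * i / samples)
                    for i in range(samples)])
# A parametric graph would take three such 2D arrays (x, y, z); a 3D curve
# takes three 1D arrays describing a trajectory through space.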
There’s a great deal of versatility in the 3D Graph, and you’ll have to
explore and experiment with the examples to see how your data can be
massaged into a suitable form. Hopefully, the LabVIEW team will come
up with a cross-platform solution to 3D graphing. Even when they do,
the challenge of data formatting won’t go away.
Historical note: The original 3D graphing package for LabVIEW
was SurfaceVIEW from Metric Systems, a company started by
Jeff Parker, who was one of the original LabVIEW team members.
Figure 20.14 Example of a 3D Surface Graph being used as a waterfall display. A rendered surface and a con-
tour plot are shown.
Intensity Chart
Another interesting indicator, the Intensity Chart, adds a third dimen-
sion to the ordinary strip chart. It accepts a 2D array of numbers, where
the first dimension represents points along the y axis and the second
dimension represents points along the x axis, like a block-mode strip
chart. Each value in the array is mapped to a color or intensity level.
The biggest trick is setting the color or gray-scale range to accommodate
your data. Fortunately, there is an option in the Intensity Chart pop-
up for the z-scale called AutoScale Z. That gets you into the ballpark,
Figure 20.15 The Intensity Chart in action displaying a power spectrum. White regions on
the chart represent high amplitudes. Frequency of a triangle wave was swept from 4 to 2 kHz
while this VI was running. The chart scrolls to the left in real time.
after which you can set the overall range and the breakpoints along
the scale by editing the values on the color ramp. The colors can also
be determined by an attribute node. This is discussed in the LabVIEW
user’s manual.
In Figure 20.15, the Intensity Chart gives you another way to look at
a rather conventional display, a power spectrum. In this example, you
can see how the power spectrum has changed over a period of time. The
horizontal axis is time, the vertical axis is frequency (corresponding to
the same frequency range as the Spectrum Plot), and the gray-scale
value corresponds to the amplitude of the signal. We adjusted the gray-
scale values to correspond to the interesting −20- to −100-dB range of
the spectrum.
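Conceptually, the chart is just a 2D array that grows by one spectrum per loop iteration, with each value mapped onto the color scale. A rough sketch of that bookkeeping (our own illustration, not the example VI):

# Accumulate a spectrogram: one column of amplitudes (in dB) per
# acquisition, suitable for an intensity-style display. Illustrative only;
# a naive DFT stands in for the Simple Spectrum Analyzer.
import cmath
import math

def power_spectrum_db(samples):
    n = len(samples)
    spectrum = []
    for k in range(n // 2):
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        power = (abs(s) / n) ** 2
        spectrum.append(10 * math.log10(power + 1e-12))   # avoid log(0)
    return spectrum

spectrogram = []                    # one entry per time step
for frame in range(10):
    freq = 100.0 + 50.0 * frame     # pretend the tone is sweeping upward
    samples = [math.sin(2 * math.pi * freq * t / 1000.0) for t in range(128)]
    spectrogram.append(power_spectrum_db(samples))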
Imaging
An image is just another kind of data, and once you're working in LabVIEW, that data
is going to be easy to handle. In fact, it's no harder to acquire, analyze,
and display images than it is to acquire, analyze, and display analog
waveforms. Good news, indeed.
The basic data format for an image is a two-dimensional (2D) matrix,
where the value of each element (a pixel) is the intensity at a physi-
cal location. Image acquisition involves the use of a frame grabber
for live video signals or the use of mathematical transforms applied to
other forms of data to convert it into a suitable 2D format. Standard
image processing algorithms operate only on this normally formatted
image data.
Image processing algorithms can be broadly grouped into five areas.
Most modern desktop computers make decent configurations for imaging
work. Let's see what kinds of hardware you might consider above and
beyond your generic data acquisition system.
Video I/O devices. Unless your computer already has built-in video
support (and LabVIEW support for that), you will need to install a
frame grabber in order to acquire images from cameras and other
video sources. A frame grabber is a high-speed ADC with memory that
is synchronized to an incoming video signal. Most frame grabbers have
8-bit resolution, which is adequate for most sources. It turns out that
cameras aren’t always the quietest signal sources, so any extra bits of
ADC resolution may not be worth the extra cost. Preceding the ADC,
you will generally find an amplifier with programmable gain (contrast)
and offset (brightness). These adjustments help match the available
dynamic range to that of the signal. For the North American NTSC
standard (RS-170) video signal, 640 by 480 pixels of resolution is com-
mon. For the European PAL standard, up to 768 by 512 pixels may be
available, if you buy the right grabber. Keep in mind that the resolu-
tion of the acquisition system (the frame grabber) may exceed that of
the video source, especially if the source is videotape. Both color and
monochrome frame grabbers are available. Color images are usually
stored in separate red-green-blue (RGB) image planes, thus tripling
the amount of memory required.
Frame grabbers have been around for many years, previously as
large, expensive, outboard video digitizers, and more recently in the
form of plug-in boards for desktop computers. With the birth of mul-
timedia, frame grabbers have become cheaper and even more capable
and are sometimes built into off-the-shelf computer bundles. The trick
is to choose a board that LabVIEW can control through the use of suit-
able driver software or DLLs. Alternatively, you can use a separate
software package to acquire images, write the data to a file, and then
import the data into LabVIEW, but that defeats the purpose of building
a real-time system.
In general, there are three classes of frame grabbers. First, there are
the low-cost multimedia types, sometimes implemented as an external
camera with a digitizer, such as the Connectix Quickcam. Also in this
class are cheap plug-in boards and the built-in video capability in some
Macintosh models. The primary limitations on these low-cost systems
are that they include no hardware triggering, so you can’t synchro-
nize image acquisition with an experiment, and they often contain an
autogain feature. Like automatic gain control in a radio, this feature
attempts to normalize the contrast, whether you like it or not. This can
be a problem for quantitative applications, so beware!
The second class includes midrange plug-in boards, such as the
BitFlow Video Raptor, the IMAQ PCI-1408, and Scion LG-3. These boards
have triggering, hardware gain and offset control, and various synchro-
nization options. They offer good value for scientific applications.
High-performance boards make up the third class. In addition to the
features of the midrange boards, you can install large frame buffers on
these boards to capture bursts of full-speed images for several seconds,
regardless of your system’s main memory availability or performance.
Special video sources are also accommodated, such as line scan cam-
eras, variable-rate scanning cameras, and digital cameras. Some DSP
functionality is also available, including built-in frame averaging, differ-
encing, and other mathematical operations, in addition to full program-
mability if you’re a C or assembly language hacker. Examples are the
BitFlow Data Raptor, Imaging Technology IC-PCI, and the Scion AG5.
Now that you’ve got an image, how are you going to show it to some-
one? For live display you can use your computer’s monitor from within
LabVIEW or another application; or you might be able to use an exter-
nal monitor. Hard-copy or desktop-published documents are always in
demand.
One catch in the business of video output from your computer is the
fact that high-resolution computer displays use a signal format (RGB
component video) that is much different from the regular baseband
NTSC or PAL standard format required by VCRs. There are several
solutions (again, multimedia demands easy and cheap solutions, so
expect to see some better ways of doing this). First, you can buy special
converter boxes that accept various RGB video signals and convert
them to standard composite video.
IMAQ components
A complete IMAQ application really consists of two major parts. First is
the image acquisition driver, called NI-IMAQ, that talks to the frame
grabber. This driver, including its image acquisition boards, is free from
National Instruments. If you want to use a non-NI board, you can often
obtain driver VIs that are compatible with NI-IMAQ from Graftek. The
NI-IMAQ driver is quite similar to the NI-DAQ driver, including the
use of refnums and error I/O as common threads. You call the IMAQ
Init VI to establish a new session with a channel on your board and
then call the various acquisition VIs, which are organized in the familiar
easy and advanced levels. If you've used DAQ, this is a piece of cake.
The other part of IMAQ is the Vision package, which is a very
comprehensive image processing toolkit that costs about as much as
LabVIEW itself. Without Vision, you’re limited to the basic acquisi-
tion operations and some file I/O. With it, you’re limited only by your
knowledge of image processing. First-time users find that a bit daunt-
ing, because there are literally hundreds of functions in Vision, many
of which have optional parameters that seem to require a Ph.D. to
comprehend (what’s a Waddel Disk Diameter, anyway?). Your best
resources are the example VIs, the IMAQ manuals, and image process-
ing textbooks. Even so, we’ve been pleasantly surprised at how easy it
is to grab an image and extract key features. Let’s look further into how
you use IMAQ.
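Figure 20.16 shows the graphical version of the basic acquire-and-display loop. As a rough textual outline of the same flow (the function names below are placeholders standing in for the IMAQ VIs, not the actual NI-IMAQ API):

# Hypothetical outline of the IMAQ call sequence; imaq_init, imaq_snap,
# and imaq_close are placeholders, not real NI-IMAQ calls.
def imaq_init(interface):
    return {"interface": interface}, None          # (session, error)

def imaq_snap(session):
    return [[0] * 640 for _ in range(480)], None   # (image, error)

def imaq_close(session, error):
    return error                                   # release the session

session, error = imaq_init("img0")                 # like IMAQ Init
for _ in range(10):                                # grab and display frames
    if error:
        break                                      # error I/O short-circuits
    image, error = imaq_snap(session)
    # ... display or process the image here ...
error = imaq_close(session, error)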
Figure 20.16 A simple IMAQ application that repeatedly grabs an image from a plug-in frame grab-
ber and displays it.
Image files. IMAQ Vision has a set of VIs that make it easy to read
and write files in standard image formats, including TIFF, JPEG, BMP,
AIPD, and PNG. No data conversion steps are required. All you need
to do is wire your image handle to IMAQ Write File or IMAQ Read
File.
Figure 20.18 An image processing example that uses thresholding, morphology, and spatial filter-
ing to locate concentric circles. It’s deceptively simple.
Our thresholded image isn't perfect, as you can see, because it con-
tains some holes in the dark area due to noise in the original image.
There are many ways to filter out noise-related pixels and other small
aberrations, but the techniques that work best in this case are dila-
tion, erosion, opening, and closing.
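For intuition, here is a tiny pure-Python sketch of these operations on a binary image (opening is erosion followed by dilation; closing is the reverse). It is illustrative only and has none of IMAQ's efficiency or options:

# Minimal binary morphology on a list-of-lists image (1 = object pixel),
# using a 3-by-3 neighborhood. Opening removes small specks; closing
# fills small holes. Illustrative only.
def _neighborhood(img, r, c):
    rows, cols = len(img), len(img[0])
    return [img[rr][cc]
            for rr in range(max(0, r - 1), min(rows, r + 2))
            for cc in range(max(0, c - 1), min(cols, c + 2))]

def erode(img):
    return [[1 if all(_neighborhood(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def dilate(img):
    return [[1 if any(_neighborhood(img, r, c)) else 0
             for c in range(len(img[0]))] for r in range(len(img))]

def opening(img):
    return dilate(erode(img))      # removes isolated bright specks

def closing(img):
    return erode(dilate(img))      # fills small dark holes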
Figure 20.19 IMAQ images can be converted to LabVIEW Pictures for on-panel display.
Other ways to display images. You don’t have to display your images in
a floating IMAQ window, especially if you don’t need any user inter-
action. A simple way to place the image on a LabVIEW panel is to
use a Picture indicator. Figure 20.19 shows a practical example that
includes resampling of the image to adjust its size without losing too
much information. The IMAQ Resample VI lets you choose the X and
Y resolution and one of four methods for interpolation. This step is not
required if your IMAQ image is exactly the size that you want it to be
in the Picture indicator. The next step is to transfer the image data
from the special IMAQ format into a LabVIEW 2D array, which is done
by the IMAQ ImageToArray VI. That’s a useful function any time
you need to manipulate image data in a manner not already included
in IMAQ. Its complement is IMAQ ArrayToImage. Finally, you use
one of the Picture functions such as the Draw 8-bit Pixmap VI to
convert raw 2D data into a picture data type for display.
We added an extra feature to this example. The Picture indicator is
automatically sized to fit the image. You get the array dimensions (cor-
responding to image width and height), bundle them up, and pass them
to a Property Node for the Picture. The DrawSizeArea property is the
one you need to set.
Figure 20.20 The Intensity Graph displaying an image from a binary file. The
user has to know the size of the image—particularly the number of rows. The
raw data string is converted to an array, which is then transformed into a 2D
array suitable for display.
A Property Node for the Intensity Graph sets the y- and x-axis scales to fit the actual number of
rows and columns in the image. Note that the y axis has zero at the top,
rather than its usual place at the bottom. That makes the image come
out right-side up. You also have to select Transpose Array from the
pop-up menu on the Intensity Graph to swap the x- and y-axis data.
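The string-to-2D-array step amounts to reading the raw bytes and reshaping them using the known row and column counts. A hypothetical sketch (the file name is invented; the 195-by-160 size corresponds to the 31,200-byte file of Figure 20.20):

# Read a raw 8-bit image file and reshape it into rows x columns.
# The user must know the image size; the file name here is hypothetical.
rows, cols = 195, 160

with open("image.raw", "rb") as f:
    raw = f.read()

assert len(raw) >= rows * cols, "file is smaller than the stated image size"

# One list per row; each byte value (0 to 255) is a pixel intensity.
image = [list(raw[r * cols:(r + 1) * cols]) for r in range(rows)]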
An important tweak that you can make is to match the number of
displayed pixels to the data. Ideally, one array element maps to one
pixel, or at least there is an integral ratio between the two. This reduces
aliasing in the image. In this example, we made this adjustment by set-
ting the Plot Area Size (x and y values) with the Property Node.
The Ramp control (part of the Intensity Graph) sets the data range
to 0–200, corresponding to a pleasing gray scale ranging from black to
white. You can adjust the gray scale or color gradations by editing the
numeric markers on the Ramp, or by popping up on the Ramp control
to add and modify markers.
Sound I/O
Recording and reproducing sound can be useful for scientific analysis,
operator notification, or as a novelty. The whole idea is to convert acous-
tic vibrations into an electrical signal (probably with a microphone),
digitize that signal, and then reverse the process. In most cases, we use
the human audible range of 20 Hz to 20 kHz to define sound, but the
spectrum may be extended in both directions for applications such as
sonar and ultrasonic work. Depending upon the critical specifications
of your application, you may use your computer’s built-in sound hard-
ware, a plug-in board (DAQ or something more specialized), or external
ADC or DAC hardware. As usual, a key element is having a LabVIEW
driver or other means to exchange data with the hardware.
The simplest sound function that’s included with LabVIEW is the
Beep VI, found in the Sound function palette. On all platforms, it plays
the system alert sound through the built-in sound hardware.
Each platform also has native sound data formats that the operating
system can conveniently record and play back through standard
hardware. Therefore, it's desirable to use those formats for your
own DAQ-based sound I/O. In the sections that follow, we'll look at
some options for handling those formats.
Sound input
It’s very easy to set up an input data streaming application, as shown
in Figure 20.22. The SI Config VI allocates an input buffer and defines
the type of data and sampling rate, much like its DAQ cousin, AI
Config. Then you call SI Start to begin a continuous buffered acquisi-
tion. There’s typically no triggering hardware available on consumer-
grade sound boards, so the SI library does not support triggering.
(Perhaps National Instruments could add software triggering like the
NI-DAQ driver has.) Once the acquisition is started, you put the SI
Read VI in a loop to read buffers as they become available. If you don’t
read often enough, you’ll get a buffer overflow error. Adding a timer in
the loop prevents the SI Read operation from taking up all the CPU
time while it waits for the next available buffer. Data is returned as I16
or I8 arrays; stereo data is 2D. When you’re done, call SI Stop and then
SI Clear to free up memory.
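The sequence reads like any buffered acquisition. A hypothetical outline of the structure (the lowercase helper names below are stand-ins for the SI VIs, not a real API):

# Hypothetical outline of the buffered sound-input sequence; si_config,
# si_start, si_read, si_stop, and si_clear are placeholders for the SI VIs.
import time

def si_config(rate, bits, stereo, buffer_size):
    return {"rate": rate, "bits": bits,
            "stereo": stereo, "buffer_size": buffer_size}   # a task ID

def si_start(task):
    pass                          # begin continuous, buffered acquisition

def si_read(task):
    return [0] * task["buffer_size"]   # the next available buffer of samples

def si_stop(task):
    pass

def si_clear(task):
    pass                          # free the input buffer

task = si_config(rate=44100, bits=16, stereo=False, buffer_size=4410)
si_start(task)
for _ in range(20):               # read buffers as they become available
    data = si_read(task)
    time.sleep(0.05)              # a short wait keeps the loop from hogging the CPU
si_stop(task)
si_clear(task)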
If all you want to do is a single-shot acquisition of sound, there’s a
simpler VI in the sound palette, called Snd Read Waveform. It com-
bines all the steps of Figure 20.22, but without the While Loop, and
returns all possible data types (mono, stereo, 8- and 16-bit).
Figure 20.22 A chain of sound input VIs performs a hardware-timed, buffered acquisi-
tion from your computer’s built-in sound hardware.
Sound output
Sound output functions are similar to those for input, with the addi-
tion of a few extra features that help control the flow of data. In
Figure 20.23, a waveform is computed and then written to the output
buffer by the SO Write VI. In contrast with sound input behavior, this
buffer size is variable and depends only on the amount of data that
you pass to SO Write. Once the buffer is filled, you call SO Start to
generate the signal. If the buffer is fairly long and you need to know
when it’s done, you can call the SO Wait VI, which returns only when
the current sound generation is complete. It’s also possible to call the
SO Pause VI to temporarily stop generation; call SO Start again to
restart it.
For single-shot waveform generation, you can use the Snd Write
Waveform VI, which performs the configure, write, start, wait, and
clear steps.
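The chirp in Figure 20.23 is simply a sine wave whose frequency ramps from one value to another over the length of the buffer. Generating such an array takes only a few lines (illustrative math, not the example VI):

# Generate a linear chirp: the frequency sweeps from f1 to f2 over the
# buffer, which is then written to the sound output. Illustrative only.
import math

sample_rate = 44100
duration = 0.5                      # seconds
f1, f2 = 200.0, 2000.0              # start and end frequencies in hertz
n = int(sample_rate * duration)

chirp = []
for i in range(n):
    t = i / sample_rate
    # Phase of a linear sweep: 2*pi*(f1*t + (f2 - f1)*t*t / (2*duration))
    phase = 2 * math.pi * (f1 * t + (f2 - f1) * t * t / (2 * duration))
    chirp.append(int(32000 * math.sin(phase)))   # scale to the 16-bit range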
Sound files
Included with the sound VIs are utilities to read and write Windows
standard .wav files with all available data types and sample rates.
They’re called Snd Read Wave File and Snd Write Wave File, and
Figure 20.23 Sound output is as easy as sound input. In this example we generate a cool sound-
ing chirp each time the Play button is pressed.
they work on all platforms. The interface to these VIs meshes with the
sound-in and sound-out VIs by including the sound format cluster and
the four data array types.
For Macintosh users who need access to AIFF format files, Dave
Ritter of BetterVIEW Consulting has created a CIN-based VI to play
them. It’s available at www.bettervi.com.
Bibliography
Cleveland, William S.: The Elements of Graphing Data, Wadsworth, Monterey, Calif.,
1985.
Gonzalez, Rafael C., and Paul Wintz: Digital Image Processing, Addison-Wesley, Reading,
Mass., 1987.
Tufte, Edward R.: The Visual Display of Quantitative Information, Graphics Press,
Cheshire, Conn., 1983.
Wang, Peter C. C.: Graphical Representation of Multivariate Data, Academic Press,
New York, 1978.