Preface: Wind Energy Modeling and Simulation
Paul Veers
NREL
People are modelers. Every human being uses mental models to capture expertise and facilitate success.
Mental models allow us to think ahead, essentially running thought simulations of possible outcomes,
evaluating what might happen in the future given different choices in the present. Specific models are
created for each location, specialty or vocation. The mental calculations that go on are automatic,
almost subconscious in some ways. The models are based on experience and are updated as these
experiences expand. The models, over time and experience, become more comprehensive and complex.
People use mental models to process the endless streams of data acquired by the senses every second:
to organize these data, make sense of what is happening around us, and plan how we are going to
respond. When I go into my kitchen to get a drink, find some food, begin a meal, or even to do some
cleaning, I know what is behind closed doors, how different implements are used, and where to find the
food. When a snake found its way into the middle of the kitchen floor, there was no place in this mental
model for such an object. It was interpreted as a ribbon, perhaps a belt, or part of an apron. Once its
motions confirmed the reality, my mental model of objects in the kitchen was modified forever—slender
serpentine objects on the floor now trigger different emotional responses than they ever did before.
Mental models developed by one set of experiences (data sets) need to be continuously revised with the
expansion of experience and exposure to new data.
The human use of scientific knowledge is therefore one of accessing and exercising the various models
we possess of the way the natural world works. Human engineering extends these models to the physics
of constructed systems and devices built for individual or societal benefits. When engineers and
scientists work on the advancement of technology, they are exercising not only the innate mental models
that come from learning and experience but also mathematical models that enable them to work
beyond what can be held in a single human mind. Math is the language of scientific knowledge; it is
widely recognized that if you cannot describe a phenomenon with a mathematical model, you really
don’t understand it.
Mathematical models of the natural world go way back in history, as far back as there are human
records. The ancient Egyptians and Babylonians were doing sophisticated mathematical analyses of the
structures they were building. Leonardo da Vinci, in the fifteenth century, expressed the view that math
is essential to the study of science. He noted, “There is no certainty in sciences where mathematics
cannot be applied.” [1. Paris Manuscript, Notebooks/J.P. Richter, 1158,3: James McCabe, “Leonardo da
Vinci’s De Ludo Geometrica,” Ph.D. Dissertation, UCLA, 1972, as quoted in Isaacson, Leonardo da Vinci,
Simon & Schuster, New York, 2017, p 200.] However, he was also convinced that nature, appearing
continuous in character, is not well suited to computational solutions. He wrote, “Arithmetic is a
computational science in its calculation, with true and perfect units, but it is of no avail in dealing with
continuous quantity.” [2. Codex Atlanticus (1478-1518) Biblioteca Ambrosiana, Milan, as quoted in
Isaacson, Leonardo da Vinci, Simon & Schuster, New York, 2017, p 201]. Leonardo therefore focused on
geometry to describe the phenomena he discovered. It was not until Newton and Leibniz invented the
calculus that mathematical descriptions of continuous natural phenomena through differential
equations brought the continuous into the realm of the discrete. However, without large-scale
computing, the use of differential equations was limited to those with closed-form solutions, or at most
very simple numerical approximations. The direct numerical integration of the continuous equations of
physics remained intractable. Yet, the differential view of dynamics did presage the eventual use of
computational systems that chop up the continuous fields and timelines of nature into large numbers of
simple equations amenable to large-scale numerical solution. But the ubiquitous use of discretized
computational methods needed to wait further centuries for the development of computers that were
up to the task.
Long before computers of sufficient size to solve meaningful continuum mechanics problems were
created, people were dreaming of ways that discretizations could lead to enhanced solutions. Haupt, et
al., in Chapter 3 of Volume 1 of this work, tell of a dream of Lewis Fry Richardson, atmospheric scientist
and namesake of the Richardson number, who as far back as 1904 was suggesting how these massive
computational systems might work for numerical weather prediction. He suggested a thought
experiment of a massive round theater (picture the galactic senate scene in the Star Wars movies)
wherein each seat was a location on the globe and the occupant was solving a single equation. Their
solution would be displayed so those in the seats surrounding them could see their answers and use
them in their own computations, thus connecting the calculations step by step around the entire planet.
In the realm of structural analysis, massive matrix equations describing the truss-work structure of giant
Zeppelins were being solved by sheer brute force as teams of arithmeticians used mechanical adding
machines and slide rules to invert the matrices. Similar to the way those who used machines to write
with fixed typefaces were known as “type-writers,” the massive groups using simple machines to do
individual arithmetic operations in concert were known as “computers.” So, in an irony of progress,
while in the past we tried to make people act like computers, we now try to make computers act like
people.
The last few decades have revealed an amazing transformation in the practice of science and
engineering driven by the continuous development of larger and faster computers. This progress has
been sustained over so many decades that it hardly seems to be a breakthrough, but it has certainly
been a revolution. These massive computational capabilities make it possible to envision the creation of
models of unprecedented scope and resolution. Parallel architectures with ever-faster processors make
it appear that Richardson’s vision has indeed become a reality, but with each seat containing the
computational power that just a couple decades ago did not exist in even the fastest of single cores.
Many phenomena we would like to model are described by their governing equations and defined by
unique sets of boundary conditions so complex that the outcome cannot be visualized directly by the
human brain. The solution is only made observable when the computational models are solved, and the
results are displayed in graphical form. Computational fluid dynamics (CFD) is one such area in which the
use of color-coded numerical results has brought previously unimaginable complexity to light, so much
so that practitioners now joke that uninformed use of the techniques, producing questionable results,
actually delivers “colorful fluid dynamics.” When used properly, computational models are now
capable of delivering a high enough resolution to reveal intricacies and phenomena that previously lay
hidden within the mathematics.
The computational models and problems being solved have now progressed to the level where
computational results, well validated in controlled experiments where measurements can be obtained,
provide insight into domains where experimental results are unattainable. Phenomena that lie hidden
where instrumentation cannot reveal the details of interest can be explored with a computational
model that estimates the finer details by refining the resolution and time steps as far
as necessary. This may not be true of all quantities of interest, but it is certainly true for some and is an
increasingly common usage for high-fidelity models of specific phenomena.
And yet, we are limited. As a graduate student, I took classes in finite element methods from one of the
leaders of the era: Thomas J. R. Hughes. He liked to tell students that the size and speed of computers
are constant: they are always too small and always too slow and will always be that way. The nature of
technological progress is to continue to present problems that push the boundaries of what is currently
possible. The problem of atmospheric turbulence is a good example. We know the governing equations
for fluid flow that the atmosphere must follow, from the largest synoptic scales down to the tiniest
eddies of turbulence. Berg and Kelly, in their Volume 1 chapter on turbulence modeling, note that using
direct solution of the Navier-Stokes equations to resolve the smallest eddies in an atmospheric flow with
a domain large enough to also capture the major length scales will require discretization of at least a
billion-billion (10^18) cells—well out of the reach of even the exascale machines that are still on the
drawing board. Therefore, we need to focus models on capturing the primary scales of interest for
particular investigations so that the discretization remains tractable. Not all phenomena can be included
in every computational investigation. This is certainly the case now with mesoscale models separate
from microscale models separate from turbine models, which in turn are separate from detailed models
of turbine subsystems, from blade materials to bearings. While computational models of each scale are
already well developed, the interaction between the scales is still a significant challenge to define and
execute with a high level of accuracy. These interfaces are often quite low fidelity and have embedded
approximations that limit the accuracy of the total cascade from the largest to the smallest scales.
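To make the scale of that cell-count estimate concrete, a rough back-of-the-envelope calculation is sketched below in Python. The domain dimensions and smallest-eddy size are illustrative assumptions, not the values used by Berg and Kelly, but they show how quickly the count outruns any conceivable machine.

```python
# Rough estimate of the grid needed to directly resolve all eddies in an
# atmospheric boundary-layer volume. The domain size and smallest-eddy
# scale below are illustrative assumptions only.
domain_x = 10_000.0   # streamwise extent, m (order of the largest eddies)
domain_y = 10_000.0   # lateral extent, m
domain_z = 1_000.0    # boundary-layer depth, m
eta = 1.0e-3          # smallest dissipative eddy size, m (~ millimeters)

cells = (domain_x / eta) * (domain_y / eta) * (domain_z / eta)
print(f"cells required: {cells:.1e}")  # ~1e20, well beyond a billion-billion
```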
The opportunity now exists to use the increasing size and speed of computers to again expand the scope
and complexity of the models by attempting to bridge scales and combine solutions of previously
separate problems into a single modeling domain. It is this opportunity that is driving a revolution in
modeling and simulation for wind energy technology. It creates a grand challenge in computational
science, which is the topic of the first chapter of Volume 1 by Robinson and Sprague. The mathematical
models that capture the physics from one scale and use that information to provide boundary
conditions for the next higher resolution scale are forced to make compromises and simplifications that
will inherently lose information. When the scales are combined in a single computation, that information
is retained implicitly, offering the hope of truly enhanced fidelity in the combined result.
The first successful generation of commercial wind turbines was based on combined models of the
aerodynamics and the structures—aeroelastic models. My personal exposure to these models in the
early 1980s was to use them to attempt to predict fatigue lifetime of the blades, many of which were
failing in the field. When simulated load time histories were compared with measured load time
histories, it became readily apparent that the simplified steady-wind input to the aeroelastic models,
which included wind shear but no turbulence, was completely inadequate for matching the complexity
revealed by the measurements.
Although these early aeroelastic models had successfully coupled aerodynamic and structural elements,
they needed to couple another element—atmospheric turbulence—to be predictive of turbine loads
and hence fatigue durability. It was not until computational models were constructed that combined the
atmospheric turbulence and shear AND the rotational dynamics of the blades moving through the wind
AND the aerodynamics that generate a force on the blades AND the structural dynamics that cause
amplification of those loads at natural frequencies that the nature of the measurements could be
reproduced. Only then was it possible even to consider the fatigue implications of operating in the
atmosphere. This began the long road of adding physical processes into the models of wind turbines so
that the bundled collection could accurately represent the actual environments of the turbine and be a
basis for the design loads it must survive.
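A minimal sketch of this rotational-sampling effect is given below, assuming only a power-law shear profile and one slow coherent gust rather than a full turbulence simulation; the rotor speed, blade radius, and wind parameters are illustrative. Even this toy wind field shows how a point on the rotating blade converts slowly varying inflow into once-per-revolution load cycles that a steady, uniform-wind model cannot reproduce.

```python
import numpy as np

# A point on the blade at a given radius sweeps up and down through the
# sheared, gusty inflow, so the wind speed it "sees" oscillates at the
# rotation rate. All numbers are illustrative.
omega = 2 * np.pi * 15 / 60          # rotor speed, rad/s (15 rpm)
radius = 40.0                         # blade station radius, m
hub_height = 90.0                     # m
t = np.linspace(0.0, 600.0, 20_000)   # ten minutes of simulated time, s

z = hub_height + radius * np.cos(omega * t)   # height of the blade station

u_mean, alpha = 8.0, 0.2
u = u_mean * (z / hub_height) ** alpha        # power-law wind shear
u += 1.5 * np.sin(2 * np.pi * t / 60.0)       # one slow coherent gust (60 s)

print(f"peak-to-peak wind speed sampled by the blade: {u.max() - u.min():.2f} m/s")
```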
Wind energy derives much of its cost effectiveness from the fact that extracting energy from the air
relies on the continuum properties of the fluid. The solid rotor does not need to actually contact
particles of air to extract energy from them. The airfoil produces a lift force that is felt throughout the
continuum and extracts energy from the moving mass of air far from the structure. Therefore, a very
slender blade structure can rapidly sweep through a mass of air, creating a pressure reaction that slows
the air across the entire swept area both downwind and upwind of the rotor. Even the earliest
aerodynamic models recognized this two-way coupling by including the “induction effect” in the flow of
the air into the rotor, correcting for the reduced velocity of the upstream air, as described by Hansen in
Chapter 1 of Volume 2. However, when turbulence models were combined, the induction was often still
approximated as relatively steady and uniform across the rotor, even though the inflow and the
aerodynamic forces in the new combined models are far from uniform. A more accurate coupling of the
inflow and the turbine requires individual blade forces to be applied to the turbulent fluid, as described
by Moriarty and Churchfield in Chapter 6 of Volume 1. The blade is modeled as a lifting line,
approximating the true three-dimensional flow around the blade with two-dimensional simplifications
using lift and drag correction factors evolved through decades of research on both helicopters and wind
turbines. The result has been a significant improvement in the understanding of the interaction of the
turbine with turbulent wind fields, especially in how the wake is generated and convected downstream.
These full-wind-plant models are creating new understanding of the nature of the flow through and
around multiple wind turbines in a plant and initiating a flurry of activity around what to do with it.
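The simplest form of the induction correction mentioned above is one-dimensional momentum (actuator-disc) theory, sketched below. It is offered as a textbook ancestor of the induction and actuator-line methods discussed in the chapters cited, not as a method from any chapter: the rotor slows the incoming flow by the factor (1 - a), and the extracted power follows from the momentum and energy balance.

```python
import numpy as np

# One-dimensional momentum (actuator-disc) theory: the rotor slows the
# freestream U to U*(1 - a) at the disc, giving the classic thrust and power
# coefficients as functions of the axial induction factor a.
a = np.linspace(0.0, 0.5, 501)    # axial induction factor
ct = 4 * a * (1 - a)              # thrust coefficient
cp = 4 * a * (1 - a) ** 2         # power coefficient

i = int(np.argmax(cp))
print(f"optimum induction a = {a[i]:.3f}, Cp = {cp[i]:.4f} (Betz limit 16/27 = 0.593)")
```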
Two-dimensional airfoil theory was used very successfully early in the development of wind turbines and
remains the basis for most design tools used today, as described in Chapter 1 of Volume 2. Early
researchers also coupled the elastic distortions of the structure due to the aerodynamic loads into the
relative velocities and angles of attack, a coupling that resulted in the aeroelastic models described in
Chapter 2 of Volume 2. Validation studies have shown that corrections to two-dimensional airfoil
approximations can be very accurate when applied within the domain of their prior validation and
tuning. They also have the tendency to produce results of variable accuracy when taken out of their
comfort zone and applied to significantly different configurations. More accurate representation of the
blade aerodynamics requires coupling that resolves the actual blade shape and computes the three-
dimensional flow around it as it rotates through the turbulent winds. Vijayakumar and Brasseur in
Chapter 2 of Volume 1 give some insight into how to implement these “blade-resolved” models in
conjunction with a realistic inflow. Coupling from the flow field down to the surface of the blade
removes approximations of two-dimensional airfoil theory and attempts to solve for the surface
pressures everywhere on the blades. Removing the two-dimensional airfoil simplification allows
innovative concepts that are not well captured by those assumptions to be modeled with a higher-fidelity,
three-dimensional computational simulation. By resolving the physics directly, the high-fidelity model
also has the potential to provide more accurate results in applications for which the lower-order models were never tuned.
The size and computational complexity of blade-resolved models make them capable of simulating even
the most complex design alternatives, but impractical for fleshing out the response over the entire
design envelope. Only a few inflow situations can be simulated with a high-fidelity model, whereas
typical design calculations require that a turbine structure be evaluated with respect to thousands of
potential inflow and control realizations. There is still a need to capture the fundamental physics of the
machine, but with the lower computational intensity found in the modeling and simulation capabilities
focused on detailed design, as described in Volume 2.
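To illustrate why the full design envelope is out of reach for blade-resolved simulation, the sketch below tallies a plausible set of load-case counts. The categories echo the kinds of conditions a design standard requires, but the numbers are purely illustrative and not taken from any standard or chapter.

```python
# Illustrative count of the aeroelastic simulations behind one design
# evaluation. Categories and counts are placeholders, not the requirements
# of any particular design standard.
wind_speed_bins = 12     # cut-in to cut-out in 2 m/s steps
turbulence_seeds = 6     # stochastic realizations per wind speed
yaw_misalignments = 3    # e.g. -8, 0, +8 degrees
load_cases = 15          # normal operation, faults, shutdowns, parked, ...

simulations = wind_speed_bins * turbulence_seeds * yaw_misalignments * load_cases
print(f"roughly {simulations} ten-minute simulations per design iteration")
```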
Modeling a wind plant accurately requires not only coupling down to the millimeter-thick boundary
layer of the airfoils on the blades but also coupling upward to where the flows originate in the large-scale
forces that drive the atmospheric mesoscale, which spans hundreds of kilometers and encompasses the
planetary boundary layer. Large-scale energy transfer mechanisms, thermal mixing, high- and low-level
jets, and other mesoscale effects are what drive the winds at the surface and determine the nature of
the turbulence as well. Haupt, et al. provide a sweeping and comprehensive explanation of the nature of
the weather modeling challenge as it has progressed throughout the last few decades and how it now
relates to wind energy in Chapter 3 of Volume 1. Mirocha, in Chapter 4 of Volume 1, describes how the
meso-to-microscales are bridged to bring the atmospheric flow features down to the scale of the
turbines and why that is important. This pair of chapters is the foundation for the inputs to all the other
chapters in the book.
Another bridge between mesoscale effects and the wind plant is captured in the models that forecast
wind plant power production for use in planning and operating the electrical grid. Weather forecasting is
a rapidly improving field with greater accuracy driven by the revolution in computer size and speed. But
wind plant power production depends on a portion of the weather forecast, wind speed at 100‒200
meters above ground, that has not been the central focus of weather modeling. In addition, the plant
power output depends on many factors controlling the dynamics of plant production, such as turbine
power versus wind speed characteristics, wake interactions, terrain effects, and other major challenges
to wind plant modeling. In Chapter 8 of Volume 1, Zack shows how these forecasts are derived and
highlights some of the challenges.
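The "power versus wind speed characteristic" mentioned above can be illustrated with the simple power-curve sketch below; the rotor size, efficiency, and cut-in and cut-out speeds are illustrative assumptions rather than values from Zack's chapter.

```python
import numpy as np

# A minimal generic power curve: cubic growth with wind speed below rated,
# a cap at rated power, and zero output outside the operating range.
# All parameters are illustrative.
rho = 1.225            # air density, kg/m^3
diameter = 130.0       # rotor diameter, m
cp = 0.45              # assumed aerodynamic efficiency below rated
rated_power = 5.0e6    # W
cut_in, cut_out = 3.0, 25.0   # m/s

def power(u):
    """Electrical power (W) at hub-height wind speed u (m/s)."""
    u = np.asarray(u, dtype=float)
    area = np.pi * (diameter / 2.0) ** 2
    p = 0.5 * rho * area * cp * u ** 3
    p = np.minimum(p, rated_power)               # pitch control caps output
    return np.where((u < cut_in) | (u > cut_out), 0.0, p)

print(power([5.0, 10.0, 15.0]) / 1e6)            # output in MW
```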
Of potentially greater value than simply forecasting output is controlling output. van Wingerden et al., in
Chapter 7 of Volume 1, describe the relatively new field of wind-plant control. Appropriate attention to
plant control can increase efficiency of the plant by better managing the way the individual turbines
operate with respect to each other, managing the wind resource and turbine-operating parameters to
achieve a plant-wide result that is greater than the sum of the parts. Actively managing the wind by
steering wakes has shown potential to increase plant output. And actively controlling the wind-power
output to meet grid-driven demands, such as ramping and fault ride-through, is just beginning to be
considered in grid-operating strategies. With advances in both wind plant control and wind plant
optimization (Ning and Dykes, Chapter 8 of Volume 2), it becomes possible to consider the features and
capabilities of the plant controller in the original design of the wind plant, a capability rarely exercised at
this point but showing great promise.
Following the flow of forces downstream from the atmosphere to the turbine to inside the turbine
depends on the aeroelastic model (see Hansen, Chapter 2, Volume 2), which calculates the system loads
driven by the atmospheric turbulence (see Berg and Kelly, Chapter 5, Volume 1). Models of machine
elastic dynamics themselves are highly complex and nonlinear. These aeroelastic models have become
sufficiently accurate that they have established the scientific underpinnings for the highly
successful march of turbine technology to low cost and high reliability. Turbulence models used in
design, simple as they are, have also been highly effective at approximating the atmospheric
environments used to define designs that keep the primary structure safe from extreme loads with
adequate fatigue durability. This is achieved by coupling the models of the machine dynamics to models
of the individual components: the rotor, drivetrain, foundation, and turbine controller. Each of these
subsystems is addressed in a separate chapter within Volume 2.
Internal loads at critical interface locations are taken from the aeroelastic model and applied to the
individual subsystems for detailed design evaluation. Within a full-system simulation, each of the
subsystems is represented with only a few key degrees of freedom while suppressing all the details. This
results in significant simplifications but does accurately represent the interface loads between
subsystems. Designers of the subsystems can then take those interface loads and use them to estimate
detailed requirements within the subsystem. It has thus been a highly effective practice for subsystem
experts to focus their modeling efforts on the particular issues within their specialty. The drivetrain can
be designed with a comprehensive definition of the loads on the low-speed shaft as described by Zhao
and Guo in Chapter 4 of Volume 2. Bearing location and configuration, gearbox design features, and
generator selection are all driven by the rotor loads. The foundations of land-based systems are similarly
designed given the loads on the tower. One place where a separation has been shown to be problematic
is for offshore floating systems. There is enough interaction between the motions of the floating
foundation and the rotor that the aerodynamics of the wind and hydrodynamics of the waves need to
be considered together, as described by Matha, et al. in Chapter 5 of Volume 2.
The rotor, however, is the system that generates all the atmosphere-driven loads that are then passed
to other subsystems, and therefore requires special attention for optimization. Bottasso and Bortolotti in
Chapter 3 of Volume 2 describe the level of detail and integration between the aeroelastic and
structural modeling required to conduct detailed design optimization of the rotor subsystem. Ning and
Dykes continue the theme in Chapter 7 of Volume 2 with a detailed look at optimization techniques that
are especially useful to enable the rotor optimization to be both tractable and linked to the greater
challenge of fully integrated turbine system design.
Every design simulation is forced to integrate the controller into the design load calculations from the
very start. One cannot model the aeroelastic response of a wind turbine in the absence of its controller.
Wright, et al. describe the control system at many levels, from power-only control to load-attenuating
control to controllers that engage the complexity of floating systems, wherein the inputs come from both
the top and the bottom (see Chapter 6 of Volume 2).
The objective of a wind power plant is to generate electricity and feed that power into the local grid,
which is connected to the continent-wide electrical grid, often described as the largest machine ever
built. The electrical generators and associated power electronics have their own dynamics that interact
with the individual machines, the intra-plant collection system, and the grid to which the plant is
connected. Muljadi and Gevorgian in Chapter 9 of Volume 2 provide insight into how the various types
of generators are modeled as well as how these dynamics feed up to the wind plant interconnection.
Miller and Stenclik in Chapter 10 of Volume 2 look outward from the interconnect and explain the issues
around modeling the plant interconnectivity with the regional grid. These two chapters present the
tremendous progress that has been made in managing the integration of very large variable generation
plants into an electrical system that must balance load and generation in real time. They also provide
some insight into the grid of the future, which is likely to be dominated by variable inputs with very little
classical inertia-dominated generation. Modeling this grid system and capturing the emerging
capabilities of wind plant control will be essential to meeting the challenge of high-penetration wind
energy.
Ning and Dykes in Chapter 8 of Volume 2 describe the full wind plant optimization problem and how
modeling and simulation addresses its challenges. Multidisciplinary analysis and
optimization techniques are borrowed from their aerospace origins and applied to the wind plant design
challenge. These modeling frameworks are beginning to pull together all the disparate technologies that
make up the subsystems of the wind turbine and a wind plant and bring them into a unified system so
that trade-offs can be assessed and significant innovative approaches to wind plant design, control,
operations, and maintenance can progress. It is only by understanding the principal elements of how to
model each of the subsystems that these larger full system optimization frameworks can be made to
work.
System optimization modeling at the power plant level, as well as at the turbine level, requires models
that capture the capital cost, maintenance costs, and financing costs (which reflect the risks and
uncertainties), as well as estimates of the financial return through energy production and sales. The
chapter on cost of energy modeling by Hand, et al. (Chapter 9 of Volume 1) brings out all the subtleties
of that process. Cost models are crucial for evaluating technology innovation opportunities and are also
essential elements of plant financial decision-making and for assessing the effects of local and national
policy.
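A commonly used simplified, annualized form of the levelized cost of energy is sketched below to show how those cost and production elements combine; the numbers are placeholders, and Hand, et al. treat the full financial machinery.

```python
# Simplified annualized levelized cost of energy (LCOE). All inputs are
# placeholder values for illustration only.
capex = 1500.0          # installed capital cost, $/kW
fcr = 0.07              # fixed charge rate, 1/yr (captures financing and risk)
opex = 40.0             # annual operating cost, $/kW-yr
capacity_factor = 0.42  # net of wakes, losses, and availability

annual_energy = capacity_factor * 8760.0        # kWh per installed kW per year
lcoe = (capex * fcr + opex) / annual_energy     # $/kWh
print(f"LCOE ~ {lcoe * 1000:.0f} $/MWh")
```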
Book Structure
The ability to compute solutions to the governing equations describing the physics of wind power plant
operation is growing by leaps and bounds. The entire breadth of the system is now routinely modeled in
the design process, and the sophistication of the models and computers that solve them continues to
grow as well. It is these computational simulations that help designers understand the complexity of
how the parts of the system interact, and hence allow them to solve problems of system interaction in
ways that could not be considered before these computational resources became available.
The breadth of the wind energy system problem has drawn the attention of major computational
science organizations, which have declared it a “grand challenge” problem. The scales of fluid
dynamics alone range from the boundary layer of wind turbine blades to the regional flows and large-
scale eddies of the planetary boundary layer. The structures and mechanical parts of a turbine include
flexible blades and towers up to 100 meters in length as well as the tribology of the interfaces in
bearings and gears at a fraction of a millimeter. Electrically, models must cover the range from the air
gaps within the generator to the electrical grid that wind plants must dynamically support.
Each of the subsystems must be simulated to capture the effects of design options accurately. Simplified
models of the subsystems and of the interactions between them must also be exercised to predict how
design options will perform at a system level. Sophisticated optimization algorithms and software are
applied to this complex system engineering challenge. The modeling and simulation approaches used in
each subsystem as well as the system-wide solution methods to optimize across subsystem boundaries
are described in this book. Chapters are written by technical experts in each field to describe the current
state of the art in modeling and simulation for wind plant design. Special attention is directed at the
fundamental challenges and issues still to be solved, so that the content extends beyond simply describing
current practice and provides long-lasting insight into the methods that will need to be developed
as the technology matures.
There are too many individual chapters to be included in a single volume. However, it is not entirely
natural to divide the book, because the topics are intricately interrelated. The separation of the content
into these two volumes is based on two main themes: Volume 1, Atmospheric Flows and
Wind Plants; and Volume 2, Turbines and Systems.