Basic Exercises
for:
Reflection Seismology using
Seismic Unix (CSM), Linux
and Perl
(GEOL 4068)
Fall 2013
LSU
Baton Rouge
Juan M. Lorenzo
Contents
Acknowledgements
Introduction
Perl (Hoffman, 2001) Practical Extraction and Report Language
Introduction to Perl
5) Use the login name and password Dr. Lorenzo gave you to log in.
6) After entering all the information, click Connect. You will then see all the
files.
7) Simply drag the file(s) into the folder where you want them.
Making sure that you're still connected in SSH, run XLaunch to configure
Xming to connect to lgc10. Choose one window, then make sure that Start
no client is checked. Click Next > Next > Finish. Log out of
SSH (File > Disconnect) and then reconnect by selecting the lgc10 profile.
If you are having problems connecting, open the lgc10 profile in SSH and
go to Edit > Settings. Under the Tunneling option on the tree, make sure that
the Tunnel X11 Settings option is checked. Make sure to save your profile.
You will know you correctly edited the .login file if it reads "DISPLAY:
undefined variable". If you get something with "error" in it, check to make
sure the setenv line is commented out.
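A quick way to confirm that your display is being forwarded once you reconnect (xclock is just a convenient test program that may or may not be installed on lgc10):
% echo $DISPLAY
% xclock &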
Shells provide the basic sets of instructions for handling the operating system,
and Perl is a mature, widespread computer language ideal for file
manipulation. Perl can serve as a simple glue to make diverse pieces of
software talk to each other.
Linux
The single most important advantage of linux is that the code is freely available, so
many people around the world participate continuously in its improvement.
I view Linux first as a communal, philanthropic exercise that takes
advantage of the cooperative nature of our species. Linux is also a
collection of instructions in software that allows you to use the hardware in
your computer.
History of Linux
Important Instructions in sh
Q. What is a shell?
Example: ls.
csh: the C-shell
The csh improves upon the sh because it introduces convenient
programming tools inherited from C.
In any operating system, linux programs and user directories are stored
in predictable locations.
Exercise
Logging in to your account
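If you are logging in from a linux or Mac terminal instead of the Windows SSH client, the command looks like the following (use the full host name given in class if plain lgc10 is not recognized):
% ssh yourname@lgc10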
answer "yes" to the question involving "authenticity". You should only see
this question the first time you log on from each machine.
Lgc10:/home/yourname %
Every file and directory in linux has assigned codes which dictate the
degree of authority by each user of the computer to alter each file. There
are four types of user status on linux. First is the overall supreme
administrator, known as root, who can do anything to any file on the
system. Next comes the specific original owner/user of each file. All users
can belong to one or several named groups of users. Finally, anyone who is
not specified as belonging to your group and is not the supreme administrator
is considered to belong to the outside world, or all other users. Within each
of the status levels (owner, group, world), binary codes or their letter
equivalents may be set to indicate whether a file may be only browsed
(read), modified (write), and/or executed as a program (executable).
Note that it is the files themselves that carry this important information with
them. The file permissions are consulted first to determine whether an
individual user has authority to manipulate the file in any way.
Read 4
Write 2
Execute 1
Read and write 6
Write and execute 3
Read, write and execute 7 (add all three numbers together)
For example:
% ls -l
My_perl_file    r--r--r--
chmod u+x My_perl_file
which only adds (+) the setting that gives the owner (u)
executing privileges.
The numeric form, by contrast, sets everything at once. For example,
chmod 600 My_perl_file
gives the owner read and write permission only; the last two zeros mean that group and other
privileges are nil. As you can see, the numeric form can alter permissions for
all three types of linux users at once.
Here is a summary list of options used for setting file permissions and
understanding file types on the linux system:
u   user        r   read
g   group       w   write
o   others      x   execute
a   all
+   add
-   remove
d   directory
l   link
Examples:
Letter symbols              Numerical symbols
chmod u+rwx                 chmod 700
chmod u+rwx,g+rw,o+x        chmod 761
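For example, to make your own shell script executable for yourself and then run it (my_first.sh is the script you will create later in these exercises):
% chmod u+x my_first.sh
% ./my_first.sh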
%cp /home/refseis13/pl/
%sftp loginname@remotemachinename
Once you are connected to the remote machine, the following basic
instructions will get you going:
help
(you can also type help once you are inside the remote machine)
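A short session might look like this (the directory and file names are only placeholders):
% sftp loginname@remotemachinename
sftp> help
sftp> cd pl
sftp> get remote_file.su
sftp> put local_file.sh
sftp> quit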
Deleting files
%rm filename
Finding files
% locate filename
Renaming files
%mv old_filename new_filename
A free linux shell scripting tutorial:
http://www.freeos.com/guides/lsst/
Example 1
%prog_name
#! /bin/sh
output=hello
echo $output

Example 2
%prog_name2
#!/bin/sh
echo "Enter the two numbers to be added:"
read n1
read n2
answer=$(($n1+$n2))
echo $answer
% gimp
Repetitive tasks
for action in eat sleep work    # the three values here are only example words
do
echo I $action
done
The variable called action has three potential values. Each value is a
word that is sent to the screen using echo within the do ... done set of
instructions. The $ sign in front of action causes its value to be sent to the
screen each time, following the word I.
When it comes to collating all your directories and their contents into a
single manageable file that keeps a record of the directory structure, use
the useful instruction called tar, as follows:
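For example (the directory and archive names are placeholders):
% tar -cvf my_work.tar my_directory        (collect the directory into one archive)
% tar -tvf my_work.tar                     (list the contents of the archive)
% tar -xvf my_work.tar                     (unpack the archive again)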
Concatenating files
When you have one file you would like to append to another, use the cat
instruction together with the append redirect (>>).
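For example (the file names are placeholders):
% cat file2 >> file1          (appends the contents of file2 to the end of file1)
% cat file1 file2 > both      (joins the two files into a new, third file)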
Seismic Unix
Introduction
%suxedit SH_geom_2s.su
Fourier Transform
>f24 (this graphs the strength of the frequency content at trace #24)
>h (provides help to the user)
All data traces have a "header" that consists of descriptive variables, e.g.
length of the data set, date it was collected etc.
Display
% suximage < SH_geom_2s.su (The < or redirect symbol sends the data
set file into this program)
Bandpass Filtering
(The | symbol or "pipe" streams the output of one program into the
input of the next program)
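For example, a bandpass filter piped straight into the display (the corner frequencies are only a starting guess; tune them for your data):
% sufilter < SH_geom_2s.su f=10,20,100,120 amps=0,1,1,0 | suximage &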
Exercise
Put all the above instructions into a script called "my_first.sh". Confirm
that this file runs correctly.
Notch Filtering
Notes: Verify your filter worked. Run suxedit and plot out the frequency
spectrum to examine whether a notch filter has been applied.
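A notch can be built with sufilter by zeroing a narrow band, for instance around 60 Hz powerline noise (the corner values here are only an illustration):
% sufilter < SH_geom_2s.su f=50,55,65,70 amps=1,0,0,1 > SH_notch.su
% suxedit SH_notch.su          (then use f followed by a trace number to inspect the spectrum)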
Exercise 2
Create a script that:
(1) reads the file SH_geom_2s.su, (2) removes frequencies below 120 Hz, (3)
applies automatic gain control to compensate for geometric spreading,
(4) plots the result to the screen, and (5) hand in a hardcopy or e-mail me a *.gif file by
the next time we meet.
Create an iterative set of instructions that will allow you to test the data
set for the best set of filters. The ground roll is in the lower frequency range
(~<= 80Hz). Try at least the following four filter sets:
for f in 60,70,200,220 70,80,200,220 80,90,200,220 90,100,200,220   # placeholder filter sets; adjust around the ~80 Hz ground-roll limit
do
sufilter < SH_geom_2s.su f=$f amps=0,1,1,0 | sugain agc=1 | suximage title="f=$f" &
done
For this exercise you are expected to hand in two scripts and one image.
The first script will show the interactive tests you conducted for different
filters. The second script will show the final set of filters that best remove
the ground roll but keep the rest of the data. An image of the best-filtered
data set with the ground-roll minimized is what I expect to be handed in by
the next lab. Make sure you understand the accompanying linux script
exercise.
min is the number of the first trace to kill and count is the number of
traces starting with min that will be deleted.
Compare the same file before and after traces 16, 17 and 18 have been
removed, e.g.
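For example (the output file name is a placeholder):
% sukill < SH_geom_2s.su min=16 count=3 > SH_killed.su
% suximage < SH_geom_2s.su &
% suximage < SH_killed.su &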
Reordering traces
tracf is the header value that is used to reorder the traces. In this case
tracf is the trace number. The negative sign implies that the reordered file
will have the traces ordered according to the decreasing value of tracf. So
if tracf = 1,2,3,...,24 in the input file, then tracf = 24,23,22,...,1 in the output file.
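A minimal example of this reordering (file names are placeholders):
% susort < SH_geom_2s.su -tracf > SH_reversed.su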
If only part of the data set needs to be used, for example only the first
half second, we can use suwind.
In this case we select all traces that have values of tracf
between 1 and 24 and all samples between time 0 s and half a second.
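For example (file names are placeholders):
% suwind < SH_geom_2s.su key=tracf min=1 max=24 tmin=0 tmax=0.5 > SH_windowed.su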
An example that shows how to kill traces, reorder and cut a window of
data from a certain data set
While logged into your lgc10 account, go to directory ~/refseis13 and
copy over to your local directory the following file: LSU1_1999_TJHughes.tz
The scripts that follow show the implementation of sukill, susort and
suwind.
Give me the reasons you think susort was used, why suwind was
used and why sukill was used. Please give me one reason for each. You
will need to image the seismic data to see how the files look BEFORE they
are affected by suwind, susort, and sukill as well as AFTER. The differences
should allow you to see why each program was used.
This is a drop-dead date and time with no extensions. You can answer in text
in three to four sentences only. But you will have to view the data and
perform sukill, susort and suwind. You do not have to send me any images
you created. The reasons you give will show that you understand what
occurred.
% sukeyword tracf
The output will appear on the screen explaining that tracf is the trace
number within the field record
If b or c are equal to 0, then their products with (i/j) are also equal to 0
and there is no change to the pattern of the header value within adjacent
shot gathers.
For example:
SHOT #1:  sx=0   gx=33 36 39   offset=33 36 39
SHOT #2:  sx=3   gx=36 39 42   offset=33 36 39
SHOT #3:  sx=6   gx=39 42 45   offset=33 36 39
(These headers are set with sushw, using key=sx,offset,gx,fldr,tracf, as shown in the scripts below.)
Step 1: set the sx field of the first 3 traces to 0, the second set of 3 traces
to 3, and the third set of 3 traces to 6; i.e. the shot stays at the same place for
the whole shot gather and only increments when a new shot is taken (i.e. every
3 traces).
Step 2: set the offset field of the first shot (first set of 3 traces) to
33,36,39, the second shot (next set of 3 traces) to 33,36,39, and the last shot
(third set of 3 traces) to 33,36,39.
| sushw \
Step 3: set the X coordinate of the geophone position to 33, 36, 39 for the
first shot; to 36,39,42 for the second shot (next 3 traces); and to 39,42,45 for
the last shot (final 3 traces).
| sushw \
#!/bin/sh
set -x
filename_in=1000.su
filename_out=1000_geom.su
# one sushw call per header word; the values follow Steps 1-3 above
sushw <$filename_in key=sx a=0 b=0 c=3 j=3 \
| sushw key=offset a=33 b=3 c=0 j=3 \
| sushw key=gx a=33 b=3 c=3 j=3 \
>$filename_out
or we can make a single call to sushw and place the variables together, in
its briefest form:
#!/bin/sh
set -x
filename_in=1000.su
filename_out=1000_geom.su
sushw <$filename_in \
key=sx,offset,gx,fldr,tracf \
a=0,33,33,1001,1 \
b=0,3,3,0,1 \
c=3,0,3,0,0 \
j=3,3,3,3,3 \
>$filename_out
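To see how these numbers generate the pattern described in Steps 1-3, it helps to know that sushw computes each header value roughly as a + b*(i mod j) + c*floor(i/j), where i is the trace index starting at 0 (check sudoc sushw for the exact formula). For the sx header this gives:
sx:  a=0, b=0, c=3, j=3
     i = 0,1,2  ->  sx = 0 + 0 + 3*0 = 0
     i = 3,4,5  ->  sx = 0 + 0 + 3*1 = 3
     i = 6,7,8  ->  sx = 0 + 0 + 3*2 = 6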
suchw is similar to sushw, but here we use existing header values to do the
math:
value of key1 = (a + b * value of key2 + c * value of key3) / d
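For example, the offset header can be computed from the geophone and source coordinates, following the formula above (file names are placeholders):
% suchw < filename_in key1=offset key2=gx key3=sx b=1 c=-1 > filename_out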
< filename_out
/home/refseis10/shell_exs/makecmp.sh
How to fix a data set with a variable time delay or a data set that has
false time breaks or how to cross-correlate two traces.
Cross-correlation describes the similarity between two time series. For
us, a trace consists of a series of amplitude values at regular intervals of time,
i.e. a time series. Mathematically, cross-correlation is like convolution, but
where neither of the traces is reversed prior to the steps involving shifting,
multiplication and addition (see the lecture PowerPoint presentation entitled
XCor for cross-correlation and the PowerPoint presentation entitled
CMP for convolution, both hyperlinked from the main syllabus pertaining
to this class).
Let's start by assuming that the geology does not change significantly
between two adjacent shots. Then, if for one shot gather the recording
accidentally starts at a different time with respect to the shot going off
than it does for another shot gather, the true delay must be reset. Why?
Well, whereas the delay keyword in the headers will have the same value for both gathers,
the data in one of them will be at the wrong time. We must change the delay header value so
that the data appear at the correct time.
(Figure: sketch of two shot gathers, one with a gap of NO DATA at the top of the trace because of a wrong delay value, and the delrt correction needed to bring the data back to the correct time relative to T=0.)
Once the data are corrected for this wrong delay value, we must make
all the shot gathers have the same length in time, starting at tmin=0 (shot
time). You will find, however, that before you can do that, the data you have
corrected to perhaps a later time now have missing samples. What to do?
To see how this might be done, copy to your directory, then modify
accordingly and run the following script, which is located in
/home/refseis10/shell_exs/change_delay.sh.
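The course script is the full treatment; a minimal sketch of just the header correction looks like the following (the file names and the 100 ms value are placeholders; delrt is stored in milliseconds):
% sushw < shot_late.su key=delrt a=100 > shot_fixed.su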
I recommend that once you have populated your header values for offset
and CDP you should sort the data before sending it to NMO.
After you obtain an initial brute stack you are ready to start refining
many of your processing parameters. It is during this stage that your sunmo
call can read the results of additional velocity analyses. More on that later.
/home/refseis10/shell_exs/study_NMO_STACK.sh
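The course script above contains the complete workflow; a minimal brute-stack sketch of the same idea (the velocity-time pairs are placeholders to be refined by later velocity analyses) is:
% susort < 1000_geom.su cdp offset \
| sunmo vnmo=1500,1800 tnmo=0,0.5 \
| sustack \
| suximage title="brute stack" &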
Perl (Hoffman, 2001) Practical Extraction and Report
Language
Introduction to Perl
There are certainly better ways to write code, but here are my reasons
to use perl:
(3) Perl can easily incorporate shell programming scripts. Perl can be
used as a glue to organize a computational workplace. Perl can be used to
communicate between different modular command-line Open Source
programs.
(4) Perl can be used for more complicated programs that require setting
up functions or sub-routines that help keep complicated programs modular
and simple
(5) Handling text files and their content is carried out more easily than
with other programs
When you know of an easier way that will save you time and frustration.
Yes, even using well-known libraries such as GTK, Qt, and of course the
old, classical Tk interface.
Learning Perl on your own
A great place to start is to use the online tutorials in linux. Use google to
find a Perl tutorial, e.g.: http://www.perl.com/pub/2000/10/begperl1.html
Also, use Perl itself that comes with documentation. Check this out:
% info perl
Tutorials
Notes: Use the up and down arrow keys to move to the line you want to
select
In order to give you some courage to start working with this new
language, especially if you have not worked with one extensively before,
let's consider writing a program that is classical across most beginning tutorials
and that provides a stimulating output to the terminal.
#!/usr/bin/perl
# this program writes a greeting to the screen
print("\nHello World\n\n");
(1) The first line denotes the location of the perl binary
(2) From now on, all items that are output to the screen will be included
in parentheses and double inverted commas (double quotes). Double quotes
permit Perl to interpret the different items: for example, some items are
read as text and others as special characters when needed. (Try out single
quotes just to see what would happen.) If you want to nullify the value of a
special character, put a \ before it. For example, \\n makes \n come out
just like the characters you see. (Try it out.)
(3) the \n is a shorthand code that means include a new line when the
rest of the text is written out. There is a new line before the start of writing
and there are two new lines after the start of writing.
(4) All lines except the first and the line commented out end with a ;
denoting the end of an instruction. Omission of the ; is a very common
mistake that we all make.
(5) The symbol # on the second line means that the words following it are
informational for the reader and will not be considered by Perl to be a
meaningful instruction.
If you want to read or write data on the hard drive, you must first tell the
system you are ready to access a part of the hard drive. This is done by
opening a FILEHANDLE, or a file address. You must also provide a name.
The FILEHANDLE should be closed when you are done reading or writing to
the file.
#!/usr/bin/perl
open (FILE, "filename") || die ("can't open this file $!");
$i = 0;
while ($read = <FILE>) {
    $line[$i] = $read;
    $i = $i + 1;
}
$imax = $i;
close (FILE);
for ($i=0; $i<$imax; $i=$i+1) {
    print ($line[$i]);
}
$! is a special operator indicating a system error has occurred.
<> is the line-reading operator which continues by itself until the end
of the file is encountered
(1) remember that lines of data may have invisible characters that you
may want to remove
(2) you can not read a file unless you know its internal makeup.
#!/usr/bin/perl
open (FILE, "> filename") || die ("can't open this file $!");
$imax = 3;
for ($i=0; $i<$imax; $i=$i+1) {
    $line[$i] = $i;
    print FILE ($line[$i]);
}
close (FILE);
Note that the only important difference between reading and writing is
that we have a redirect sign > before the filename.
Documentation in Perl
There is another way of documenting perl programs that can later be
used to automatically generate a formatted description of the program for
newcomers. We call this using perlpod, which stands for perl's "plain old
documentation" format, an html-like way of embedding documentation
within a perl script.
For example, here is a version of the same program above with a more
sophisticated and professionally documented body. Make sure you leave a
blank line before the first line that starts with =.
#! /usr/bin/perl
=pod
=head1 NAME
My first program
=head1 SYNOPSIS
perl Hello_World2.pl
=head1 DESCRIPTION
=head1 ARGUMENTS
None
=cut
print("Hello World\n\n");
=pod
=head1 AUTHOR
=head1 DATE
Sept-16-2013
=head2 TODO
=cut
=pod
=head3 NOTES
Although this is just my first program, I can use it as a template with
=cut
(that is, other than keeping notes on HOW the program works for the
next user?)
Yes, there are some advantages to using perlpod that outweigh the extra
time and thought required to place the comments inside your program. One
advantage is that it is relatively easy to convert your documentation (just the
documentation and not the rest of the program) into a different format, such
as PDF, or MSWord.
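For example, the documentation embedded in Hello_World2.pl can be extracted with the standard pod tools that come with perl:
% pod2text Hello_World2.pl
% pod2html Hello_World2.pl > Hello_World2.html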
Data Types
A perl variable is a place to store a value; the value itself is called a literal.
For example:
#!/usr/bin/perl
$number = 2;
$output_text = "Hello world";
When writing out text, note that text consists of individual characters
strung together in a line, including minus signs, plus signs, spaces, tabs,
end-of-line-characters, etc. A string of characters is just that, a string. In
the example above we assign "Hello world" to the variable $output_text.
Lists of Variables (data) or Arrays (variables)
If you want to include various lines of text, it might be cleaner to break
up the text into different segments. In order to handle this we can create a
list of lines of text. The list consists of many scalar literals which are
assigned to ordered positions in the array.
#!/usr/bin/perl
# the three lines of text below are only example values
@output_text = ("Hello world\n", "This is my first program\n", "Goodbye\n");
print (@output_text);
List variables carry the @ sign at the beginning of their name and
will print out their whole content, as in the example above. The list is
ordered starting at 0 and not at 1.
Yes, you could also write the list with a different syntax:
#!/usr/bin/perl -w
print ("@output_text[2]\n");
print ("$output_text[2]\n");
There are a couple of special arrays which we will need later when we write
functions and perl programs that interact with the user, that is, they
require input from the user such as a number or a file name on the
command line: e.g.,
%perl sum.pl 1 2
The first variable is called @ARGV and keeps track of the order of the
values that follow the name of the program above (e.g., @ARGV[0], and
@ARGV[1]).
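A minimal version of the sum.pl program invoked above might look like this (a sketch; the course version may differ):
#!/usr/bin/perl -w
# NAME: sum.pl -- add the two numbers given on the command line
$sum = $ARGV[0] + $ARGV[1];
print ("$sum\n");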
Scalars
Scalars are single-value data types. That is, only one value is assigned to
that variable and the value can be a string or a number. Scalars are
indicated by a $ sign at the beginning of the variable.
There is one special variable in perl that is useful to know. Commonly
you will want to know the number of values in your array. The length of your
array, or the number of values in your array, is equal to the largest
index plus 1. For this purpose there is a special scalar variable in perl you
can use. This special variable has a literal value equal to the last index in
the array:
#!/usr/bin/perl -w
$array_size = $#output_text + 1;
print ("\t\t@output_text[$#output_text]\n");
Hashes
Useful Operations
For-loop/Do-loop in perl
Do-loops (herein called for-loops) are a construct inherited from Fortran (and bash).
In Perl there is a simple syntax to handle repetitive tasks that is very similar
to C, Fortran and Matlab. After all, computers ARE supposed to be used
for doing repetitive tasks very fast. Here is how we do a loop:
#!/usr/bin/perl
# NAME:
$max = 10;
for ($i=0; $i<$max; $i=$i+1) {
    $output_number_array[$i] = $i+1;
}
Inside the parentheses, after the for, there are three instructions. The
first instruction $i=0 provides the START of the loop. That is, the first
instruction is the first thing that is carried out in the loop. Remember this!
The second time the loop is run, the third instruction is carried out, i.e.
the $i value is updated by adding 1 to the previous value. At that point the
second instruction must be met for the calculations to enter the loop again.
If the second instruction is not met then the loop is exited and $i
retains its previous value from the end of the last loop. To be safe, you can
examine the value of $i when the loop is exited.
Note that we can work the index in reverse as well and that the values of
$i can increment by more than just 1 each time.
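For example (a small sketch):
#!/usr/bin/perl
for ($i=10; $i>=0; $i=$i-2) {
    print ("$i\n");      # prints 10 8 6 4 2 0, one value per line
}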
Perl operators
Various symbols exist in perl that are very similar to operators in other
programming languages. Operators can be of several types depending on
whether you are dealing with NUMBERS or CHARACTER STRINGS.
Arithmetic
+ addition
- subtraction
* multiplication
/ division
Numeric comparison
== equality
!= inequality
String comparison
eq equality
ne inequality
lt less than
gt greater than
Logical
&& (and)
|| (or)
! (not)
Miscellaneous
= assignment
. string concatenation
x string multiplication (repetition)
$a += 1; # same as $a = $a + 1
$a -= 1; # same as $a = $a - 1
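For example (a small sketch of the two string operators):
$title = "Seismic" . " " . "Unix";      # concatenation gives "Seismic Unix"
$bar = "-" x 10;                        # repetition gives "----------"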
Conditional if
#!/usr/bin/perl
$value[1] = 1.1;
$value[2] = 1.0;
if ($value[1] > $value[2]) {      # the test used here is only an illustration
    print ("$value[1] is greater than $value[2]\n");
}
else {
    print ("$value[1] is not greater than $value[2]\n");
}
All that you have learnt prior to perl regarding the linux OS and shell can
still be used within perl. Say, for example you wish to generate the following
working set of directories:
/home/loginID
    /jpg
    /tiff
#!/usr/bin/perl
# create the /jpg and /tiff working directories sketched above
$HOME = "/home/loginID";
$DATA = $HOME;
@directory[1] = $DATA."/jpg";
@directory[2] = $DATA."/tiff";
system (" \\
mkdir -p @directory[1] @directory[2] \\
");
Example 1
Note that (1) the second value in each successive list of filter
parameters doubles: 6,12,24,48; (2) the gap (Hz) between the second and third
values is kept at 30 Hz.
#!/usr/bin/perl
# NAME: filter_test.pl
$max_number_ofcases = 4;

=pod
 Each case follows the pattern:
 $f1 = 3 ;
 $f2 = $f1 * 2;
 $f3 = $f2 + 30;
 $f4 = $f3 * 2;
=cut

=pod
 Build the list of filter parameters, doubling $f1 for each new case:
=cut
for ($case_number=1, $f1=3; $case_number<=$max_number_ofcases;
     $case_number++, $f1=$f1*2) {
    $f2 = $f1 * 2;
    $f3 = $f2 + 30;
    $f4 = $f3 * 2;
    $filter_parameter_array[$case_number] = ("$f1,$f2,$f3,$f4");
    print("$filter_parameter_array[$case_number]\n\n");
}

=pod
 Build and run one Seismic Unix instruction for the first case
 (the program and input file name here are only placeholders):
=cut
$instructions = (" sufilter < SH_geom_2s.su \\
f=$filter_parameter_array[1] \\
| suximage \\
");
system($instructions);
print($instructions);
Example 2:
For example, (a) the parameter names can be changed into terms that are
more meaningful geophysically and more self-explanatory to the user;
(b) the package name can also be changed when a new version (or instance)
is used inside a main perl program;
(c) The user is NOT REQUIRED to use all the parameters. Parameters
that are not needed do not need to be called. In the seismic unix family of
programs all the parameters have set defaults which the user can not see
but can read about in the manual. As written now, the default values can be
changed by the user inside the package.
(d) if there are any subroutines that bear a similarity in their behavior to
other subroutines, the common behavior can be factored out and shared
among the packages. Of course this requires more observant use of the
programming language and more planning ahead about what the different
subroutines do, but this is not too hard to do with Seismic Unix programs
because they are written to be largely independent of each other
and so do not share a lot of functionality.
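As an illustration only (this is not the course package itself), a small package with one default parameter and a setter method might look like this:
package Bandpass;
sub new {
    my ($class) = @_;
    my $self = { f => "5,10,80,100" };     # default corner frequencies the user never has to set
    return bless ($self, $class);
}
sub f {                                    # call with a value to override the default
    my ($self, $value) = @_;
    $self->{f} = $value if defined $value;
    return $self->{f};
}
1;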
Example 3
Modify Xamine.pl in order to kill bad traces. First you will have to
circumvent one problem:
The large number of files in the data set and the location of the different
traces that need to be killed.
where min is the number of the first trace to kill and count is the number
of traces starting with min that will be deleted.
Compare the same file before and after traces 16, 17 and 18 have been
removed, e.g.
Now we can automate this process further for the case of a real data set
by using the following program to display and select the traces:
Select_tr_Sukill.pl
Sukill.pl
Reverse_Polarity.pl
Frequency-wavenumber filters
Our data set may contain different types of noise: some random and some
coherent. Coherent noise is sometimes identifiable as linear because
unwanted data is aligned along one or more characteristic slopes and the
same noise is superimposed on good signal. In the T-X domain of shot gather
data, a slope has units of s/m or the inverse of velocity. Two-dimensional
integral transformations of these data map into the frequency-wavenumber
domain (f-k). At least one useful simplification occurs during the integral
transformation, in that linear X-T data that share a common slope are
mapped on to a linear arrangement in the f-k domain as well. If the slope of
the noise is different enough from that of the good signal, the noise and the
signal separate out into distinctly different regions of the f-k domain.
Sudipfilt.pl
This collection of Seismic Unix flows will produce 6 plots across your
screen. Their size and location can be changed within the Perl program, so
you are encouraged to copy it over to the equivalent directory in your work
area.
The top row of three panels shows the data before it is f-k filtered and the
bottom row shows them after they have been filtered.
Currently the key dip-filter settings include two sets of four values each.
The inner two values mark the region of interest and the outer two mark the
limiting transition zone. The units are f-k samples per trace, for ease of use,
rather than m/s. As when we estimated bandpass filter parameters, we must
be careful not to choose outer dip-filter values that are too close numerically
to the inner ones, or we will generate unwanted noise.
The inner region can be assigned for removal or not as can the region
outside the outer limits. As with the bandpass-filter parameters we do this
by applying weights of 1 through 0 where the following example shows a
zeroing effect to the inner-defined region:
@sudipfilter[1] = (" sudipfilt \\
dt=1 dx=1 \\
amps=1,0,0,1 \\
slopes=-11,-7,-4,-3 \\
");
You should try to change the amp values to the following and note the
differences:
@sudipfilter[1] = (" sudipfilt \\
dt=1 dx=1 \\
amps=0,1,1,0 \\
slopes=-11,-7,-4,-3 \\
");
first_val is first value of the header word for each group of traces or
trace gather.
intra_gather_inc increments the header word value between traces in, for
example, a shot gather.
inter_gather_inc increments the header word between adjacent (shot)
gathers
i trace number within the whole file: e.g., 0,1,2,3,4,5,6,7,8
Note that i starts at 0. We do not need to set i in the
following example.
gather_size the number of traces in each gather, i.e. how many traces to jump between shots
sushw->name(sx,offset,gx,fldr,tracf);
Step 1: set the sx field of the first 24 traces to 0, the second set of 24
traces to 300, and the third set of 24 traces to 600; i.e. the shot stays at the same
place for the whole shot gather and only increments when a new shot is taken
(i.e. every 24 traces).
sushw->name(sx);
sushw->first_val(0);
sushw->intra_gather_inc(0);
sushw->inter_gather_inc(300);
sushw->gather_size(24);
Step 2: set the offset field of the first shot (first set of 24 traces) to
450,750,1050,..., the second shot (next set of 24 traces) to 450,750,1050,...,
and the last shot (third set of 24 traces) to 450,750,1050,.... Note that the
offsets DO NOT change between shot gathers.
sushw->name(offset);
sushw->first_val(450);
sushw->intra_gather_inc(300);
sushw->inter_gather_inc(0);
sushw->gather_size(24);
Step 3: set the X coordinate of the geophone position to 450, 750, 1050, ...
for the first shot; to 750, 1050, 1350, ... for the second shot (next 24 traces);
and to 1050, 1350, 1650, ... for the last shot (final 24 traces).
sushw->name(gx);
sushw->first_val(450);
sushw->intra_gather_inc(300);
sushw->inter_gather_inc(300);
sushw->gather_size(24);
#! /usr/bin/perl
# instantiation of programs
use SU;
use SeismicUnix qw ($in $out $on $go $to $suffix_ascii $off $suffix_su);
my $log = new message();
my $run = new flow();
my $setheader = new sushw();
my ($DATA_SEISMIC_SU) =
System_Variables::DATA_SEISMIC_SU();
my (@flow);
$file_in[1] = 'All_clean_kill';
$sufile_in[1] = $file_in[1].$suffix_su;
$inbound[1] = $DATA_SEISMIC_SU.'/'.$sufile_in[1];
$outbound[1] = $DATA_SEISMIC_SU.'/'.
$file_in[1].'_geom'.$suffix_su;
$setheader ->clear();
$setheader ->first_val(450,0,450,-100,1001,1);
$setheader ->intra_gather_inc(300,0,300,0,0,1);
$setheader ->inter_gather_inc(300,300,0,0,1,0);
$setheader ->gather_size(24,24,24,24,24,24);
$setheader ->name('gx','sx','offset','scalco','fldr','tracf');
$setheader[1] = $setheader ->Step();
# create a flow
@items = ($setheader[1],$in,$inbound[1],$out,$outbound[1]);
$flow[1] = $run->modules(\@items);
# log process
print "$flow[1]\n";
#$log->file($flow[1]);
You probably have noted that the script modifies some other header
values, such as scalco, tracf, and fldr.
fldr and tracf are used as additional counters within the gathers
(tracf=1,2,3,4,...,24) and between gathers (fldr=1001,1002,1003, etc.)
MATLAB
Create a matrix of numbers
a =[1 2 3 4 5]
a=
1 2 3 4 5
Sin function
>> sin(a)
ans =
0.841470984807897 0.909297426825682
0.141120008059867 -0.756802495307928 -0.958924274663138
Plot function
plot(ans)
Make a plot showing at least 10 full sine waves within the plot.
What formula did you end up with? Complete by September 16, at 12.30
T^2 = T0^2 + X^2 / V1^2 ,   where T0 = 2h / V1
Plot for T= 0 to 1 seconds and X= 0 to 1000 m; Plot for V1=1500 m/s
You will need to generate matlab code to answer this exercise. E-mail me
the resulting code. E-mail me your answer to the question as well. Please e-
mail me this exercise by Friday, 23 September at 12.30 p.m. Hint: You will
need to learn how to use the for ... end construction so that you can
automatically add in 100 terms.
Take the code you generated in exercise 3 and add a constant phase to
each of the frequency components and plot your results. Try adding a phase
value of 90, 180 and 270 degrees to each of the frequency components. Plot
each case. MATLAB CODE . Now add the subplot(2,2,2) case where phase
value = 360 degrees.
Take the code you generated in exercise 3 and add a phase to each of the
frequency components that is linearly dependent on frequency. Use the
following relation:
MATLAB CODE
Exercise 2- Matlab
% hyperbola
xmin=-500;xmax=500;tmin=0;tmax=0.4;
x=xmin:1:xmax; % meters
h=150;  % reflector depth in m (placeholder value)
%first plot
%plot hyperbola
V1=1500;
for V1=1000:500:3000
  t=sqrt((2*h/V1)^2 + (x.^2)/(V1^2));  % reflection hyperbola
  subplot(2,1,1)
  plot(x,t)
  axis ij;
  hold on
end
t = -pi/4:.001:pi/4;
Amplitude = zeros(size(t));   % initialize the sum before the loop
phase = 0;
for freq=2:100
  Amplitude = Amplitude + cos(t .* 2 .* pi .* freq + phase);
end
subplot(2,2,1)
plot(t,Amplitude/100)
title('phase=0 deg')

phase = pi/2;
Amplitude = zeros(size(t));
for freq=2:100
  Amplitude = Amplitude + cos(t .* 2 .* pi .* freq + phase);
end
subplot(2,2,2)
plot(t,Amplitude/100)
title('phase=90 deg')

phase = -1 * pi;
Amplitude = zeros(size(t));
for freq=2:100
  Amplitude = Amplitude + cos(t .* 2 .* pi .* freq + phase);
end
subplot(2,2,3)
plot(t,Amplitude/100)
title('phase=-180 deg')
% looking at phase that increases linearly with frequency
t = -pi/4:.001:pi/4;
m = 0;                          % phase slope in radians per frequency step
Amplitude = zeros(size(t));
for freq=2:100
  Amplitude = Amplitude + cos(t .* 2 .* pi .* freq + m .* freq);
end
subplot(2,2,1)
plot(t,Amplitude/100)

m = pi/2 /100;
Amplitude = zeros(size(t));
for freq=2:100
  Amplitude = Amplitude + cos(t .* 2 .* pi .* freq + m .* freq);
end
subplot(2,2,2)
plot(t,Amplitude/100)

m = 20*pi/100;
Amplitude = zeros(size(t));
for freq=2:100
  Amplitude = Amplitude + cos(t .* 2 .* pi .* freq + m .* freq);
end
subplot(2,2,3)
plot(t,Amplitude/100)
Extras
(CONTENTS of start.sh: )
--leave xedit