
UNIT-01

Here are some reasons for studying the concepts of programming languages:

1. Understand How Languages Work


By learning the core concepts, you’ll know how programming languages are designed and
function. It helps you understand the “why” behind language features like loops, functions,
or data types.

Example: why some languages require variables to be declared (like C) while others (like Python) don't.
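The contrast can be sketched directly. This is an illustrative snippet, with the C behaviour shown only in comments:

```python
# In C, a variable's type must be declared before use:
#     int count = 10;   /* type fixed at compile time */
# In Python, a name is simply bound to a value; its type is
# determined at run time and may even change:
count = 10                     # bound to an int
print(type(count).__name__)    # -> int
count = "ten"                  # the same name can later hold a str
print(type(count).__name__)    # -> str
```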

2. Make Better Language Choices


There are hundreds of programming languages, and each one has strengths and
weaknesses. Knowing the principles helps you choose the right language for the right task.

Example: Use Python for data science, JavaScript for web development, and C++ for
system programming.

3. Write Better Code


Understanding language concepts improves your coding skills. You’ll learn to write clean,
efficient, and bug-free code, regardless of the language you use.

Example: Knowing about recursion and type systems helps you solve complex problems
more efficiently.

4. Learn New Languages Easily


When you know the core principles, learning new languages becomes easier. You’ll
recognize that many languages share similar concepts, just with different syntax.

Example: If you know loops in C, you’ll easily understand loops in Python, Java, or any
other language.

5. Build Your Own Language or Tools


Studying these concepts can help you create new programming languages or improve
existing ones. It also helps in creating compilers, interpreters, and tools that work with
code.

Ex: Developers created Python to make programming more readable and simple.

6. Improve Program Security


Understanding concepts like memory management, type safety, and exception handling
helps you write more secure code that avoids common vulnerabilities.

Example: Knowing how memory works helps you avoid buffer overflow attacks.
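The difference between checked and unchecked memory access can be sketched in Python (a hedged illustration; C's unchecked behaviour is described only in the comments):

```python
# A C write past the end of a buffer silently overwrites adjacent
# memory (the basis of a buffer-overflow attack). A bounds-checked
# language raises an error instead of corrupting memory:
buffer = [0] * 4

try:
    buffer[10] = 99           # out-of-bounds write
except IndexError as err:
    print("caught:", err)     # the runtime stops the bad access
```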
7. Increased Ability to Express Ideas

Awareness of a wider variety of language constructs gives you more ways to express ideas and algorithms, even in languages that do not support those constructs directly.

Programming Domains and Their Associated Languages
Different programming domains refer to the areas where programming is applied. Each
domain has its specific needs, and different languages are better suited to solve different
types of problems.
Let’s explore the main programming domains and the languages commonly used in each.

1. Scientific and Data Analysis Domain

• This domain focuses on scientific research, mathematical computations, and data


analysis.
• In the 1940s, the first computers were invented for scientific applications.
• These applications require large numbers of floating-point computations.
• Fortran was the first language developed for scientific applications.

Common Tasks:
• Performing calculations
• Analyzing large datasets
• Creating visualizations

Popular Languages:
• Python – Easy to use, rich libraries (NumPy, Pandas)
• R – Built for statistical analysis
• MATLAB – Used for mathematical modeling
• Fortran – Used in scientific computing (older but still used)

2. Business and Enterprise Domain

• This domain focuses on creating software for businesses, like ERP systems, CRM
tools, and financial software.
• The use of computers for business applications began in the 1950s.
• Special computers were developed for this purpose, along with special languages.
• The first successful high-level language for business was COBOL

Common Tasks:
• Building large-scale applications
• Managing databases
• Automating business processes
Popular Languages:
• Java – Common in enterprise applications
• C# – Popular for Windows applications
• SQL – For managing databases
• Python – For automation and backend

3. Artificial Intelligence and Machine Learning Domain

• This domain focuses on creating intelligent systems that can learn from data.
• AI programs manipulate symbols rather than numbers.
• Symbolic computation is more suitably done with linked lists than arrays.
• LISP was the first widely used AI programming language.

Common Tasks:
• Building AI models
• Implementing machine learning algorithms
• Natural language processing (NLP)

Popular Languages:
• Python – Most popular for AI/ML (TensorFlow, PyTorch)
• R – For data analysis and statistical modeling
• Java – Used in large-scale ML systems
• Lisp/Prolog – Used in AI research

4. Systems Programming Domain

• This domain involves creating low-level software that interacts with hardware.
• The operating system and all of its programming support tools are collectively
known as system software.
• System software needs to be efficient because it is in almost continuous use.

Common Tasks:
• Building operating systems
• Developing device drivers
• Creating embedded systems

Popular Languages:
• C – Used for OS and hardware-level programming
• C++ – Advanced system programming
• Rust – A modern language for safe systems programming
• Assembly – For direct hardware interaction

5. Web Development Domain


This domain focuses on building websites and web applications.

Common Tasks:
• Designing user interfaces (UI)
• Creating backend servers
• Managing databases

Popular Languages:
• HTML/CSS/JavaScript – Frontend web development
• PHP – Backend development
• Python (with Django/Flask) – Backend development
• JavaScript (Node.js) – Full-stack development
• Ruby (Rails) – Backend framework

Language Evaluation Criteria


When we evaluate a programming language, we need to check how good it is for solving a
specific problem. The Language Evaluation Criteria are the qualities or features we use to
judge how effective, efficient, and easy a language is.
Let’s break down the main criteria and the steps involved in evaluating a language.

Main Criteria to Evaluate a Language:


Here are the key factors used to evaluate a programming language:

1. Readability
How easy is it to read and understand the code written in the language?
1. Overall Simplicity
o Fewer basic constructs improve readability. Complex languages with too
many features can confuse programmers, especially if subsets of features vary
between authors.
o Feature multiplicity (e.g., multiple ways to increment variables in Java:
count++, ++count, count += 1) and operator overloading (e.g., defining + for
unconventional uses) can hinder clarity.
2. Orthogonality
o Orthogonality refers to the ability to combine a small set of constructs in
consistent ways. For example, VAX assembly language allows operands in
both registers and memory, while IBM mainframes use separate instructions
for similar tasks.
o While orthogonality reduces exceptions and simplifies learning, excessive
orthogonality (as in ALGOL 68) can lead to unnecessary complexity by
enabling too many combinations.
3. Data Types
o Adequate data types enhance readability. For instance, Boolean types clarify
intent (timeOut = true) compared to numeric flags (timeOut = 1).
4. Syntax Design
o Clear and consistent syntax aids readability. Languages like Ada, which use
descriptive endings (end if, end loop), are more readable than those relying
on generic symbols like braces {} in C.
o Ambiguity in syntax (e.g., static in C having multiple meanings) or allowing
special words as variable names (e.g., Do in Fortran) can confuse readers.

2. Writability
How easy is it to write code in the language to solve a problem?
1. Simplicity and Orthogonality
o Fewer primitives and consistent rules improve writability. Programmers can
learn and use constructs effectively. However, excessive orthogonality can
allow errors to go undetected, making debugging harder.
2. Support for Abstraction
o Abstraction simplifies program development by hiding details. Languages
supporting process abstraction (e.g., subprograms for reusable operations)
and data abstraction (e.g., classes for binary trees in Java) improve writability
compared to those requiring repetitive and manual implementations (e.g.,
Fortran 77).
3. Expressivity
o Expressive languages provide convenient syntax for common tasks. For
instance, count++ in C is more concise than count = count + 1. Similarly,
constructs like Ada’s and then operator for short-circuit evaluation and Java’s
for loops enhance writability by reducing verbosity.

3. Reliability
Does the language allow you to write bug-free, secure, and stable code?
Type Checking
Detecting type errors during compilation or execution is essential. Compile-time type
checking, as seen in Java, reduces runtime errors. Older versions of C lacked type checking
for function parameters, leading to errors that were hard to diagnose. Modern C has
addressed this issue.
Exception Handling
The ability to intercept and handle runtime errors enhances reliability. Languages like Ada,
C++, Java, and C# provide robust exception-handling mechanisms, unlike C and Fortran.
Aliasing
Aliasing occurs when multiple names refer to the same memory location, increasing
complexity and error risks. Some languages limit aliasing to improve reliability.
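Aliasing is easy to demonstrate in Python, where assignment binds a second name to the same object; this is a minimal sketch of the risk:

```python
# Two names bound to the same mutable object are aliases:
# a change made through one name is visible through the other.
a = [1, 2, 3]
b = a                 # b is an alias of a, not a copy
b.append(4)
print(a)              # -> [1, 2, 3, 4]: a changed too

c = list(a)           # an explicit copy breaks the alias
c.append(5)
print(a)              # a is unaffected by changes to c
```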
Readability and Writability
Languages that allow natural expression of algorithms lead to fewer errors. Readable and
writable code is easier to maintain and modify, enhancing reliability.

4. Efficiency
How fast does the language execute programs? How much memory does it use?

5. Portability
Can the language run on different platforms and devices without much modification?

6. Cost
What is the cost of using the language?
Training Programmers
Simpler and more orthogonal languages reduce training costs.
Writing Programs
Languages closely aligned with application needs lower development costs.
Compilation Costs
Faster compilation with minimal optimization suits students, while optimized code is
preferred in production.
Execution Costs
Design influences runtime efficiency. Languages with fewer runtime checks execute faster.
Implementation System Cost
Free or affordable systems, as with Java, encourage adoption.
Reliability Costs
Failures in critical systems can lead to high financial and reputational losses.
Maintenance Costs
Readability is crucial for efficient maintenance, as it often involves developers other than the
original authors. Maintenance can cost up to four times the development expense.

Steps Involved in Language Evaluation Criteria:


Here’s a step-by-step process for evaluating a programming language:

Step 1: Identify the Purpose of the Language


First, understand why the language was created and what problems it solves.
Example:
• C is great for system programming (like OS development).
• Python is great for data analysis and web development.

Step 2: Analyze the Language Syntax and Features


Look at the syntax and features of the language to see how readable and writable it is.
Example:
• Python has simple syntax and is easy to read.
• C++ has more complex syntax, which makes it harder to read and write.

Step 3: Test the Language’s Performance


Run sample programs in the language to see how fast they execute and how much memory
they use.
Example:
• C++ is generally faster than Python because it is compiled to native machine code.
• Python is slower but more flexible and easier to use.

Step 4: Check for Reliability and Security


Look at how the language handles errors and security issues.
Example:
• Java is more reliable than C because it has automatic memory management and
built-in exception handling.

Step 5: Evaluate Portability and Flexibility


Check if the language can run on different platforms and if it can be used for different types
of projects.
Example:
• JavaScript can be used for both frontend and backend development.
• C is mainly used for system-level programming.

Step 6: Calculate the Cost


Consider the cost of using the language, including:
• Development time
• Maintenance
• Hardware resources
Example:
• Python is cheaper in terms of development time.
• C might require more time and effort but produces faster programs.
Influences on Language Design
The design of a programming language is influenced by many factors, such as computer
architecture and programming methodologies. These factors shape how a language looks,
what features it has, and how it behaves.

1. Computer Architecture Influence


Programming languages are often designed to match the way computers work. Since
computers are built using a specific architecture, languages must be designed to efficiently
communicate with the hardware.
The main computer architecture that influenced language design is the Von Neumann
Architecture.

What is Von Neumann Architecture?

The diagram shows the basic structure of the Von Neumann Architecture:
1. Memory stores both data and instructions.
2. The Processor includes:
o Control Unit: Manages instruction flow.
o Arithmetic Logic Unit (ALU): Performs calculations and logical operations,
storing results in the Accumulator.
3. Input and Output devices allow data to enter and leave the system.
Data flows between memory, the processor, and input/output, following a step-by-step
process.
How Does It Influence Language Design?
• Languages like C, Java, and Python are influenced by this architecture.
• These languages use variables to store data in memory and statements to execute
instructions in sequence.
• The concept of loops, conditional statements, and functions matches how a CPU
processes instructions.

Example:
When you write a for loop in C:
for (int i = 0; i < 10; i++) {
    printf("%d\n", i);
}
This loop matches the step-by-step execution process of the CPU in Von Neumann
machines.

2. Programming Methodologies Influence


Programming methodologies refer to different ways of thinking about how to solve
problems with code. Over time, different approaches to programming have influenced how
languages are designed.
Let’s look at three main programming methodologies:

1. Procedural Programming
This was one of the first methodologies. It focuses on writing step-by-step instructions to
solve a problem.

Influence on Language Design:


Languages like C, Pascal, and Fortran were designed based on this methodology.

Example:
In C:
#include <stdio.h>

int main(void) {
    printf("Hello, World!\n");
    return 0;
}

2. Object-Oriented Programming (OOP)


As software systems grew more complex, a new methodology called OOP was introduced. It
focuses on creating objects that represent real-world entities.
Influence on Language Design:
Languages like Java, C++, and Python were designed based on this methodology.

Example (In Java):


class Car {
    int speed;

    void drive() {
        System.out.println("Driving...");
    }
}

3. Functional Programming
Functional programming focuses on writing code using functions without modifying
variables or changing state.

Influence on Language Design:


Languages like Haskell, Lisp, and Scala were designed based on this methodology.

Example (In Haskell):


square x = x * x

Language Categories
1. Imperative Language:
o Focuses on giving the computer step-by-step instructions to perform tasks.
o A procedural language follows a sequence of statements or commands in
order to achieve a desired output.
o Each series of steps is called a procedure, and a program written in one of
these languages will have one or more procedures within it.
o Example: C, Python
o How it works: You tell the program what to do in a sequence (e.g., loops,
conditionals).
2. Logical Language:
o Based on logic and rules, where the program is asked to solve problems using
facts and relationships.
o A logic programming language expresses a series of facts and rules to instruct
the computer on how to make decisions.
o Example: Prolog, Datalog, ASP
o How it works: You define facts and rules, and the computer figures out how
to solve queries.
3. Functional Language:
o Focuses on using functions that take inputs and produce outputs, without
changing data and state.
o Example: Haskell, Lisp, Scala
o Recursion is used instead of loops.
o How it works: The program is written using pure functions that do not modify
state.
o Example (in Haskell):
factorial 0 = 1
factorial n = n * factorial (n - 1)
4. Object-Oriented Language:
o Organizes the program into objects that encapsulate both data and
methods.
o Promotes reuse (through inheritance) and modularity.
o Example: Java, C++
o How it works: Programs are built around objects (like cars, books) that
interact with each other.
5. Scripting Language:
o Scripting languages are used for automating tasks or gluing together system
components.
o Lightweight and easy to write.
o Typically interpreted rather than compiled.
o Frequently used for quick development and automation.
o Examples: Python, JavaScript, Bash, PHP

Language Design Trade-offs


1. Reliability vs. Cost of Execution:
o Reliability focuses on ensuring the program runs without errors or crashes,
often using extra checks or safeguards.
o Cost of Execution refers to how fast the program runs and how much
memory it uses. Adding reliability features (like error checking) can slow
down execution.
o Example: Java (reliable but might be slower) vs. C (faster but prone to errors).
2. Readability vs. Writability:
o Readability means that the code is easy for humans to understand, which
helps in maintaining and debugging.
o Writability focuses on how quickly and easily you can write code, often at the
cost of making it harder to understand later.
o Example: Python (high readability, but sometimes more verbose) vs. Perl
(quick to write, but hard to read).
3. Writability vs. Reliability:
o Writability means writing code quickly and easily, but it might sacrifice
reliability (e.g., more bugs or mistakes).
o Reliability ensures that the code runs smoothly and safely, but it may require
extra effort to write correctly and might slow development.
o Example: JavaScript (easy to write but error-prone) vs. Rust (safe and reliable,
but more difficult to write).
Summary
These trade-offs involve balancing the needs for reliability, readability, and writability, with
the cost of execution—choosing which aspects are most important based on the goals of
the programming language.

Implementation Methods
1. Compilation Method:
o In this method, the entire program is translated into machine code (binary
code) all at once by a compiler before it is executed.
o The compiled program can run directly on the computer without needing the
source code anymore.
Lexical Analysis: The lexical analyzer gathers characters into lexical units (e.g., identifiers,
keywords, operators, punctuation). Comments are ignored since they are irrelevant to the
compilation process.
Syntax Analysis: Constructs parse trees, representing the syntactic structure of the program.
Semantic Analysis: Checks for complex errors (e.g., type mismatches) that cannot be caught
during syntax analysis.
Intermediate Code Generation: Produces an intermediate representation of the program,
often easier to optimize than machine code.
Optimization: Improves the program's performance by making it faster or smaller.
Code Generation: Translates optimized intermediate code into machine language.
Symbol Table: A database used throughout the compilation process to store the types and
attributes of user-defined names.
Example: C, C++
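The first phase above, lexical analysis, can be sketched in a few lines of Python. The token names and patterns are illustrative, not taken from any real compiler:

```python
import re

# A minimal lexical analyzer: groups characters into lexical units
# and tags each with a token category. Whitespace is skipped, just
# as comments are ignored during compilation.
TOKEN_SPEC = [
    ("NUMBER",     r"\d+"),
    ("IDENTIFIER", r"[A-Za-z_]\w*"),
    ("OPERATOR",   r"[+\-*/=]"),
    ("SKIP",       r"\s+"),
]
PATTERN = re.compile("|".join(f"(?P<{n}>{p})" for n, p in TOKEN_SPEC))

def tokenize(code):
    tokens = []
    for m in PATTERN.finditer(code):
        if m.lastgroup != "SKIP":
            tokens.append((m.lastgroup, m.group()))
    return tokens

print(tokenize("x = 10 + y"))
# -> [('IDENTIFIER', 'x'), ('OPERATOR', '='), ('NUMBER', '10'),
#     ('OPERATOR', '+'), ('IDENTIFIER', 'y')]
```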

2. Pure Interpretation:
o In this method, the source code is executed line-by-line by an interpreter,
without first converting it to machine code.
o The program is not compiled; instead, the interpreter directly executes the
instructions while the program is running.

Step 1: The source code (written by the programmer) is written in a high-level
language (e.g., Python, JavaScript).
Step 2: The interpreter reads the source code line-by-line.
Step 3: For each line, the interpreter translates it into machine code and then
executes it immediately.
Step 4: This process happens continuously as the program runs, without the
need for an executable file.
Step 5: If there are errors, the interpreter will stop and report them
immediately, allowing for quick fixes during execution.
Example: Python, JavaScript
3. Hybrid Implementation Method:
o The program is first compiled to an intermediate code, then interpreted at
runtime.
o Step 1: The source code (written by the programmer) is written in a high-level
language (e.g., Java).
o Step 2: The compiler translates the source code into an intermediate code
(often called bytecode).
o Step 3: The intermediate code is not fully machine-specific and cannot be
directly executed. Instead, it is sent to an interpreter or a virtual machine
(e.g., Java Virtual Machine (JVM)).
o Step 4: The interpreter or virtual machine reads and executes the bytecode,
converting it into machine code at runtime.
o Step 5: The program can run on any system that has the corresponding
interpreter or virtual machine, making it portable.
Example: Java, C#
Summary
• Compilation converts the entire program into machine code before execution.
• Interpretation runs the program line-by-line during execution.
• Hybrid combines both methods for a balance of speed and flexibility.
Programming Environments Syntax and Semantics
Programming Environments: A programming environment is a collection of tools that help
developers create software. These tools typically include:
• File System: To organize and manage files.
• Text Editor: To write and edit code.
• Compiler: To translate code into machine language.
• Linker: To combine different parts of a program into a single executable.
Different environments offer additional tools and features to simplify the software
development process. Examples of Programming Environments:
1. UNIX
2. Borland JBuilder
3. Microsoft Visual Studio .NET
4. NetBeans
Introduction to Syntax and Semantics
In programming languages, syntax and semantics are two essential concepts that define
how a program is written and how it behaves.
• Syntax refers to the structure or rules that govern how a program's code should be
written. It defines how symbols, keywords, and expressions should be arranged to
form valid statements.
o Example (Java): while (boolean_expr) statement
• Semantics refers to the meaning of the program. While syntax focuses on the form,
semantics defines what the program does when it is executed.
o Example (Java): The while statement executes the statement as long as the
condition (boolean_expr) is true.
o If the condition is false, the program skips the while loop and continues with
the next code.
The General Problem of Describing Syntax and Semantics
Describing a language requires not only an understanding of syntax, which focuses on the
structure of sentences, but also semantics, which deals with the meaning conveyed by these
structures. Syntax and semantics together provide a complete framework for understanding
and validating sentences in a language.
Describing Syntax
Syntax provides the rules and structure that determine whether a given sequence of
symbols (or string of characters) is valid in the language.
Key Concepts in Syntax
Sentence: A sentence is a sequence of characters constructed from a specific alphabet
according to the syntax rules.
Examples: In English: "The cat is sleeping."
In Java: int x = 10;
Language: A language is a set of all valid sentences that follow the defined syntax rules.
Examples: English is the set of all valid English sentences.
Java is the set of all syntactically correct Java statements.
Lexeme: A lexeme is the smallest syntactic unit in a language, often corresponding to words
or symbols.
Examples: In the expression x + 10, the lexemes are x, +, and 10.
In System.out.println, the lexemes are System, ., out, ., and println.
Token: A token is a category that groups lexemes based on their syntactic role.
Examples: x → Identifier token.
+ → Operator token.
10 → Literal token.

Languages Recognizers
A recognizer determines if a given input (sentence) belongs to the defined language. This
process involves checking if the sentence adheres to the language's syntax rules.
Example: In a compiler, the syntax analysis phase acts as a recognizer. It checks whether a
program's code (input strings) conforms to the syntax of the programming language.
How it Works:
• Input: A string of characters (e.g., a line of code).
• Process: The recognizer compares the input to predefined rules of the language.
• Output: Accept (valid sentence) or Reject (invalid sentence).
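The accept/reject behaviour can be sketched with a recognizer for a toy language, here assumed to be sentences of digits joined by "+" (e.g. "1+2+3"):

```python
import re

# A recognizer: accepts a sentence if it belongs to the toy language
# of one-or-more digit sequences joined by "+", rejects it otherwise.
def recognize(sentence):
    return bool(re.fullmatch(r"\d+(\+\d+)*", sentence))

print(recognize("1+2+3"))   # -> True  (accept: valid sentence)
print(recognize("1+"))      # -> False (reject: invalid sentence)
```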

Languages Generators
A generator is a tool or device that creates valid sentences belonging to the language. By
producing sentences, it implicitly defines the structure and rules of the language.
Purpose:
To define what constitutes a valid sentence in the language. By comparing a given sentence
to the generator's rules, one can check its correctness.

How it Works:
• A generator applies a set of grammar rules to produce sentences.
Example:
In English, the grammar rule "Subject + Verb + Object" can generate valid sentences like:
• "The cat eats fish."
• "The dog barks loudly."
Similarly, in programming, a generator could create valid Java statements like:
int x = 5;
System.out.println("Hello");
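The "Subject + Verb + Object" rule above can be turned into a small sentence generator; the word lists are assumed for illustration:

```python
from itertools import product

# A generator: applying the rule Subject + Verb + Object to small
# word sets produces every valid sentence of the toy language,
# implicitly defining what counts as well-formed.
subjects = ["The cat", "The dog"]
verbs    = ["eats", "chases"]
objects_ = ["fish", "the ball"]

sentences = [f"{s} {v} {o}." for s, v, o in product(subjects, verbs, objects_)]
print(len(sentences))    # -> 8
print(sentences[0])      # -> The cat eats fish.
```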

FORMAL METHODS OF DESCRIBING SYNTAX


1. Backus-Naur Form (BNF)
• Explanation: BNF is a notation used to describe the syntax of a programming
language. It was developed by John Backus in 1959 to describe the syntax of ALGOL
58. It uses rules called production rules to define how symbols (such as keywords or
operators) can be combined to form valid expressions. Each rule consists of a left-
hand side (non-terminal) and a right-hand side (a combination of terminals and non-
terminals).
• Example: Here's a simple example for arithmetic expressions:
<expression> ::= <number> | <expression> "+" <expression>
<number> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9"

2. Context-Free Grammar (CFG)


• Explanation: A Context-Free Grammar (CFG) is a type of grammar used to describe
languages where each production rule's left-hand side consists of a single non-
terminal symbol. CFGs are powerful because they can describe many programming
languages and constructs, like arithmetic expressions or control structures.
• Example: In CFG, a rule might look like:
<expression> ::= <term> "+" <term> | <term>
<term> ::= "a" | "b" | "c"
This says an expression can be a term or two terms joined by a plus sign.

3. Extended Backus-Naur Form (EBNF)


• Explanation: EBNF is an extension of BNF that simplifies the notation by adding
special symbols like [] for optional parts, {} for repetition, and () for grouping. EBNF
makes it easier to express complex syntax.
• Example: The same rule for expressions in EBNF would look like:
expression ::= term { "+" term }
term ::= "a" | "b" | "c"
Here, { "+" term } means zero or more occurrences of the + symbol followed by a term.
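The EBNF repetition maps directly onto a loop in a hand-written parser. This is a sketch of a recursive-descent recognizer for the two rules above:

```python
# A recognizer that follows the EBNF rule
#   expression ::= term { "+" term }
#   term       ::= "a" | "b" | "c"
# directly: the { } repetition becomes a while loop.
def is_expression(s):
    pos = 0

    def term():
        nonlocal pos
        if pos < len(s) and s[pos] in "abc":
            pos += 1
            return True
        return False

    if not term():
        return False
    while pos < len(s) and s[pos] == "+":
        pos += 1              # consume "+"
        if not term():
            return False
    return pos == len(s)      # all input must be consumed

print(is_expression("a+b+c"))   # -> True
print(is_expression("a+"))      # -> False (dangling "+")
```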

4. Grammars and Recognizers


• Explanation: A grammar is a set of rules that defines the structure of valid sentences
in a language. A recognizer is a machine (like a parser) that takes an input and checks
whether it fits the grammar (i.e., whether it's a valid sentence in the language).
• Example: Given a grammar for arithmetic expressions, a recognizer might check if the
string a + b is valid. It would match the structure of the grammar and confirm it's
valid, while a + might be rejected due to incomplete syntax.

5. Describing Lists in Syntax


• Explanation: Lists in programming languages (such as arrays or sequences) can be
described in syntax using recursion or repetition. Typically, a list is either empty or
contains an element followed by another list.
• Example:
<list> ::= <element> | <element> "," <list>
<element> ::= "a" | "b" | "c"
This means a list can be a single element or an element followed by a comma and another
list.

6. Parse Tree
• Explanation: A parse tree is a tree-like diagram that shows how a string of symbols is
derived from a grammar. It shows the structure of the expression and how each part
of the grammar applies to the input.
• Example: For the expression a + b, derived from a rule such as <expr> ::= <expr> "+" <expr>, the parse tree looks like this:

         <expr>
        /   |   \
   <expr>   +   <expr>
      |            |
      a            b

7. Ambiguity in Grammars
• Explanation: A grammar is said to be ambiguous if a single string can be parsed in
more than one way (i.e., it has multiple valid parse trees). Ambiguity can lead to
confusion or errors in interpretation, especially in programming languages.
• Example: The arithmetic expression a + b + c can be parsed in two ways:
(a + b) + c or a + (b + c)
Both interpretations are valid according to the grammar, making the grammar ambiguous.

8. Unambiguous Expression Grammar


An unambiguous expression grammar ensures that every valid string has one unique parse
tree, avoiding multiple interpretations.
Example:
Unambiguous Grammar:
<expr> → <expr> - <term> | <term>
<term> → <term> / const | const
Now, in a - b / c, division (/) will always be done first, making the grammar unambiguous.
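An evaluator that mirrors this grammar makes the precedence visible. The sketch below assumes const is a single digit, and simplifies the recursive-descent structure into splits at each grammar level:

```python
# Mirrors the unambiguous grammar
#   <expr> -> <expr> - <term> | <term>
#   <term> -> <term> / const  | const
# Because <term> sits below <expr>, "/" binds tighter than "-", and
# the left recursion makes both operators left-associative.
def evaluate(s):
    # split on "-" at the <expr> level (left-associative subtraction)
    terms = s.split("-")

    def term(t):
        # split on "/" at the <term> level (left-associative division)
        consts = [int(c) for c in t.split("/")]
        result = consts[0]
        for c in consts[1:]:
            result /= c
        return result

    result = term(terms[0])
    for t in terms[1:]:
        result -= term(t)
    return result

print(evaluate("8-6/2"))   # -> 5.0: division is applied first
```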
Attribute Grammars
An attribute grammar is a formal way of defining the syntax and semantics of a
programming language or formal language. It extends context-free grammar (CFG) by
associating attributes with the non-terminal symbols and defining rules for calculating their
values. Attributes provide additional information about the syntax and can be used for
semantic analysis, type checking, or other purposes.
Key Concepts
1.Attributes: Attributes are like labels that give more information about the symbols (like
variables or expressions) in the grammar.
There are two main types:
▪ Synthesized Attributes: These are values calculated from the children and passed up to
the parent in the tree.
▪ Inherited Attributes: These are values passed down from the parent to the children in the
tree.

2.Rules: Each grammar rule also includes semantic rules that show how to calculate the
attributes.
For example, you can specify that two variables in an expression must have the same type.
Example – Simple Grammar
Let’s use this simple language:
<assign> → <var> = <expr>
<expr> → <var> + <var> | <var>
<var> → A | B | C
Here:
• <assign> represents an assignment, like A = B + C.
• <expr> represents an expression, like B + C.
• <var> represents a variable, like A, B, or C.
Adding Attributes
We want to add type information to these grammar symbols, to know whether variables
and expressions are compatible with each other in the program. For example, we want to
check if we’re trying to add two numbers or assign a number to a variable.
• actual_type: This tells us the actual type of a variable or expression (e.g., int or
float).
• expected_type: This tells us the expected type of an expression (e.g., when doing an
assignment).

Example of Attribute Grammar


For the grammar:
<assign> → <var> = <expr>
<expr> → <var> + <var> | <var>
<var> → A | B | C
Here’s how we could add attributes:
For <assign> (like A = B + C): The expected_type of <expr> is the type of the variable we’re
assigning to (A in this case).
The actual_type of <var> (the variable on the left side) must match the actual_type of
<expr> (the right side).
For <expr> (like B + C): The actual_type of <expr> is the same as the actual_type of the first
<var> (i.e., B).
The actual_type of B must match the actual_type of C because we’re adding them (e.g.,
both should be int).
For <var> (like A, B, C): Each variable has an actual_type that’s looked up from some variable
table (e.g., A might be int, B might be float).

Example Calculation for A = B + C


1. Look up the types of B and C: (Let’s say B and C are both int).
2. Check if their types match: (They do because both are int).
3. Set the type of B + C: The result of B + C is also int because both operands are int.
For the assignment A = B + C: Ensure the type of A matches the type of B + C (both are int),
so the assignment is valid.
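The type-checking steps above can be sketched as code. The variable table and type names are assumed for illustration only:

```python
# A sketch of the attribute computation for A = B + C.
var_table = {"A": "int", "B": "int", "C": "int"}

def check_assign(target, left, right):
    # synthesized attribute: the actual_type of <expr> comes from its
    # first operand, and both operands must agree
    if var_table[left] != var_table[right]:
        return "type error in expression"
    expr_actual_type = var_table[left]
    # inherited attribute: the expected_type of <expr> is the type of
    # the variable being assigned to
    expected_type = var_table[target]
    if expr_actual_type != expected_type:
        return "type error in assignment"
    return "valid"

print(check_assign("A", "B", "C"))   # -> valid
```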
Describing the Meanings of Programs
Describing the meanings of programs is all about understanding what a program does or
what its behaviour is when it runs. This is often referred to as semantics in programming
language theory. While syntax refers to how a program is written (the structure of the code),
semantics focuses on the meaning of that code.
In simple terms, program semantics explains what each part of a program does and how the
different parts work together to produce the desired outcome.
Two Main Types of Semantics:
1. Static Semantics (or Syntactic Semantics):
o Definition: This deals with the meaning of programs based on their structure
without actually executing them. It often includes rules about valid variable
types, variable scope, and other language constraints.
o Example: In a programming language, a rule might say that you can only add
numbers together, not a number and a string. This is part of the static
semantics.
2. Dynamic Semantics:
o Definition: This is about the meaning of a program when it runs. It describes
how the program's state changes as it executes, such as how variables' values
are modified or how functions produce results.
o Example: When you execute a program that adds two numbers, dynamic
semantics describes how the values are fetched from memory, added
together, and stored back.
How We Describe the Meanings of Programs:
1. Operational Semantics:
o Explanation: This method describes how a program executes step-by-step. It
defines the behaviour of the program by explaining the transitions between
states during execution.
o Example: If you have the program x = 3 + 4;, operational semantics would
explain that the number 3 and 4 are added together, and the result (7) is
assigned to the variable x.
2. Denotational Semantics:
o Explanation: Denotational semantics gives meaning by mapping each part of
the program to a mathematical object. It doesn't describe how the program
runs but rather what it represents in a formal, mathematical sense.
o Example: The expression 3 + 4 could be mapped to the number 7 in
denotational semantics.
3. Axiomatic Semantics:
o Explanation: This method defines the meaning of a program in terms of logic
and mathematical proofs. It uses logical assertions to describe the program’s
behaviour, particularly useful in proving correctness.
o Example: In axiomatic semantics, you might prove that x = 3 + 4 will always
result in x being equal to 7 after the execution of the program.
Simple Example:
Consider a simple program:
x = 3
y = x + 4
• Syntax: The structure of the program is valid (i.e., the syntax is correct).
• Static Semantics: It is valid to assign integers to variables and perform addition, so
this part checks for type correctness.
• Dynamic Semantics: When the program runs, the value of x is assigned as 3. Then, x
+ 4 is evaluated to 7, and this value is assigned to y.
In summary, describing the meanings of programs helps us understand both the rules of
writing valid programs (static semantics) and how the program behaves when it runs
(dynamic semantics).
