Me8691 LN
Lecture Notes
The product cycle integrates processes, people, data and business systems, and provides product information for industries and their extended enterprise. The product cycle is the process of managing the entire lifecycle of a product from conception, through design and manufacture, to service and disposal of manufactured products. Product cycle methods help organizations cope with the rising complexity and engineering challenges of developing new products for globally competitive markets.
Product lifecycle management (PLM) can be regarded as one of the following four fundamentals of a manufacturing information technology structure.
(i) Customer Relationship Management (CRM)
(ii) Supply Chain Management (SCM)
(iii) Enterprise resource planning (ERP)
(iv) Product Planning and Development (PPD).
The core of PLM is the creation and management of all product information, together with the technology used to access this data and knowledge. PLM as a discipline emerged from tools such as CAD, CAM and PDM, but it can be viewed as the integration of these tools with processes, methods and people through all stages of a product’s life cycle. PLM is not just software technology; it is also a business approach.
1.2.1. Product Cycle Model
Several product cycle models are used in industry; one possible product cycle is given below (Fig.1.1.):
Step 1: Conceive
Imagine, Specify, Plan, Innovate
The first step is the definition of product requirements based on the company, the market and the customer. From these requirements, the product's technical specifications can be defined. In parallel, early concept design work is performed, defining the product with its main functional features. Various media are used for these processes, from paper and pencil to clay mock-ups to 3D Computer Aided Industrial Design.
Step 2: Design
Describe, Define, Develop, Test, Analyze and Validate
This is where the detailed design and development of the product begins, proceeding through prototype testing and pilot release to the final product. It can also involve redesign and ramp-up for improvement of existing products, as well as planned obsolescence. The main tool used for design and development is CAD, ranging from simple 2D drawing/drafting to 3D parametric feature-based solid and surface modeling.
This step covers many engineering disciplines, including electronic, electrical, mechanical and civil. Besides the actual creation of geometry, there is the analysis of components and assemblies. Optimization, validation and simulation activities are carried out using Computer Aided Engineering (CAE) software, which performs tasks such as Computational Fluid Dynamics (CFD), Finite Element Analysis (FEA) and Mechanical Event Simulation (MES). Computer Aided Quality (CAQ) is used for activities such as dimensional tolerance analysis. One more task carried out at this step is the sourcing of bought-out components with the aid of the procurement process.
Step 3: Realize
Manufacture, Make, Build, Procure, Produce, Sell and Deliver
Once the design of the components is complete, the method of manufacturing is finalized. This includes CAD operations such as the generation of CNC machining instructions for the product’s components, as well as for the tools used to manufacture those components, using integrated Computer Aided Manufacturing (CAM) software.
It also includes production planning tools for plant and factory layout and production simulation. Once components are manufactured, their geometric form and dimensions can be verified against the original design data using Computer Aided Inspection Equipment (CAIE). In parallel to the engineering tasks, sales and marketing work takes place. This may include transferring engineering data to a web-based sales configurator.
Step 4: Service
Use, Operate, Maintain, Support, Sustain, Phase-out, Retire, Recycle and Disposal
The final step of the lifecycle involves managing service information for repair and maintenance, as well as recycling and waste management information. This involves tools such as Maintenance, Repair and Operations (MRO) management software.
The gathered information should be relevant and should include existing solutions. Reverse engineering can be a successful technique when similar solutions are already available in the market. Additional sources of information include trade journals, government documents, local libraries, vendor catalogs and professional organizations.
2. Feasibility assessment
The feasibility study is an analysis and assessment of the potential of a proposed design, based on detailed investigation and research to support the decision-making process. The feasibility assessment helps focus the scope of the project and identify the best scenario. The purpose of a feasibility assessment is to verify whether the project can proceed into the design phase.
3. Conceptualization
A Concept Study is the stage of project planning that includes developing ideas and taking into account all aspects of executing those ideas. This stage of a project is carried out to assess risks, reduce the likelihood of error, and evaluate the potential success of the planned project.
5. Preliminary design
The preliminary design fills the gap between the design concept and the detailed design phase.
During this task, the system configuration is defined, and schematics, diagrams and layouts of the project provide an early project configuration. In detailed design and optimization, the parameters of the part being produced will change, but the preliminary design focuses on creating the general framework on which the project is built.
6. Detailed design
The phase following preliminary design is Detailed Design, which may also include procurement. This phase builds on the already developed preliminary design, aiming to develop each aspect of the project fully through drawings, modeling and specifications.
Advances in CAD programs have made the detailed design phase more efficient. A CAD program can support optimization, for example reducing a part's volume without compromising its quality. It can also calculate displacements and stresses using the finite element method (FEM) to find stresses throughout the part. It is the designer's responsibility to determine whether these stresses and displacements are acceptable, so that the part is safe.
Sequential Engineering | Concurrent Engineering
Sequential engineering is the term used to describe a linear method of production: the various steps are done one after another, with all attention and resources focused on a single task. | In concurrent engineering, various tasks are handled at the same time, not necessarily in the standard order. This means that information found out later in the process can be fed back into earlier stages, improving them and also saving time.
Sequential engineering is a system in which a single group within an organization works step by step to create new products and services. | Concurrent engineering is a method in which several groups within an organization work simultaneously to create new products and services.
Both process and product design run in series and take place at different times. | Both product and process design run in parallel and take place at the same time.
Process and product are not coordinated. | Process and product are coordinated.
Decision making is done by only a group of experts. | Decision making involves full team involvement.
Geometric Modeling
2.1. Introduction
Geometric modeling is a part of computational geometry and applied mathematics that studies
algorithms and techniques for the mathematical description of shapes.
The shapes defined in geometric modeling are generally 2D or 3D, although many of its principles and tools can be applied to sets of any finite dimension. Geometric modeling is performed with computer-based applications. 2D models are important in computer technical drawing and typography. 3D models are fundamental to CAD and CAM and are extensively used in many applied technical fields such as civil engineering, mechanical engineering and medical image processing.
Geometric models are commonly distinguished from procedural and object-oriented models, which define a shape implicitly by an opaque algorithm that generates its appearance. They are also contrasted with volumetric models and digital images, which represent a shape as a subset of a regular partition of space, and with fractal models, which give an infinitely recursive definition of a shape. However, these distinctions are often blurred: for instance, an image can be interpreted as a collection of colored squares, and geometric shapes such as circles are defined by implicit mathematical equations. Also, a fractal model yields a parametric model when its recursive definition is truncated to a finite depth.
A curve is an entity related to a line but not required to be straight. A curve is a topological space that is locally homeomorphic to a line; this means that a curve is a set of points that, near each of its points, looks like a line, up to a deformation.
A conic section is a curve created as the intersection of a cone with a plane. In analytic geometry, a conic may be defined as a plane algebraic curve of degree two, or as a quadric of dimension two.
Several other geometric definitions are possible. One of the most practical, in that it involves only the plane, is that a non-circular conic consists of those points whose distances to a certain point, called the ‘focus’, and a certain line, called the ‘directrix’, are in a fixed ratio, called the ‘eccentricity’.
2.2.1. Conic Section
Conventionally, the three kinds of conic section are the hyperbola, the ellipse and the parabola. The circle is a special case of the ellipse, and is of sufficient interest in its own right that it is sometimes called the fourth kind of conic section. The type of a conic corresponds to its ‘eccentricity’: those with eccentricity less than one are ellipses, those with eccentricity equal to one are parabolas, and those with eccentricity greater than one are hyperbolas. In the focus-directrix definition of a conic, the circle is a limiting case with eccentricity zero. In modern geometry certain degenerate cases, such as the union of two lines, are included as conics as well.
The three kinds of conic sections are the ellipse, the parabola and the hyperbola; the circle can be regarded as a fourth kind, as a special case of the ellipse. The circle and the ellipse arise when the intersection of the plane and the cone is a closed curve. The circle is obtained when the cutting plane is perpendicular to the axis of the cone. If the cutting plane is parallel to exactly one generating line of the cone, then the conic is unbounded and is called a parabola. In the remaining case, the figure is a hyperbola.
Different parameters are associated with a conic section, as shown in Table 2.1. For the ellipse, the table gives the case ‘a’ > ‘b’, for which the major axis is horizontal; for the other case, interchange the symbols ‘a’ and ‘b’. For the hyperbola the east-west opening case is given. In all cases, ‘a’ and ‘b’ are positive.
Table 2.1. Conic Sections
The non-circular conic sections are exactly those curves that, for a point ‘F’, a line ‘L’ not containing ‘F’, and a non-negative number ‘e’, are the locus of points whose distance to ‘F’ equals ‘e’ times their distance to ‘L’. ‘F’ is called the focus, ‘L’ the directrix, and ‘e’ the eccentricity.
i. Linear eccentricity (c) is the distance between the center and the focus.
ii. Latus rectum (2l) is the chord parallel to the directrix and passing through the focus.
iii. Focal parameter (p) is the distance from the focus to the corresponding directrix.
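The focus-directrix definition above can be checked numerically. The following is an illustrative Python sketch (not part of the original notes): it computes the ratio of a point's distance to a focus over its distance to a vertical directrix, which for points on a conic equals the eccentricity e. The parabola y² = 4x, with focus (1, 0) and directrix x = −1, is used as the test case.

```python
import math

def eccentricity_ratio(point, focus, directrix_x):
    """Ratio of distance-to-focus over distance-to-directrix.

    The directrix is assumed to be the vertical line x = directrix_x.
    For points on a conic this ratio equals the eccentricity e:
    e < 1 -> ellipse, e == 1 -> parabola, e > 1 -> hyperbola.
    """
    px, py = point
    fx, fy = focus
    dist_focus = math.hypot(px - fx, py - fy)
    dist_directrix = abs(px - directrix_x)
    return dist_focus / dist_directrix

# The parabola y^2 = 4x has focus (1, 0) and directrix x = -1;
# every point on it should give a ratio of exactly 1 (a parabola).
for y in (0.5, 1.0, 2.0):
    x = y * y / 4.0
    print(round(eccentricity_ratio((x, y), (1.0, 0.0), -1.0), 6))  # 1.0
```

The same function classifies an arbitrary point relative to a given focus and directrix, which mirrors the classification by eccentricity described above.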
A Hermite curve is a spline where every piece is a third-degree polynomial defined in Hermite form: that is, by its values and first derivatives at the end points of the corresponding domain interval. Cubic Hermite splines are commonly used for interpolation of numeric values defined at discrete values x1, x2, x3, ..., xn, to achieve a smooth continuous function. The data should include the desired function value and derivative at each xk. The Hermite formula is applied to each interval (xk, xk+1) individually. The resulting spline is continuous and has a continuous first derivative.
Cubic polynomial splines are used extensively in computer geometric modeling to obtain curves that pass through defined points of the plane or of 3D space. In these applications, each coordinate is separately interpolated by a cubic spline function of a shared parameter ‘t’.
Cubic splines can be extended to functions of several parameters in various ways. Bicubic splines are frequently used to interpolate data on a regular rectangular grid, such as pixel values in a digital image. Bicubic surface patches, defined by bicubic splines, are an essential tool in computer graphics. Hermite curves are simple to calculate and also powerful; they are used to interpolate smoothly between key points.
Figure 2.3 shows the functions of Hermite Curve of the 4 functions (from left to right: h1, h2, h3, h4).
A closer look at functions ‘h1’ and ‘h2’ shows that ‘h1’ starts at one and goes gradually to zero, while ‘h2’ starts at zero and goes gradually to one.
Now, multiply the start point by function ‘h1’ and the end point by function ‘h2’. Let s vary from zero to one to interpolate between the start and end points of the Hermite curve. Functions ‘h3’ and ‘h4’ are applied to the tangents in the same way; they ensure that the Hermite curve bends in the desired direction at the start and end points.
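The blending described above can be written out directly. Below is a minimal Python sketch (an illustration, not taken from the notes) of the four cubic Hermite basis functions h1–h4 and of one-coordinate interpolation using them; the function names are chosen here for clarity.

```python
def hermite_basis(s):
    """The four cubic Hermite basis functions at parameter s in [0, 1].

    h1 blends the start point, h2 the end point,
    h3 the start tangent, h4 the end tangent.
    """
    h1 = 2*s**3 - 3*s**2 + 1      # 1 at s=0, 0 at s=1
    h2 = -2*s**3 + 3*s**2         # 0 at s=0, 1 at s=1
    h3 = s**3 - 2*s**2 + s        # start-tangent weight
    h4 = s**3 - s**2              # end-tangent weight
    return h1, h2, h3, h4

def hermite_point(p0, p1, t0, t1, s):
    """Interpolate one coordinate between p0 and p1 with tangents t0, t1."""
    h1, h2, h3, h4 = hermite_basis(s)
    return h1*p0 + h2*p1 + h3*t0 + h4*t1

# At s=0 the curve is exactly at p0; at s=1 it is exactly at p1.
print(hermite_point(0.0, 10.0, 1.0, 1.0, 0.0))  # 0.0
print(hermite_point(0.0, 10.0, 1.0, 1.0, 1.0))  # 10.0
```

For a 3D curve the same interpolation is applied independently to the x, y and z coordinates, exactly as the notes describe for coordinate-wise spline interpolation.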
Cubic Bezier curves and quadratic Bezier curves are very common. Higher-degree Bezier curves are computationally expensive to evaluate. When more complex shapes are required, low-order Bezier curves are patched together to produce a composite Bezier curve. A composite Bezier curve is usually referred to as a ‘path’ in vector graphics standards and programs. To guarantee smoothness, the control point at which two curves meet must lie on the line between the two control points on either side of it.
A common adaptive method is recursive subdivision, in which a curve's control points are checked to see whether the curve approximates a line segment to within a small tolerance. If not, the curve is subdivided parametrically into two segments, 0 ≤ t ≤ 0.5 and 0.5 ≤ t ≤ 1, and the same procedure is applied recursively to each half. There are also forward differencing techniques, but greater care must be taken to analyze error propagation.
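The recursive subdivision just described rests on two pieces: splitting a Bezier curve at t = 0.5 (the de Casteljau construction) and a flatness test. The following Python sketch is an illustration of both, under the assumption of 2D points; the helper names are chosen here, not taken from the notes.

```python
def split_bezier(points, t=0.5):
    """Split a Bezier curve (any degree) at parameter t using de Casteljau.

    Returns the control points of the two halves; each half is itself
    a Bezier curve of the same degree.
    """
    left, right = [points[0]], [points[-1]]
    while len(points) > 1:
        points = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                  for p, q in zip(points, points[1:])]
        left.append(points[0])
        right.append(points[-1])
    return left, right[::-1]

def is_flat(points, tol=1e-3):
    """Crude flatness test: every control point lies within tol of the
    chord from the first to the last control point."""
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = (dx*dx + dy*dy) ** 0.5 or 1.0
    return all(abs((px - x0) * dy - (py - y0) * dx) / length <= tol
               for px, py in points)

quad = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
left, right = split_bezier(quad)
print(left[-1])  # (1.0, 1.0): the curve point at t = 0.5
```

A renderer would call `split_bezier` recursively until `is_flat` succeeds, then draw each piece as a line segment, which is exactly the adaptive scheme described above.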
Analytic methods, in which a Bezier curve is intersected with each scan line, involve finding the roots of cubic polynomials and dealing with multiple roots, so they are not often used in practice. A Bezier curve is defined by a set of control points P0 through Pn, where ‘n’ is the order of the curve. The first and last control points are always the end points of the curve, but the intermediate control points generally do not lie on the curve.
As shown in figure 2.5, given points P0 and P1, a linear Bezier curve is simply a straight line between those two points. The Bezier curve is represented by
B(t) = (1 − t) P0 + t P1, 0 ≤ t ≤ 1.
As shown in figure 2.6, a quadratic Bezier curve is the path traced by the function B(t), given points P0, P1 and P2:
B(t) = (1 − t)^2 P0 + 2(1 − t) t P1 + t^2 P2, 0 ≤ t ≤ 1.
This can be interpreted as the linear interpolation of corresponding points on the linear Bezier curves from P0 to P1 and from P1 to P2 respectively. Rearranging the preceding equation gives:
B(t) = (1 − t) [(1 − t) P0 + t P1] + t [(1 − t) P1 + t P2].
The derivative of the Bezier curve with respect to the parameter ‘t’ is
B′(t) = 2(1 − t)(P1 − P0) + 2t(P2 − P1),
from which it can be concluded that the tangents to the curve at P0 and P2 intersect at P1. As ‘t’ increases from zero to one, the curve departs from P0 in the direction of P1, then bends to arrive at P2 from the direction of P1.
The second derivative of the Bezier curve with respect to ‘t’ is
B″(t) = 2(P2 − 2P1 + P0).
A quadratic Bezier curve is a parabolic segment. Since a parabola is a conic section, some sources refer to quadratic Beziers as ‘conic arcs’.
As shown in figure 2.7, four control points P0, P1, P2 and P3 in the plane or in higher-dimensional space define a cubic Bezier curve. The curve starts at P0 going toward P1 and arrives at P3 coming from the direction of P2. Usually, it will not pass through control points P1 and P2; these points are only there to provide directional information. The distance between P0 and P1 determines ‘how fast’ and ‘how far’ the curve moves towards P1 before turning towards P2.
Writing B_{Pi,Pj,Pk}(t) for the quadratic Bezier curve defined by points Pi, Pj and Pk, the cubic Bezier curve can be described as a linear blend of two quadratic Bezier curves:
B(t) = (1 − t) B_{P0,P1,P2}(t) + t B_{P1,P2,P3}(t), 0 ≤ t ≤ 1.
For some choices of P1 and P2 the Bezier curve may intersect itself.
Any series of four distinct points can be converted to a cubic Bezier curve that goes through all four points in order. Given the start and end points of a cubic Bezier curve, together with the points along the curve corresponding to t = 1/3 and t = 2/3, the control points of the original Bezier curve can be recovered.
The first derivative of the cubic Bezier curve with respect to t is
B′(t) = 3(1 − t)^2 (P1 − P0) + 6(1 − t)t (P2 − P1) + 3t^2 (P3 − P2).
The second derivative of the Bezier curve with respect to t is
B″(t) = 6(1 − t)(P2 − 2P1 + P0) + 6t(P3 − 2P2 + P1).
The Bezier curve starts at P0 and ends at Pn; this is known as ‘endpoint interpolation’ property.
The Bezier curve is a straight line when all the control points of a curve are collinear.
The beginning of the Bezier curve is tangent to the first portion of the Bezier polygon.
A Bezier curve can be divided at any point into two sub curves, each of which is also a Bezier
curve.
Some curves that seem simple, such as the circle, cannot be described exactly by a Bezier curve; however, a four-piece cubic Bezier curve can approximate a circle with a maximum radial error of less than one part in a thousand (Fig.2.8).
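That "one part in a thousand" claim can be checked numerically. The sketch below (an illustration under the common choice of tangent length k = 4/3·(√2 − 1) for a quarter circle; not from the notes) evaluates one cubic Bezier quadrant of a unit circle and measures how far it strays from radius 1.

```python
def cubic_point(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t (Bernstein form)."""
    u = 1.0 - t
    return tuple(u**3*a + 3*u**2*t*b + 3*u*t**2*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

# One quadrant of a unit circle as a cubic Bezier, using the standard
# tangent length k = 4/3 * (sqrt(2) - 1) ~= 0.5523.
k = 4.0 / 3.0 * (2 ** 0.5 - 1)
p0, p1, p2, p3 = (1.0, 0.0), (1.0, k), (k, 1.0), (0.0, 1.0)

# Sample the curve and measure the worst deviation from radius 1.
max_err = max(abs((x*x + y*y) ** 0.5 - 1.0)
              for x, y in (cubic_point(p0, p1, p2, p3, i / 200.0)
                           for i in range(201)))
print(max_err < 1e-3)  # True: under one part in a thousand
```

Sampling shows a maximum radial error of roughly 3 × 10⁻⁴, consistent with the figure quoted above.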
Every quadratic Bezier curve can be converted to a cubic Bezier curve, and more generally, every degree-‘n’ Bezier curve is also a degree-‘m’ curve for any m > n (degree elevation).
Bezier curves have the variation diminishing property: a Bezier curve does not ‘ripple’ more than the polygon of its control points, and may actually ‘ripple’ less.
A Bezier curve is symmetric with respect to t and (1 − t). This means that the sequence of control points defining the curve can be reversed without changing the curve's shape.
The shape of a Bezier curve can be edited either by modifying one or more vertices of its polygon, or by keeping the polygon fixed and specifying multiple coincident points at a vertex (Fig.2.9).
Figure 2.10 shows that the function for a linear Bezier curve can be thought of as describing how far B(t) is from P0 to P1 with respect to ‘t’. When t equals 0.25, B(t) is one quarter of the way from point P0 to P1. As ‘t’ varies from 0 to 1, B(t) traces a straight line from P0 to P1.
As shown in figure 2.11, a quadratic Bezier curve can be constructed using intermediate points Q0 and Q1 such that, as ‘t’ varies from 0 to 1:
Point Q0(t) moves from P0 to P1 and traces a linear Bezier curve.
Point Q1(t) moves from P1 to P2 and traces a linear Bezier curve.
Point B(t) is interpolated linearly between Q0(t) and Q1(t) and traces a quadratic Bezier curve.
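The Q0/Q1 construction above translates directly into code. The following Python sketch (illustrative; the names `lerp` and `quadratic_bezier` are chosen here) builds a quadratic Bezier point as a linear interpolation of two linear interpolations.

```python
def lerp(p, q, t):
    """Linear interpolation between points p and q (a linear Bezier)."""
    return tuple((1 - t) * a + t * b for a, b in zip(p, q))

def quadratic_bezier(p0, p1, p2, t):
    """Quadratic Bezier as a lerp of two lerps:
    Q0(t) moves from P0 to P1, Q1(t) moves from P1 to P2,
    and B(t) is interpolated linearly between Q0(t) and Q1(t)."""
    q0 = lerp(p0, p1, t)
    q1 = lerp(p1, p2, t)
    return lerp(q0, q1, t)

p0, p1, p2 = (0.0, 0.0), (1.0, 2.0), (2.0, 0.0)
print(quadratic_bezier(p0, p1, p2, 0.0))  # (0.0, 0.0) -- starts at P0
print(quadratic_bezier(p0, p1, p2, 0.5))  # (1.0, 1.0)
print(quadratic_bezier(p0, p1, p2, 1.0))  # (2.0, 0.0) -- ends at P2
```

Expanding the nested interpolations algebraically recovers the Bernstein form (1 − t)²P0 + 2(1 − t)tP1 + t²P2 given earlier.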
The rational Bezier curve adds adjustable weights (w) to provide closer approximations to arbitrary shapes. In a rational Bezier curve, the numerator is a weighted Bernstein-form Bezier curve and the denominator is a weighted sum of Bernstein polynomials. Rational Bezier curves can represent segments of conic sections exactly, including circular arcs (Fig.2.13).
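The exact-circular-arc claim can be demonstrated with a rational quadratic. The sketch below is an illustration (function name chosen here): with control points (1,0), (1,1), (0,1) and middle weight √2/2, every evaluated point lies exactly on the unit circle, which no polynomial Bezier can achieve.

```python
def rational_quadratic(p0, p1, p2, w, t):
    """Rational quadratic Bezier: Bernstein-weighted numerator divided
    by the weighted sum of Bernstein polynomials, weights w = (w0, w1, w2)."""
    u = 1.0 - t
    b = (u * u, 2 * u * t, t * t)          # Bernstein basis of degree 2
    denom = sum(wi * bi for wi, bi in zip(w, b))
    return tuple(sum(wi * bi * pi for wi, bi, pi in zip(w, b, axis)) / denom
                 for axis in zip(p0, p1, p2))

# A quarter of the unit circle: middle weight sqrt(2)/2 = cos(45 deg).
w = (1.0, 2 ** 0.5 / 2, 1.0)
for i in range(5):
    x, y = rational_quadratic((1, 0), (1, 1), (0, 1), w, i / 4.0)
    print(round(x * x + y * y, 9))  # 1.0 at every parameter value
```

Varying the middle weight away from √2/2 bends the same control polygon into an ellipse-like or hyperbola-like arc, which is how rational Beziers cover all the conic segments.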
UNIT III VISUAL REALISM
Visual Realism
3.1. Introduction
Visual realism is concerned with techniques for interpreting picture data fed into a computer and for creating pictures from complex multidimensional data sets. Visualization methods can be classified as:
Parallel projections
Perspective projection.
Hidden line removal
Hidden surface removal
Hidden solid removal
Shaded models
Hidden line and hidden surface removal methods eliminate the ambiguity of displays of 3D models and are accepted as the first step towards visual realism. Shaded images can only be created for surface and solid models. In the multi-step shading process, the first step removes hidden surfaces or solids, and the second step shades only the visible areas. Shaded images provide the highest level of visualization.
Hidden-removal processes demand large amounts of computing time as well as high-end hardware. The creation and maintenance of such models can become complex; hence, generating real-time images requires high-end computers with shading algorithms embedded in the hardware.
Such 3D parts are easily manufactured and frequently occur in CAD designs. In addition, the degrees of freedom are sufficient to represent the majority of models without imposing an overwhelming number of constraints. Also, almost all surface-surface intersections and shadow computations can be calculated analytically, which yields significant savings in the number of computations compared with numerical methods.
Face    Priority
ABCD    1
ADFG    1
DCEF    1
ABHG    2
EFGH    2
BCEH    2
ABCD, ADFG and DCEF are given the higher priority 1. Hence, all edges of these faces are visible, that is, AB, BC, CD, DA, AD, DF, FG, AG, DC, CE, EF and DF are visible.
ABHG, EFGH and BCEH are given the lower priority 2. Hence, the edges of these faces that do not also belong to a priority-1 face, namely BH, EH and GH, are invisible. These lines must be eliminated.
3.3. Hidden surface removal
Hidden surface removal is the procedure used to determine which surfaces are not visible from a certain viewpoint. A hidden surface removal algorithm is a solution to the visibility problem, which was one of the first major problems in the field of three-dimensional graphics. The process of hidden surface identification is called hiding, and such an algorithm is called a ‘hider’. Hidden surface identification is essential to render a 3D image properly, so that one cannot see through walls in virtual reality.
Hidden surface identification is a process by which surfaces that should not be visible to the user are prevented from being rendered. In spite of advances in hardware capability, there is still a need for sophisticated rendering algorithms. The responsibility of a rendering engine is to allow for larger world spaces, and as the world’s size approaches infinity, the rendering engine should not slow down but maintain constant speed.
There are many methods for hidden surface identification. They are essentially an exercise in sorting, and they mainly differ in the order in which the sort is performed and in how the problem is subdivided. Sorting large numbers of graphics primitives is usually done by divide and conquer.
In Z-buffering, the depth (Z value) of each candidate pixel is tested against the depth value already stored in the buffer. If the current pixel is behind the pixel in the Z-buffer, the pixel is discarded; otherwise it is shaded and its depth value replaces the one in the Z-buffer. Z-buffering handles dynamic scenes easily, and is currently implemented efficiently in graphics hardware.
Algorithm:
loop on y
    loop on x
        zbuf[x,y] = infinity;
loop on objects
{
    loop on y within y range of this object
    {
        loop on x within x range of this scan line of this object
        {
            compute z of this object at this pixel;
            if z(x,y) < zbuf[x,y]              /* depth test */
            {
                zbuf[x,y] = z(x,y);            /* update z-buffer */
                image[x,y] = shade(x,y);       /* update image (typically RGB) */
            }
        }
    }
}
Basic operations:
1. compute y range of an object
2. compute x range of a given scan line of an object
3. calculate intersection point of an object with the ray through pixel position (x,y).
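The pseudocode above can be made concrete. Below is a minimal runnable Python sketch of the Z-buffer idea (an illustration, not the notes' code): objects are simplified to constant-depth axis-aligned rectangles so that the depth test itself stays in focus.

```python
import math

def zbuffer_render(width, height, objects):
    """Minimal Z-buffer: objects are (depth, color, rect) triples, where
    rect = (x0, y0, x1, y1) is the pixel span the object covers at a
    constant depth. Each pixel keeps the color of the nearest object."""
    zbuf = [[math.inf] * width for _ in range(height)]
    image = [[None] * width for _ in range(height)]
    for depth, color, (x0, y0, x1, y1) in objects:
        for y in range(y0, y1):            # y range of this object
            for x in range(x0, x1):        # x range of this scan line
                if depth < zbuf[y][x]:     # test against stored depth
                    zbuf[y][x] = depth     # update z-buffer
                    image[y][x] = color    # update image
    return image

# A far red square partially covered by a near blue square.
img = zbuffer_render(4, 4, [
    (5.0, "red",  (0, 0, 4, 4)),
    (2.0, "blue", (1, 1, 3, 3)),
])
print(img[0][0], img[2][2])  # red blue
```

Note that the result is independent of the order in which objects are submitted, which is the key practical advantage of Z-buffering over depth-sorting methods.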
The ‘painter's algorithm’ refers to the technique employed by many painters of painting distant parts of a scene before parts which are nearer, thereby covering some areas of the distant parts. The painter's algorithm sorts all the polygons in a view by their depth and then paints them in this order, farthest to closest. It paints over the parts that are normally not visible, thus solving the visibility problem at the cost of having painted invisible areas of distant objects. The ordering used by the algorithm is called a ‘depth order’, and it does not have to respect the exact distances to the parts of the scene: the essential property of this ordering is, rather, that if one object obscures part of another, then the obscured object is painted before the object that obscures it. Thus, a valid ordering can be described as a topological ordering of a directed acyclic graph representing the occlusion relations between objects.
Algorithm:
sort objects by depth, splitting if necessary to handle intersections;
loop on objects (drawing from back to front)
{
loop on y within y range of this object
{
loop on x within x range of this scan line of this object
{
image[x,y] = shade(x,y);
}
}
}
Basic operations:
1. compute ‘y’ range of an object
2. compute ‘x’ range of a given scan line of an object
3. compute intersection point of a given object with the ray through pixel point (x,y).
4. evaluate the depths of two objects, and determine whether A is in front of B, B is in front of A, they don’t
overlap in xy, or they intersect
5. divide one object by another object
The advantage of the painter's algorithm is that its inner loops are quite simple; its limitation is the sorting operation.
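A minimal runnable sketch of the painter's algorithm follows (an illustration, not from the notes). Constant-depth rectangles are used so that a plain sort by depth is itself a valid depth order, with no cyclic overlaps to split.

```python
def painter_render(width, height, objects):
    """Painter's algorithm sketch: sort objects far-to-near and paint
    them in that order, so nearer objects overwrite farther ones.
    Objects are (depth, color, (x0, y0, x1, y1)) at constant depth,
    so sorting by depth gives a valid depth order (no cycles)."""
    image = [[None] * width for _ in range(height)]
    for depth, color, (x0, y0, x1, y1) in sorted(objects, reverse=True):
        for y in range(y0, y1):
            for x in range(x0, x1):
                image[y][x] = color        # paint over whatever is there
    return image

img = painter_render(4, 4, [
    (2.0, "blue", (1, 1, 3, 3)),   # near square
    (5.0, "red",  (0, 0, 4, 4)),   # far square -- painted first
])
print(img[0][0], img[2][2])  # red blue
```

Comparing this with the Z-buffer sketch makes the trade-off visible: the painter's algorithm needs no depth buffer, but every covered pixel of the far object is painted and then overwritten (overdraw), and the sort itself can fail when polygons overlap cyclically.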
The Warnock algorithm is a hidden surface algorithm developed by John Warnock that is classically used in the area of computer graphics. It solves the problem of rendering a complicated image by recursive subdivision of the view until regions are obtained that are trivial to compute. If the view is simple enough to compute efficiently, it is rendered; otherwise it is divided into smaller parts which are likewise tested for simplicity. This is a divide-and-conquer algorithm with a run time of O(np), where p is the number of pixels in the viewport and n is the number of polygons.
The inputs of the Warnock algorithm are a list of polygons and a viewport. In the base case, if the polygon list is simple enough, the polygons are rendered in the viewport. Otherwise, the viewport is divided into four equally sized quadrants and the algorithm is applied recursively to each quadrant, with the polygon list filtered so that it contains only the polygons visible in that quadrant.
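To make the subdivision idea concrete, here is a toy Python sketch of a Warnock-style recursion on a pixel grid (an illustration under strong simplifying assumptions: axis-aligned rectangles at constant depth, and "simple" meaning at most one overlapping polygon or a single pixel).

```python
def warnock(region, polygons, out):
    """Toy Warnock-style subdivision. region = (x0, y0, x1, y1);
    polygons are (depth, color, rect) with rect = (rx0, ry0, rx1, ry1).
    Simple regions are rendered; others are split into four quadrants,
    each with a filtered polygon list."""
    x0, y0, x1, y1 = region
    if x0 >= x1 or y0 >= y1:
        return
    overlapping = [p for p in polygons
                   if p[2][0] < x1 and x0 < p[2][2]
                   and p[2][1] < y1 and y0 < p[2][3]]
    if not overlapping:
        return                                   # empty region: nothing to draw
    if x1 - x0 == 1 and y1 - y0 == 1:
        out[y0][x0] = min(overlapping)[1]        # single pixel: nearest wins
        return
    if len(overlapping) == 1:
        d, color, (rx0, ry0, rx1, ry1) = overlapping[0]
        for y in range(max(y0, ry0), min(y1, ry1)):
            for x in range(max(x0, rx0), min(x1, rx1)):
                out[y][x] = color                # one polygon: fill its clip
        return
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2      # split into four quadrants
    for quad in ((x0, y0, mx, my), (mx, y0, x1, my),
                 (x0, my, mx, y1), (mx, my, x1, y1)):
        warnock(quad, overlapping, out)

out = [[None] * 4 for _ in range(4)]
warnock((0, 0, 4, 4), [(5.0, "red", (0, 0, 4, 4)),
                       (2.0, "blue", (1, 1, 3, 3))], out)
print(out[0][0], out[1][1])  # red blue
```

Filtering the polygon list before each recursive call is what keeps the work proportional to the polygons actually visible in each quadrant, matching the O(np) behaviour described above.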
Ray-Tracing algorithm
Optical ray tracing is a technique for creating visual images in three-dimensional graphics environments, with higher photorealism than either ray casting or scanline rendering techniques. It works by tracing a path from an imaginary eye through every pixel in a virtual screen, and computing the color of the object visible through it.
Scenes in ray tracing are described mathematically by a programmer. Scenes may also incorporate data from 3D models and images captured by digital photography.
In general, each ray must be tested for intersection with some subset of all the objects in the view. Once the nearest object has been identified, the algorithm estimates the incoming light at the point of intersection, examines the material properties of the object, and combines this information to compute the final color of the pixel. One of the major limitations of the algorithm is that reflective or translucent materials may require additional rays to be re-cast into the scene.
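The core geometric test in the loop above is ray-object intersection. The following Python sketch (illustrative; not the notes' code) intersects a ray with a sphere, the classic primitive, by solving a quadratic and returning the nearest hit in front of the eye.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest intersection parameter t of a ray with a sphere, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic
    in t; the smaller non-negative root is the visible hit point."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    cx, cy, cz = center
    lx, ly, lz = ox - cx, oy - cy, oz - cz
    a = dx*dx + dy*dy + dz*dz
    b = 2.0 * (lx*dx + ly*dy + lz*dz)
    c = lx*lx + ly*ly + lz*lz - radius*radius
    disc = b*b - 4.0*a*c
    if disc < 0.0:
        return None                      # ray misses the sphere
    sq = math.sqrt(disc)
    for t in ((-b - sq) / (2*a), (-b + sq) / (2*a)):
        if t >= 0.0:
            return t                     # nearest hit in front of the eye
    return None

# Eye at the origin looking down +z at a unit sphere centered at (0, 0, 5):
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
print(ray_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0))  # None
```

A full ray tracer runs this test per pixel against the scene's objects, keeps the smallest t, and then shades the hit point; reflections and refractions recurse with new rays spawned at that point.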
3.5. Shading
Shading means depicting depth perception in three-dimensional models by varying levels of darkness. In drawing, shading describes levels of darkness on paper by applying media more densely for darker regions and less densely for lighter regions.
There are different techniques of shading, including cross-hatching, where perpendicular lines of varying closeness are drawn in a grid pattern to shade an object. The closer together the lines are, the darker the area appears; likewise, the farther apart the lines are, the lighter the area appears.
Fig.3.7. Shading
The image shown in figure 3.8 has the faces of the box rendered, but all in the same color. Edge lines have been rendered here as well, which makes the image easier to interpret.
The image shown in figure 3.9 is the same model rendered without edge lines. It is difficult to tell where one face of the box ends and the next begins.
Fig.3.10. Image with Shading
The image shown in figure 3.10 has shading enabled, which makes the image more realistic and makes it easier to see which face is which.
3.5.1. Shading techniques:
In computer graphics, shading refers to the process of altering the color of an object in the 3D scene to create a photorealistic effect, based on its angle to lights and its distance from lights. Shading is performed during the rendering process by a program called a ‘shader’. Flat shading and smooth shading are the two major techniques used in computer graphics.
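Both techniques rest on the same diffuse lighting term. The sketch below (an illustration, not from the notes) computes the Lambertian intensity used by flat shading: the cosine of the angle between the face normal and the light direction, clamped at zero. Flat shading evaluates it once per face; smooth (Gouraud) shading evaluates it per vertex and interpolates across the face.

```python
import math

def lambert_intensity(normal, light_dir):
    """Diffuse (Lambertian) intensity used by flat shading: the cosine
    of the angle between the face normal and the light direction,
    clamped at zero for faces turned away from the light."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    n_len = math.sqrt(nx*nx + ny*ny + nz*nz)
    l_len = math.sqrt(lx*lx + ly*ly + lz*lz)
    cos_theta = (nx*lx + ny*ly + nz*lz) / (n_len * l_len)
    return max(0.0, cos_theta)

# A face pointing straight at the light is fully lit; at 90 degrees it
# receives no diffuse light; facing away it is clamped to 0.
print(lambert_intensity((0, 0, 1), (0, 0, 1)))   # 1.0
print(lambert_intensity((1, 0, 0), (0, 0, 1)))   # 0.0
print(lambert_intensity((0, 0, -1), (0, 0, 1)))  # 0.0
```

Because flat shading uses one normal per face, the intensity jumps at face boundaries, which is exactly the faceted look of figure 3.10 compared with a smoothly shaded model.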
UNIT IV ASSEMBLY OF PARTS
Assembly of parts
4.1. Introduction
In today’s global situation, two main things are significant for industry: cost reduction and environmental protection. Since the late 1970s it has been established that the assembly process typically represents about one third of the product cost. Hence, it is essential to design appropriate plans for parts assembly (manufacturing) and disassembly (recycling).
A realistic assembly procedure can increase efficiency, reduce cost and improve the recyclability of a product. To address these problems, various simulations based on digital mock-ups of products are required. Even though modeling and analysis software, presently applied at various stages of the Product Development Process, can offer answers to several of the above needs, a dedicated combined assembly and disassembly simulation stage is still needed.
To obtain an optimum assembly method, various complex software tools for assembly analysis, as well as simulation programs based on multi-agent methods or on contact data between assembly components, have been created. Recently, Virtual Reality (VR) has developed broadly towards realistic assembly simulation.
Since contact between objects is at the basis of assembly simulations of 3D object shapes, contact detection is addressed here as the first step in the assembly simulation process. The corresponding procedure establishes links between shapes, contact mock-ups and component kinematics, which yields a basic set of meaningful data.
All mechanical parts are modeled using one of the common CAD modelers. Thus, the existing assembly modules of 3D CAD software, and their specific approach to modeling assemblies, have a strong influence on how products are designed. Also, for realistic simulation, data exchange from CAD to Virtual Reality is one of the significant problems presently faced by the virtual prototyping community.
Working within the framework of an assembly is made easier by allowing more commands to be applied to other parts and sub-assemblies. These include the Annotation Text, Inquire, Point, Datum Plane and Pattern Component commands. Large-assembly performance is improved by eliminating unnecessary redraws and by improved display management while zooming.
Assembly models carry more data than simply the sum of their components. Assembly modeling provides interference checking between parts and assembly-specific data such as mass properties.
Bottom up Hierarchy:
The ‘bottom up’ assembly design hierarchy of the basic assembly is shown in figure 4.2. All the parts exist prior to Part1. When Part1 is created, it becomes the active part. The menu sequence is then used to add Bracket, which becomes the active part.
In the example shown in figure 4.2, ‘Bracket’ is a child of Part1. The dashed line indicates that ‘Bracket’ exists in the 3D file Parts Z3. The dotted line indicates that ‘Bracket’ is inserted into Part1. After Bracket is added, Part1 is reactivated. Bolt and Washer are then added by the same process, and Part1 is reactivated again.
Fig.4.2. Bottom up Design – Part 1
The Module subassembly is added in the same way as ‘Bracket’, ‘Bolt’ and ‘Washer’, again becoming a child of Part1. However, because the Module subassembly already contains the two items Seal and Module, these are added along with it and remain its children.
If File-1 is removed from the active assembly before it is saved, Part1 and its children
are removed.
The original parts placed in the file Parts Z3 are not changed.
If File-1 is saved, Part1 is also saved.
If File-1 is erased, Part1 is also erased.
Bracket is a child of Part1. The dashed line illustrates that, by default, when Bracket is
generated it is attached to File-1. The dotted line illustrates that Bracket is inserted into Part1. When
Bracket is completed, Part1 is reactivated. Bolt and Washer are then generated by the same process,
and Part1 is reactivated again.
The Module subassembly is generated like Bracket, Bolt and Washer, again becoming a child of
Part1. However, the Module subassembly remains active when Seal is created. Seal becomes the active part
and, by default, also exists in File-1, but it is inserted into the Module subassembly because that was active at the
time Seal was created. The Module subassembly is then reactivated, and Module is generated in the same way as
Seal.
In automated assembly systems, most parts are assembled along the principal axes. Hence,
to find interference between parts during assembly, the proposed technique considers six assembly
directions along the principal assembly axes: +x, -x, +y, -y, +z and -z. The method could, however, be
extended to consider other assembly directions as required. The proposed system projects part
coordinates onto the planes of the three principal axes (x, y, z) to find obstructions between parts sliding along
any of the six principal assembly directions. An overlap between the projections of any two parts in a specified
axis direction indicates a potential interference between the two parts when one of the two parts slides
along the specified direction with respect to the other. Vertex coordinates for overlapped
projections are then evaluated to find whether real collisions would occur between parts with overlapped
projections. The proposed process stores the determined interference data for each assembly direction
in a group of interference-free matrices, for compatibility with previous assembly planners.
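The projection-overlap test above can be sketched for parts approximated by axis-aligned bounding boxes (a simplifying assumption; the notes evaluate actual vertex coordinates). The function names are illustrative, not from the notes:

```python
def intervals_overlap(a_min, a_max, b_min, b_max):
    """Strict overlap test for two 1-D intervals."""
    return a_min < b_max and b_min < a_max

def may_collide(box_a, box_b, axis, direction):
    """box = ((xmin, ymin, zmin), (xmax, ymax, zmax)); axis in 0..2; direction +/-1.
    True if sliding box_a along the given axis direction can hit box_b:
    their projections onto the plane perpendicular to the axis overlap,
    and box_b lies ahead of box_a along the motion direction."""
    lo_a, hi_a = box_a
    lo_b, hi_b = box_b
    for k in range(3):
        if k == axis:
            continue
        if not intervals_overlap(lo_a[k], hi_a[k], lo_b[k], hi_b[k]):
            return False          # projections do not overlap: no interference
    if direction > 0:
        return hi_b[axis] > lo_a[axis]   # box_b ahead of (or overlapping) box_a
    return lo_b[axis] < hi_a[axis]

def interference_free_matrix(boxes, axis, direction):
    """Entry [i][j] is True when part i can slide along the direction
    without hitting part j, mirroring the interference-free matrices above."""
    n = len(boxes)
    return [[(i != j) and not may_collide(boxes[i], boxes[j], axis, direction)
             for j in range(n)] for i in range(n)]
```

A matrix like this would be computed once per assembly direction and reused by the assembly planner.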
The swept-volume interference and multiple interference detection systems are appropriate
for three-dimensional interference determination between B-REP entities. However, both techniques were
developed for real-time interference detection between two moving parts in a simulation environment.
As a result, these two techniques are computationally expensive. For the assembly planning problem,
actual collision detection along arbitrary relative motion vectors is not required. Instead, an
efficient computational technique is needed to determine whether two parts will collide when they are
assembled in a specified order along any one of the six principal assembly axes.
Tolerance analysis is the name given to a number of approaches applied in product design to understand how
imperfections in parts as they are manufactured, and in assemblies, influence the ability of a product to
meet customer needs. Tolerance analysis is a way of understanding how sources of variation in part dimensions
and assembly constraints propagate across parts and assemblies, and how that total variation
affects the ability of a design to meet its requirements within the process capabilities of
organizations and supply chains.
Tolerance directly affects the cost and performance of products. In electrical machines, safety
requires that the power supply be located a minimum distance from adjacent components, such as another
sheet-metal component, in order to prevent electrical short circuits. Tolerance analysis will determine
whether the small clearances specified will meet the safety requirement, given the manufacturing and
assembly variability imposed on the minimum clearance.
In the example, the actual lengths ‘Li’ may vary from the nominal lengths ‘λi’ by a small
value. If there is too much variation in the ‘Li’, there may well be significant problems in achieving G > 0.
Thus it is sensible to limit these variations via tolerances. The tolerances ‘Ti’ represent an ‘upper
limit’ on the absolute deviation between the actual and nominal values of the i-th detail part dimension, that
is, |Li − λi| ≤ Ti. It is mostly in the interpretation of this last inequality that the different methods
of tolerance stacking differ.
The nominal value ‘γ’ of G is typically computed by replacing, in the equation G = L1 − L2 − L3 − L4 −
L5 − L6, the actual values Li by the corresponding nominal values λi, that is, γ = λ1 − λ2 − λ3 −
λ4 − λ5 − λ6.
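For the example gap equation, the nominal gap and the worst-case (arithmetic) assembly tolerance can be sketched as follows; the names are illustrative and the sign convention is taken from G = L1 − L2 − L3 − L4 − L5 − L6:

```python
# Sign convention assumed from the example: G = L1 - L2 - L3 - L4 - L5 - L6
SIGNS = [+1, -1, -1, -1, -1, -1]

def nominal_gap(nominals):
    """gamma: the gap computed from the nominal lengths lambda_i."""
    return sum(s * lam for s, lam in zip(SIGNS, nominals))

def worst_case_stack(tolerances):
    """Arithmetic (worst-case) assembly tolerance T_assy = T1 + ... + Tn;
    guarantees |G - gamma| <= T_assy whenever |Li - lambda_i| <= Ti."""
    return sum(tolerances)
```

With hypothetical nominals λ = (10, 2, 2, 2, 2, 1) and Ti = 0.1 for every part, γ = 1 and the worst-case stack is 0.6, so G is guaranteed to lie between 0.4 and 1.6.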
4.5.2. Statistical method for tolerance analysis (RSS):
In the RSS method of tolerance stacking, a significant new element is added to the assumptions,
namely that the detail deviations from nominal are random and independent from part to part. The
worst-case method, by contrast, is expensive in the sense that it frequently demands very tight detail tolerances: that all deviations from
nominal should arrange themselves in worst-case fashion to yield the maximum assembly tolerance is a
relatively unlikely proposition. On the other hand, it has the advantage of guaranteeing the resulting
assembly tolerance. Statistical tolerancing in its typical form operates under two basic hypotheses:
Under the Centered Normal Distribution hypothesis, rather than considering that the ‘Li’ can occur anywhere
within the tolerance interval [λi − Ti, λi + Ti], assume that the ‘Li’ are normal random variables, that
is, they vary randomly according to a normal distribution, centered on that same interval and with a ±3σ
spread equal to the span of the interval, so that 99.73% of all ‘Li’ values occur within it. The shape of
the normal distribution is such that the ‘Li’ fall with higher frequency in the middle, near ‘λi’, and with
lower frequency near the interval endpoints. Matching the ±3σ spread with the span of the detail
tolerance is intended to ensure that almost all parts satisfy the detail tolerance limits, as shown
in figure 4.8.
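The 99.73% figure follows from the normal distribution itself and can be checked numerically; a small sketch (the function name is illustrative):

```python
from math import erf, sqrt

def normal_coverage(k):
    """Fraction of a normal population lying within +/- k sigma of the mean."""
    return erf(k / sqrt(2))
```

Evaluating normal_coverage(3) gives approximately 0.9973, matching the 99.73% quoted above.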
Typically Tstat assy is considerably smaller than Tarith assy. For n = 3, the size of this difference is
easily visualized by a rectangular box with side lengths T1, T2 and T3. To get from one
corner of the box to the diagonally opposite corner, one can either cross the distance √(T1² + T2² + T3²) along the
diagonal, or follow the three edges with lengths T1, T2 and T3 for a total length Tarith assy = T1 + T2 + T3.
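The RSS assembly tolerance described above can be sketched as follows; the name is illustrative:

```python
from math import sqrt

def rss_stack(tolerances):
    """Statistical (RSS) assembly tolerance: sqrt(T1^2 + ... + Tn^2)."""
    return sqrt(sum(t * t for t in tolerances))
```

For three equal tolerances of 0.3 the RSS stack is about 0.52, noticeably smaller than the arithmetic total of 0.9: the diagonal of the box versus the path along its three edges.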
Second-order tolerance analysis is required to find the output when the
assembly function is not linear. In typical mechanical engineering applications, kinematic adjustments
and other assembly behaviours result in non-linear assembly functions. Second-order approximations are
more complex, so manual calculations are not practical, but the computation is greatly simplified and
becomes feasible within tolerance analysis software.
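When the assembly function is non-linear, a Monte Carlo simulation is a common numerical alternative to a second-order expansion. This is a sketch under the ±3σ normal assumption of section 4.5.2; the diagonal-clearance assembly function is a hypothetical illustration, not from the notes:

```python
import random
from math import sqrt

def monte_carlo_stack(assembly_fn, nominals, tolerances, n=20000, seed=1):
    """Estimate the mean and +/-3 sigma spread of a (possibly non-linear)
    assembly function, drawing each dimension from a centered normal
    distribution whose 3-sigma spread equals its tolerance."""
    rng = random.Random(seed)
    samples = [assembly_fn([rng.gauss(nom, tol / 3.0)
                            for nom, tol in zip(nominals, tolerances)])
               for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, 3.0 * sqrt(var)

# Hypothetical non-linear assembly function: the diagonal of a right triangle
diagonal = lambda dims: sqrt(dims[0] ** 2 + dims[1] ** 2)
```

For nominal legs of 3.0 and 4.0 with tolerances of 0.06 each, the simulated mean is close to 5.0 and the ±3σ spread close to 0.06, which a linear stack-up would not predict exactly for a non-linear function.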
As shown in figure 4.10, the axes do not form a good reference, since a small error in the
squareness of the base of the cylinder causes the object to tilt away from the vertical axis.
An axis should always pass through a surface that is firmly linked with the bulk of the component.
As shown in figure 4.11, it would be best to position the origin (Z = 0) at the end of the component
rather than at the fitting that is loosely dimensioned relative to the end.
4.6.1. Calculating Center of gravity location
The center of gravity of an object is:
also described as the ‘center of mass’ of the object.
the location where the object would balance.
the single point where the static balance moments about three mutually
perpendicular axes are all zero.
the centroid of the volume of the object, when the object is homogeneous.
the point where the total mass of the component can be considered to be concentrated for
static calculations.
the point about which the component rotates in free space.
the point through which the gravity force can be considered to act.
the point at which an external force must be applied to produce pure translation of an object in space.
The center of gravity location is stated in units of length along the three axes (X, Y and Z). These
are the three components of the vector distance from the origin of the coordinate system to the center of gravity
location. The CG of composite masses is computed from moments taken about the origin. The fundamental
dimensions of moment are force times distance. Alternatively, mass moment may be used, with any
units of mass times distance. For homogeneous components, volume moments may also be used.
Care should be taken to ensure that the moments for all parts are expressed in compatible units.
Component distances for the CG position may be either positive or negative, and in fact their
sign depends on the position of the reference axes. The CG of a homogeneous component is found by
determining the centroid of its volume. In practice, the majority of components are not homogeneous,
so the CG must be calculated by summing the moments along each of the three axes.
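The composite-CG calculation from mass moments can be sketched as follows (an illustrative helper, assuming consistent units of mass and length):

```python
def composite_cg(parts):
    """CG of a composite body from (mass, (x, y, z)) pairs:
    each CG coordinate is the mass moment about the origin
    divided by the total mass."""
    total_mass = sum(m for m, _ in parts)
    return tuple(sum(m * xyz[k] for m, xyz in parts) / total_mass
                 for k in range(3))
```

For example, two equal masses at x = 0 and x = 2 balance at x = 1; signed coordinates make the moment signs come out automatically, as noted above.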
Standards for computer graphics - Graphical Kernel System (GKS) - standards for
exchange of images - Open Graphics Library (OpenGL) - Data exchange standards - IGES, STEP,
CALS etc. - communication standards.
CAD Standards
5.1. Introduction
The purpose of a CAD standard is that the CAD software should be device-independent,
connecting to any input device through a device driver and to any graphics display through a device driver.
The graphics system is divided into two parts: the kernel system, which is hardware-independent,
and the device driver, which is hardware-dependent. The kernel system acts as a buffer between the application
program and the device drivers, ensuring device independence and portability of the program. At interface ‘X’, the
application program calls the standard functions and subroutines provided by the kernel system through what are
called language bindings. These functions and subroutines call the device driver functions and subroutines at
interface ‘Y’ to complete the task required by the application program (Fig.5.1.).
GKS arranges its functionality into twelve functional levels, based on the complexity of the
graphical input and output. There are four levels of output (m, 0, 1, 2) and three levels of input (A, B,
C). NCAR GKS has a complete implementation of the GKS C bindings at level 0A.
GKS is based on a number of elements that may be drawn in an object, known as graphical
primitives. The fundamental set of primitives has the names POLYLINE, POLYMARKER,
FILL AREA, TEXT and CELL ARRAY, although some implementations extend this basic set.
i) POLYLINES
The GKS function for drawing line segments is called ‘POLYLINE’. The ‘POLYLINE’
command takes an array of X-Y coordinates and creates line segments joining them. The attributes that
control the appearance of a ‘POLYLINE’ are (Fig.5.3):
Line type : solid, dashed or dotted.
Line width scale factor : thickness of the line.
Polyline color index : color of the line.
ii) POLYMARKERS
The GKS ‘POLYMARKER’ function allows marker symbols to be drawn centered at coordinate
points. The attributes that control the appearance of ‘POLYMARKERS’ are (Fig.5.4.):
Marker characters : dot, plus, asterisk, circle or cross.
Marker size scale factor : size of marker
Polymarker color index : color of the marker.
iii) FILLAREA
The GKS ‘FILL AREA’ function allows a polygonal area to be specified and filled
with various interior styles. The attributes that control the appearance of fill areas are (Fig.5.5.):
FILL AREA interior style : solid colors, hatch patterns.
FILL AREA style index : horizontal lines; vertical lines; left slant lines;
right slant lines; horizontal and vertical lines; or left
slant and right slant lines.
Fill area color index : color of the fill patterns / solid areas.
iv) TEXT
The GKS TEXT function allows a text string to be drawn at a specified coordinate position. The attributes
that control the appearance of text are:
Text font and precision : the font used for the characters and the precision of its rendering.
Character expansion factor : height-to-width ratio of each character.
Character spacing : additional white space to be inserted between characters.
Text color index : color of the text string.
Character height : size of the characters.
Character up vector : angle at which the text is drawn.
Text path : direction in which the text is written (right, left, up or down).
Text alignment : vertical and horizontal centering options for the text string.
Fig.5.6. GKS TEXT
v) CELL ARRAY
The GKS CELL ARRAY function displays raster-like images in a device-independent manner.
The CELL ARRAY function takes the two corner points of a rectangle, a number of
partitions (M) in the X direction and a number of partitions (N) in the Y direction. It then partitions the
rectangle into M x N sub-rectangles known as cells.
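The M x N partitioning performed by CELL ARRAY can be sketched as follows; this models only the geometry of the cells, not GKS itself, and the names are illustrative:

```python
def cell_array(p, q, m, n):
    """Partition the rectangle with corner points p and q into m x n cells,
    returning the corner points of cell (i, j) for every cell."""
    (x0, y0), (x1, y1) = p, q
    dx, dy = (x1 - x0) / m, (y1 - y0) / n
    def cell(i, j):
        return ((x0 + i * dx, y0 + j * dy),
                (x0 + (i + 1) * dx, y0 + (j + 1) * dy))
    return [[cell(i, j) for j in range(n)] for i in range(m)]
```

In GKS each such cell would then be filled with the color taken from the corresponding entry of the color index array.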
A graphics standard proposed for interactive three-dimensional applications should satisfy
several criteria. It should be implementable on platforms with varying graphics capabilities without
sacrificing the graphics quality of the underlying hardware and without compromising control over the
hardware's functions. It must offer a natural interface that permits a programmer to describe rendering
processes tersely.
Finally, the interface should be flexible enough to accommodate extensions, so that as new
graphics operations become significant, these operations can be provided without sacrificing the original
interface. OpenGL meets these criteria by giving a simple, direct interface to the fundamental operations of 3D
graphics rendering. It supports basic graphics primitives, basic rendering operations and lighting
calculations. It also supports advanced rendering features such as texture mapping.
Figure 5.7 shows a schematic diagram of OpenGL. Commands enter OpenGL on the left. Most
commands may be collected in a ‘display list’ for execution at a later time. Otherwise, commands
are sent through a pipeline for processing.
The first stage provides an efficient means for approximating curve and surface geometry by
evaluating polynomial functions of input data. The next stage operates on geometric primitives described
by vertices. In this stage vertices are transformed, and primitives are clipped to a viewing volume in
preparation for the next stage.
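The vertex half of the pipeline (transform, perspective divide, viewport mapping) can be sketched in software; this is an illustration of the operations described above, not the OpenGL API:

```python
def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component vertex."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def to_window(clip, width, height):
    """Perspective divide followed by a viewport map to window coordinates."""
    x, y, z, w = clip
    nx, ny = x / w, y / w                  # normalized device coordinates
    return ((nx + 1.0) * width / 2.0,      # viewport transform
            (ny + 1.0) * height / 2.0)

# Identity modelview-projection matrix for illustration
IDENTITY = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
```

With an identity transform, the clip-space origin maps to the center of a 640 x 480 window, i.e. (320, 240).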
Rasterization converts each clipped primitive into ‘fragments’. Each fragment produced is supplied to the next stage, which performs operations on individual
fragments before they finally alter the framebuffer. These operations include conditional updates to
the framebuffer based on incoming and previously stored depth values, blending of incoming
colors with stored colors, as well as masking and other logical operations on fragment values.
Finally, pixel rectangles and bitmaps bypass the vertex processing part of the pipeline to
send a group of fragments directly to the individual fragment operations, eventually causing a block of
pixels to be written to the frame buffer. Values can also be read back from the frame buffer or copied
from one part of the frame buffer to another. These transfers may include some type of encoding or
decoding.
i) Based on IRIS GL
OpenGL is based on Silicon Graphics’ Integrated Raster Imaging System Graphics Library
(IRIS GL). Though it would have been possible to design a completely new Application
Programmer’s Interface (API), experience with IRIS GL provided insight into what programmers want and
don’t want in a three-dimensional graphics API. Further, making OpenGL similar to IRIS GL
where feasible makes OpenGL much more likely to be accepted; there
are many successful IRIS GL applications, and programmers of IRIS GL will have an easy time
switching to OpenGL.
ii) Low-Level
A primary goal of OpenGL is to provide device independence while still allowing complete access to
hardware. Therefore the API provides access to graphics operations at the lowest level that still gives
device independence. Hence, OpenGL does not provide a means for modeling complex geometric
objects.
iv) Modal
A modal Application Programmer’s Interface leads to problems in implementations in which processes
operate in parallel on different primitives. In such cases, a mode change must be broadcast to all
processors so that each receives the new parameters before it processes its next primitive. A mode
change is thus processed serially, stopping primitive processing until all processors have received
the change, and reducing performance accordingly.
v) Frame buffer
Most of OpenGL requires that the graphics hardware contain a frame buffer. This is a reasonable
requirement, since almost all interactive graphics applications run on systems with frame buffers. Some operations in
OpenGL are achieved only through exposing their implementation using a frame buffer. While OpenGL may
be used to provide information for driving such devices as vector displays, such use is secondary.