
STUDY OF CONSOLIDATION PARAMETERS:

PHARMACOKINETICS
What is Pharmacokinetics?
Pharmacokinetics is defined as the kinetics of drug absorption,
distribution, metabolism and excretion (ADME) and their
relationship with the pharmacological, therapeutic or
toxicological response in man and animals.
The term pharmaco comes from the Greek word for “drug”, pharmakon, and kinetics comes from the Greek word for “moving”, kinetikos.
Relevant terms:
• Clinical Pharmacokinetics is defined as the application of
pharmacokinetic principles in the safe and effective
management of the individual patient.
• Population Pharmacokinetics is defined as the study of
pharmacokinetic differences of drugs in various population
groups.
• Toxicokinetics is defined as the application of
pharmacokinetic principles to the design, conduct and
interpretation of drug safety evaluation studies.
Plasma Drug Concentration-Time Profile:

Pharmacokinetic Parameters:
The three important pharmacokinetic parameters that describe the plasma level-time curve and are useful in assessing the bioavailability of a drug from its formulation are:
1. Peak Plasma Concentration (Cmax)
2. Time of Peak Concentration (tmax)
3. Area Under the Curve (AUC)
Peak Plasma Concentration (Cmax):
The point of maximum concentration of drug in plasma is called the peak, and the concentration of drug at the peak is known as the peak plasma concentration.
It is also called the peak height concentration or maximum drug concentration.
Cmax is expressed in mcg/ml.
The peak plasma level depends upon:
• Dose administered
• Rate of absorption
• Rate of elimination.
The peak represents the point in time when the absorption rate equals the elimination rate of the drug.
The portion of the curve to the left of the peak represents the absorption phase, i.e. when the rate of absorption is greater than the rate of elimination.
The section of the curve to the right of the peak generally represents the elimination phase, i.e. when the rate of elimination exceeds the rate of absorption.
Peak concentration is often related to the intensity of
pharmacological response and should ideally be above
minimum effective concentration (MEC) but less than the
maximum safe concentration (MSC).
Time of Peak Concentration (tmax):
The time taken for a drug to reach its peak concentration in plasma (after extravascular administration) is called the time of peak concentration.
It is expressed in hours and is useful in estimating the rate of
absorption.
Onset time and onset of action are dependent upon tmax.
This parameter is of particular importance in assessing the
efficacy of drugs used to treat acute conditions like pain and
insomnia which can be treated by a single dose.
Area Under the Curve (AUC):
It represents the total integrated area under the plasma level-time profile and expresses the total amount of drug that reaches the systemic circulation after its administration.
AUC is expressed in mcg/ml × hours.
It is the most important parameter in evaluating the
bioavailability of a drug from its dosage form as it represents
the extent of absorption.
AUC is also important for drugs that are administered
repetitively for the treatment of chronic conditions like
asthma or epilepsy.
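Given a tabulated concentration-time profile, all three parameters can be estimated directly. The sketch below uses hypothetical sampling data and applies the trapezoidal rule by hand for the AUC (NumPy is assumed to be available):

```python
import numpy as np

# Hypothetical plasma concentration-time data after an oral dose
t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 12.0])  # time (hours)
c = np.array([0.0, 1.2, 2.1, 3.4, 2.8, 1.9, 1.1, 0.4])   # concentration (mcg/ml)

cmax = c.max()        # peak plasma concentration, Cmax (mcg/ml)
tmax = t[c.argmax()]  # time of peak concentration, tmax (hours)

# AUC by the trapezoidal rule: sum of trapezoid areas between sampling points
auc = np.sum((c[1:] + c[:-1]) / 2 * np.diff(t))  # mcg/ml x hours

print(f"Cmax = {cmax} mcg/ml, tmax = {tmax} h, AUC = {auc} mcg/ml x h")
```

Note that this estimates AUC only up to the last sampling time; extrapolation to infinity requires an estimate of the terminal elimination rate constant.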

Software used in the estimation of Pharmacokinetics:


1. In silico pharmacokinetic tools such as Windows-based Non-linear model fitting (WinNonlin)
2. Statistical Analysis Software (SAS)
3. Non-linear Mixed Effects Modelling (NONMEM)
4. PK Solution
5. SimBiology:
a. SimBiology pharmacokinetics software allows you to create a PK model by specifying your desired model options in the PK model wizard. Model options include the number of compartments, dosing type, and elimination route. Alternatively, you can create a PK or PD model using the graphical, tabular, or programmatic interfaces.
6. The R Foundation for Statistical Computing
7. Kinetica
8. GastroPlus
Tests used for the measurement of pharmacokinetic parameters:
1. ANOVA
2. Student t test
3. Chi Square test
Analysis of Variance (ANOVA)
The analysis of variance (ANOVA) was developed by R.A. Fisher in the 1920s.
ANOVA makes it possible to study the significance of the differences among the mean values of a large number of samples at the same time.
ANOVA is classified in two ways:
• One-way classification: the effect of only one variable is taken into account.
• Two-way classification: the effect of two variables can be studied.
Principle of ANOVA:
The underlying principle of ANOVA is to compare the means of different populations by studying the amount of variation within the samples relative to the amount of variation between the samples.
Classification of ANOVA:
1. One-way ANOVA
2. Two-way ANOVA
One-way ANOVA:
In one-way ANOVA, the variance of a continuous dependent variable is analysed with respect to a single classification (independent) variable. One-way analysis of variance (ANOVA) tests the equality of population means when there is classification by only one variable.
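As a minimal sketch, a one-way ANOVA can be run with SciPy's `f_oneway`. The AUC values below are hypothetical, standing in for one drug tested as three formulations:

```python
from scipy import stats

# Hypothetical AUC values (mcg/ml x h) for one drug given as three formulations
formulation_a = [18.2, 20.1, 19.5, 21.0, 18.8]
formulation_b = [22.4, 23.1, 21.8, 24.0, 22.9]
formulation_c = [19.0, 18.5, 20.2, 19.8, 18.9]

# f_oneway tests the equality of the group means (one classification variable)
f_stat, p_value = stats.f_oneway(formulation_a, formulation_b, formulation_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value indicates that at least one formulation mean differs from the others; ANOVA alone does not say which one.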
Two-way ANOVA:
1. ANOVA technique in the context of a two-way design when repeated values are not there:
When repeated values are absent, researchers cannot compute the sum of squares within the samples directly. The residual (error) variation is therefore calculated by subtracting the sum of squares between one classification (e.g. treatments) and the sum of squares between the other classification from the total sum of squares.

2. ANOVA technique in the context of a two-way design when repeated values are there:
When there are repeated values, the researcher obtains a separate estimate of the individual (error) variation. The degrees of freedom and sums of squares for the variance within the samples are calculated as in one-way ANOVA. The sums of squares between the columns and between the rows can be stated as above. The researcher then finds the 'left-over sum of squares' and the 'left-over degrees of freedom', which are known as the 'interaction variation' (interaction variation is the value of the relationship present between the two classifications). After all the calculations have been done, an ANOVA table can be set up for drawing the various inferences.
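The first case (a two-way design without repeated values) can be sketched numerically: the residual sum of squares is obtained by subtraction, exactly as described. The data below are hypothetical (3 treatments in rows, 4 blocks in columns, one observation per cell); NumPy and SciPy are assumed to be available:

```python
import numpy as np
from scipy import stats

# Hypothetical two-way design without repeated values
data = np.array([
    [6.0, 7.0, 3.0, 8.0],
    [5.0, 5.0, 3.0, 7.0],
    [5.0, 4.0, 3.0, 4.0],
])
r, c = data.shape
grand_mean = data.mean()

# Sums of squares between rows (treatments) and between columns (blocks)
ss_rows = c * ((data.mean(axis=1) - grand_mean) ** 2).sum()
ss_cols = r * ((data.mean(axis=0) - grand_mean) ** 2).sum()
ss_total = ((data - grand_mean) ** 2).sum()

# Without replication, the within-sample SS cannot be computed directly,
# so the residual (error) SS is found by subtraction
ss_error = ss_total - ss_rows - ss_cols

df_rows, df_cols = r - 1, c - 1
df_error = df_rows * df_cols

# F statistic and p-value for the row (treatment) factor
f_rows = (ss_rows / df_rows) / (ss_error / df_error)
p_rows = stats.f.sf(f_rows, df_rows, df_error)
print(f"F = {f_rows:.2f}, p = {p_rows:.3f}")
```

The same F-ratio could be formed for the column factor using `ss_cols` and `df_cols`.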
Student t test:
A particular frequency distribution is obtained when a very large number of small samples is taken from a population and the means of these samples are used to plot the frequency distribution. It is called Student's t-distribution.
William Sealy Gosset and R. A. Fisher are considered great contributors to the theory of small samples. Gosset published his findings in 1908 under the pen name 'Student', and this theory is therefore commonly known as 'Student's distribution', the 't-test' or 'Student's t-distribution'.
The t-distribution is used in situations when the sample size is 30 or less and no information is available about the population standard deviation.
Properties of t-Distribution:
1. The t-statistic can take any value from -∞ to +∞.
2. It can also be used for large samples.
3. Similar to the normal curve, the mean of the t-distribution is zero.
4. Similar to the normal distribution, the t-distribution has a symmetrical, bell-shaped frequency curve. The degrees of freedom are the only parameter determining the shape of the curve, and the shape changes as the degrees of freedom change.
5. The variance of the t-distribution is greater than 1 and approaches unity as the sample size increases; the t-distribution therefore becomes the normal distribution when the sample size is very large.

Assumptions for student’s t-Test:


1. The parent population from which the sample is drawn
is normal.
2. The sample observations are random, i.e. the given sample is drawn by a random sampling method.
3. The population standard deviation is not known.

Application of T-Distribution:
1. For testing the significance of the difference between two sample means.
2. For testing the significance of a sample regression coefficient and an observed sample correlation coefficient.
3. For testing the significance of multiple correlation coefficients and observed partial correlation coefficients.

Graph of ‘t’ Distribution:


It is clear that when the degrees of freedom change, the shape of the t-distribution also changes.
The diagram represents two important characteristics of the t-distribution, which are as follows:
1. A t-distribution is lower at the mean and higher at the tails than a normal distribution.
2. The t-distribution has proportionally greater area in its tails than the normal distribution.
Types of T-Test:

1. One-sample t-Test
2. Pooled T-Test / Unpaired t-Test
3. Paired t-Test

One-sample t-Test:
If the variance of a normal population is unknown and one wants to determine whether the mean of a sample drawn from that population deviates significantly from a stated value, the following statistic is used
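A minimal sketch of this test with SciPy's `ttest_1samp`, using hypothetical tablet potency measurements compared against a labeled value of 100 mg:

```python
from scipy import stats

# Hypothetical measured potencies (mg) of tablets labeled as 100 mg
sample = [98.2, 101.5, 99.8, 97.9, 100.4, 98.8, 99.1, 100.9]

# One-sample t-test of the sample mean against the stated value
t_stat, p_value = stats.ttest_1samp(sample, popmean=100.0)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

`ttest_1samp` estimates the population standard deviation from the sample itself, which is exactly the situation the t-test is designed for.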

Pooled t-Test/ Unpaired t-test:


The unpaired t-test is a statistical technique that compares the means of 2 independent or unrelated groups to find whether there is a significant difference between the two.
It is also known as the independent t-test.
If two independent samples of sizes n1 and n2, with means X1 and X2 and standard deviations S1 and S2, are given, one can test the hypothesis that the samples come from the same normal population.
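A sketch of the unpaired test with SciPy's `ttest_ind`, using hypothetical Cmax values from two independent groups of subjects (by default `ttest_ind` pools the variances, i.e. it assumes equal population variances):

```python
from scipy import stats

# Hypothetical Cmax values (mcg/ml) from two independent subject groups
test_drug = [3.1, 2.8, 3.4, 3.0, 2.9, 3.3]
reference = [2.6, 2.4, 2.9, 2.5, 2.7, 2.8]

# Pooled (unpaired) t-test: assumes the two groups are unrelated
t_stat, p_value = stats.ttest_ind(test_drug, reference)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

Passing `equal_var=False` would give Welch's t-test instead, which drops the equal-variance assumption.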

Paired t-test:
The paired t-test is a statistical test that compares the means and standard deviations of 2 related groups to find whether there is a significant difference between them. It is also known as the dependent or correlated t-test.
A difference is significant when the difference between the two groups is unlikely to be due to sampling error. A group can be related to a similar group of persons, or be the same items measured twice.
A paired t-test is often preferable to an unpaired t-test because using the same participants or items eliminates variation between the samples that could be caused by anything other than the effect being studied.
This test is used to study the effect of some action or process.
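A sketch of the paired test with SciPy's `ttest_rel`, using hypothetical blood pressure readings for the same six subjects before and after a treatment:

```python
from scipy import stats

# Hypothetical systolic blood pressure (mmHg) for the SAME subjects,
# measured before and after treatment (hence the samples are paired)
before = [142, 138, 150, 145, 139, 148]
after = [135, 136, 141, 138, 137, 140]

# Paired t-test: operates on the per-subject differences
t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```

Because each subject serves as their own control, between-subject variation is removed, which is why the paired test is often more sensitive than the unpaired one.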
Chi-square test:
In general, the test used to measure the difference between what is observed and what is expected according to an assumed hypothesis is called the chi-square test.
The chi-square test is a non-parametric test, not based on any assumption about the distribution of any variable.
This statistical test follows a specific distribution known as the chi-square distribution.
It was developed by Karl Pearson in 1900.
It is generally denoted as χ².

It is a mathematical expression representing the relationship between the experimentally obtained result (O) and the theoretically expected result (E) based on a certain hypothesis.
It uses data in the form of frequencies.
Chi-square is calculated by dividing the square of the deviation between the observed and expected frequencies by the expected frequency, summed over all classes:
χ² = Σ (O − E)² / E

If there is no difference between the observed and expected frequencies, the value of chi-square is zero.
If there is a difference between the observed and expected frequencies, the value of chi-square is greater than zero.
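The formula above can be sketched with SciPy's `chisquare` goodness-of-fit test; the observed counts below are hypothetical (note that the observed and expected totals must match):

```python
from scipy import stats

# Hypothetical observed counts in 5 categories vs. a uniform expectation
observed = [18, 22, 20, 25, 15]
expected = [20, 20, 20, 20, 20]

# chi2 = sum((O - E)^2 / E); p is taken from the chi-square distribution
chi2, p = stats.chisquare(observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```

Here chi2 = (4 + 4 + 0 + 25 + 25) / 20 = 2.9; a value of zero would mean the observed and expected frequencies agree exactly.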

Characteristics of chi-square test:


1. The test is based on frequencies and not on parameters like the mean and standard deviation.
2. The test is used for testing hypotheses and is not useful for estimation.
3. The test can be applied to complex contingency tables with several classes and as such is a very useful test in research work.

Assumptions of chi-square test:


1. All the observations must be independent. No individual item should be counted twice in the sample.
2. The total number of observations should be large. The chi-square test should not be used if n < 50.
3. If a theoretical (expected) frequency is less than five, it should be pooled with the preceding or succeeding frequency so that the resulting sum is greater than five.

Applications of chi-square test:


1. To test the goodness of fit.
2. To test the independence of attributes.
3. To test the homogeneity of independent estimates of the
population variance.
4. To test the detection of linkage.
