Data Compression MCQ
DATA COMPRESSION
By
Mr. Sandeep Vishwakarma
Assistant Professor
Dr. A.P.J. Abdul Kalam
Technical University, Lucknow
YouTube: www.youtube.com/c/UniversityAcademy
3. What is compression?
(A) To compress something by pressing it very hard
(B) To minimize the time taken for a file to be downloaded
(C) To reduce the size of data to save space
(D) To convert one file to another
Answer
Correct option is C
8. Based on the requirements of reconstruction, data compression schemes can be divided into ____
broad classes.
(A) 3
(B) 4
(C) 2
(D) 5
Answer
Correct option is C
9. _______ compression eliminates data that is not noticeable, whereas _______ compression does not
eliminate such data.
(A) Lossless, lossy
(B) Lossy, lossless
(C) None of these
Answer
Correct option is B
10. ______ compression is generally used for applications that cannot tolerate any difference between
the original and reconstructed data.
(A) Lossy
(B) Lossless
(C) Both
(D) None of these
Answer
Correct option is B
12. Suppose storing an image made up of a square array of 256×256 pixels requires 65,536 bytes. The
image is compressed and the compressed version requires 16,384 bytes. Then compression ratio is
_______.
(A) 1:4
(B) 4:1
(C) 1:2
(D) 2:1
Answer
Correct option is B
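Worked check: 65,536 / 16,384 = 4, so every four bytes of the original are represented by one byte of compressed data, giving a compression ratio of 4:1.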
13. Lossy techniques are generally used for the compression of data that originate as analog signals,
such as
(A) Speech
(B) Video
(C) Both
(D) None of these
Answer
Correct option is C
14. If fidelity or quality of a reconstruction is _____, then the difference between the reconstruction
and the original is ______.
16. Which of the following is true of lossy and lossless compression techniques?
(A) Lossless compression is only used in situations where lossy compression techniques can't be used
(B) Lossy compression is best suited for situations where some loss of detail is tolerable, especially if
it will not be detectable by a human
(C) Both lossy and lossless compression techniques will result in some information being lost from
the original file
(D) Neither lossy nor lossless compression can actually reduce the number of bits needed to represent
a file
Answer
Correct option is B
17. Which of the following would not be suitable for Lossy Compression?
(A) Speech
(B) Video
(C) Text
(D) Image
Answer
Correct option is C
21. According to Claude Elwood Shannon's second theorem, it is not feasible to transmit information
over the channel with ______ error probability, even by using any coding technique.
(A) Large
(B) May be large or small
(C) Unpredictable
(D) Small
Answer
Correct option is D
22. Which of the following is/are essential for a good error control coding technique?
(A) Better error correcting capability
(B) Maximum transfer of information in bits/sec
(C) Faster coding & decoding methods
(D) All of the above
Answer
Correct option is D
26. The set of binary sequences is called a _____, and the individual members of the set are called
_______.
(A) Codewords, code
(B) Code, codewords
(C) None of these
Answer
Correct option is B
28. A composite source model is a combination or composition of several sources. How many of these
sources are active at any given time?
(A) All
(B) Only one
(C) Only first three
(D) None of these
Answer
Correct option is B
29. For models used in lossless compression, we use a specific type of Markov process called a
(A) Continuous time Markov chain
(B) Discrete time Markov chain
(C) Constant time Markov chain
(D) None of the above
Answer
Correct option is B
30. Markov model is often used when developing coding algorithms for
(A) Speech
(B) Image
(C) Both
(D) None of these
Answer
Correct option is C
(A) The details of data compression are subject to change without notice in service packs or
subsequent releases
(B) Compression is not available for system tables
(C) If you specify a list of partitions or a partition that is out of range, an error will be generated
(D) All of the mentioned
Answer
Correct option is D
33. In which type of data compression is the integrity of the data preserved?
(A) Lossy Compression
(B) Lossless Compression
(C) Both of the above
Answer
Correct option is B
36. Into how many broad categories can we divide audio and video services?
(A) Two
(B) Three
(C) Four
(D) None of the above
Answer
Correct option is B
Unit-II
1. Huffman codes are ______ codes and are optimum for a given model (set of probabilities).
(A) Parity
(B) Prefix
(C) Convolutional code
(D) Block code
Answer
Correct option is B
2. The Huffman procedure is based on observations regarding optimum prefix codes, which is/are
(A) In an optimum code, symbols that occur more frequently (have a higher probability of
occurrence) will have shorter codewords than symbols that occur less frequently.
(B) In an optimum code, the two symbols that occur least frequently will have the same length
(C) Both (A) and (B)
(D) None of these
Answer
Correct option is C
4. How many printable characters does the ASCII character set consist of?
(A) 128
(B) 100
(C) 98
(D) 90
Answer
Correct option is B
5. The difference between the entropy and the average length of the Huffman code is called
(A) Rate
(B) Redundancy
(C) Power
(D) None of these
Answer
Correct option is B
6. Unit of redundancy is
(A) bits/second
(B) symbol/bits
(C) bits/symbol
(D) none of these
Answer
Correct option is C
9. How many bits are needed for standard encoding if the size of the character set is X?
(A) X+1
(B) log(X)
(C) X²
(D) 2X
Answer
Correct option is B
13. Running time of the Huffman algorithm, if its implementation of the priority queue is done using
linked lists
(A) O(log(C))
(B) O(Clog(C))
(C) O(C²)
(D) O(C)
Answer
Correct option is C
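Note: with an unsorted linked list, each of the C - 1 merge steps must scan the list to find the two least-frequent nodes, costing O(C) per step and hence O(C²) overall; a binary-heap priority queue brings this down to O(C log C).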
14. The unary code for a positive integer n is simply n ___ followed by a ___.
(A) zeros, one
(B) ones, zero
(C) None of these
Answer
Correct option is B
16. In the Tunstall code, all codewords are of _____ length. However, each codeword represents a
_________ number of letters.
(A) different, equal
(B) equal, different
(C) none of these
Answer
Correct option is B
20. An alphabet consist of the letters A, B, C and D. The probability of occurrence is P(A) = 0.4, P(B)
= 0.1, P(C) = 0.2 and P(D) = 0.3. The Huffman code is
(A) A = 0
B = 111
C = 110
D = 10
(B) A = 0
B = 11
C = 10
D = 111
(C) A = 0
B = 111
C = 11
D = 101
(D) A = 01
B = 111
C = 110
D = 10
Answer
Correct option is A
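For reference, here is a minimal Python sketch of the Huffman procedure applied to the alphabet in question 20 (illustrative code, not from the source; the 0/1 assignment at each merge is arbitrary, so the exact bits may differ from option (A), but the codeword lengths of 1, 3, 3 and 2 bits for A, B, C and D always match):

import heapq

def huffman_code(probs):
    # Each heap entry: (probability, tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # least probable group
        p2, _, c2 = heapq.heappop(heap)   # second least probable group
        # Prefix '1' onto the least probable group and '0' onto the other, then merge.
        merged = {s: "1" + code for s, code in c1.items()}
        merged.update({s: "0" + code for s, code in c2.items()})
        counter += 1
        heapq.heappush(heap, (p1 + p2, counter, merged))
    return heap[0][2]

print(huffman_code({"A": 0.4, "B": 0.1, "C": 0.2, "D": 0.3}))
# e.g. {'A': '1', 'D': '01', 'B': '001', 'C': '000'} -- same codeword lengths as option (A)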
27. Which type of method is used to compress data made up of a combination of symbols?
(A) Run- length encoding
(B) Huffman encoding
(C) Lempel Ziv encoding
(D) JPEG encoding
Answer
Correct option is A
28. How many passes does lossy compression frequently make?
(A) One pass
(B) Two pass
(C) Three pass
(D) Four pass
Answer
Correct option is B
29. Information is the
(A) data
(B) meaningful data
(C) raw data
(D) Both A and B
Answer
Correct option is B
Unit-III
1. In dictionary techniques for data compaction, which approach of building dictionary is used for the
prior knowledge of probability of the frequently occurring patterns?
(A) Adaptive dictionary
(B) Static dictionary
(C) Both
(D) None of the above
Answer
Correct option is B
2. If the probability of encountering a pattern from the dictionary is p, then the average number of bits
per pattern R is given by
(A) R = 21 - 12p
(B) R = 9 - p
(C) R = 21 - p
(D) R = 12 - p
Answer
Correct option is A
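Note (assuming the standard textbook scheme behind this formula): a pattern found in the dictionary is sent as a 1-bit flag plus an 8-bit index (9 bits), while a pattern not in the dictionary is sent as the flag plus its 20-bit raw encoding (21 bits), so the average is R = 9p + 21(1 - p) = 21 - 12p bits per pattern.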
3. Static dictionary –
(A) permanent
(B) sometimes allowing the addition of strings but no deletions
(C) allowing for additions and deletions of strings as new input symbols are being read
(D) Both (A) and (B)
(E) Both (A) and (C)
Answer
Correct option is D
4. Adaptive dictionary –
(A) holding strings previously found in the input stream
(B) sometimes allowing the addition of strings but no deletions
(C) allowing for additions and deletions of strings as new input symbols are being read
(D) Both (A) and (B)
(E) Both (A) and (C)
Answer
Correct option is E
5. LZ77 and LZ78 are the two __________ algorithms published in papers by Abraham Lempel and
Jacob Ziv in 1977 and 1978, respectively.
(A) Lossy data compression
(B) Lossless data compression
(C) Both
(D) None of the above
Answer
Correct option is B
6. Deflate = ________
(A) LZ78 + Huffman
(B) LZ77 + Huffman
(C) LZW + Huffman
(D) None of these
Answer
Correct option is B
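As a side note, Python's standard zlib module implements Deflate, so a short sketch (illustrative only, not part of the question bank) can confirm that the scheme is lossless:

import zlib  # zlib implements Deflate, i.e. LZ77 matching followed by Huffman coding

data = b"deflate combines LZ77 with Huffman coding " * 20
compressed = zlib.compress(data, 9)   # level 9 = best compression
restored = zlib.decompress(compressed)
print(len(data), "->", len(compressed), "bytes; lossless:", restored == data)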
8. LZ78 has _____ compression but very _____ decompression compared to LZ77.
(A) fast, slow
(B) slow, fast
(C) None of these
Answer
Correct option is B
13. A coding scheme that takes advantage of long runs of identical symbols is called as
(A) Move-to-front coding
(B) Binary coding
(C) Huffman coding
(D) Move-to-back coding
Answer
Correct option is A
20. Point out the wrong statement.
(A) You can enable or disable ROW or PAGE compression in online state only
(B) When you are compressing indexes, leaf-level pages can be compressed with both row and
page compression
(C) Non–leaf-level pages do not receive page compression
(D) None of the mentioned
Answer
Correct option is A
21. What is an image?
(A) Picture
(B) Matrix of pixel
(C) Collection of pixel
(D) All of these
Answer
Correct option is D
22. An image transmitted over a wireless network may be:
(A) corrupted as a result of lightning or other atmospheric disturbance.
(B) non-corrupted as a result of lightning or other atmospheric disturbance.
(C) corrupted as a result of pixel disturbance.
(D) none of above
Answer
Correct option is A
Unit-IV
1. Which of the following characterizes a quantizer?
(A) Quantization results in a non-reversible loss of information
(B) A quantizer always produces uncorrelated output samples
(C) The output of a quantizer has the same entropy rate as the input
(D) None of the above
Answer
Correct option is A
5. Which of the following statements is correct when comparing scalar quantization and vector
quantization?
(A) Vector quantization improves the performance only for sources with memory. For iid sources, the
best scalar quantizer has the same efficiency as the best vector quantizer
(B) Vector quantization does not improve the rate-distortion performance relative to scalar
quantization, but it has a lower complexity
(C) By vector quantization we can always improve the rate-distortion performance relative to the best
scalar quantizer
(D) All of the above
Answer
Correct option is C
6. If {x_n} is the source output and {y_n} is the reconstructed sequence, then the squared error measure
is given by
(A) d(x, y) = (y - x)²
(B) d(x, y) = (x - y)²
(C) d(x, y) = (y + x)²
(D) d(x, y) = (x - y)⁴
Answer
Correct option is B
7. If {x_n} is the source output and {y_n} is the reconstructed sequence, then the absolute difference
measure is given by
(A) d(x, y) = |y - x|
(B) d(x, y) = |x - y|
(C) d(x, y) = |y + x|
(D) d(x, y) = |x - y|²
Answer
Correct option is B
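A small Python sketch (illustrative only; the function names are my own) computing the two distortion measures of questions 6 and 7 sample by sample:

def squared_error(x, y):
    # d(x, y) = (x - y)^2 for each source/reconstruction pair
    return [(xi - yi) ** 2 for xi, yi in zip(x, y)]

def absolute_difference(x, y):
    # d(x, y) = |x - y| for each source/reconstruction pair
    return [abs(xi - yi) for xi, yi in zip(x, y)]

source = [1.0, 2.0, 3.0]
reconstruction = [1.1, 1.9, 3.2]
print(squared_error(source, reconstruction))        # approx. [0.01, 0.01, 0.04]
print(absolute_difference(source, reconstruction))  # approx. [0.1, 0.1, 0.2]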
8. The process of representing a _______ possibly infinite set of values with a much _______ set is
called quantization
(A) Large, smaller
(B) Smaller, large
(C) None of these
Answer
Correct option is A
11. If zero is assigned as a decision level, what is the type of quantizer?
(A) A midtread quantizer
(B) A midrise quantizer
(C) A midtreat quantizer
(D) None of the above
Answer
Correct option is B
12. If zero is assigned as a quantization level, what is the type of quantizer?
(A) A midtread quantizer
(B) A midrise quantizer
(C) A midtreat quantizer
(D) None of the above
Answer
Correct option is A
17. Which audio/video refers to on-demand requests for compressed audio/video files?
(A) Streaming live
(B) Streaming stored
(C) Interactive
(D) None of the above
Answer
Correct option is B
18. According to the Nyquist theorem, at how many times the highest frequency do we need to sample an
analog signal?
(A) Three
(B) Two
(C) Four
(D) None of the above
Answer
Correct option is B
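In symbols, the Nyquist criterion is fs ≥ 2 × fmax; for instance, a signal band-limited to 4 kHz must be sampled at no less than 8,000 samples per second.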
19. Which encoding is based on the science of psychoacoustics, which is the study of how people
perceive sound?
(A) Predictive
(B) Perceptual
(C) Both of the above
(D) None of the above
Answer
Correct option is B
20. SDH uses ______ to measure block errors.
(A) CRC
(B) Rectangular code
(C) bit-interleaved parity (BIP )
(D) Simple parity check
Answer
Correct option is C
21. The minimum sampling rate is called?
(A) Data rate
(B) symbol rate
(C) Nyquist rate
(D) None of the above
Answer
Correct option is C
(A) Increasing
(B) Decreasing
(C) Does not depend on
(D) None of the mentioned
Answer
Correct option is A
27. A 1-bit quantizer is a
(A) Hard limiter
(B) Two level comparator
(C) Hard limiter & Two level comparator
(D) None of the mentioned
Answer
Correct option is C
28. The low pass filter at the output end of delta modulator depends on
(A) Step size
(B) Quantization noise
(C) Bandwidth
(D) None of the mentioned
Answer
Correct option is C
29. Quantization Matrix in JPEG compression was introduced because
(A) It is computationally more efficient to work with a matrix than with scalar quantization
(B) It allows better entropy encoding due to the DC and AC coefficient distribution in the 8x8 block
matrix
(C) It allows better differentiation of DC and AC coefficients in the 8x8 block matrix than a scalar
quantization
Answer
Correct option is C
Unit-V
1. Which of the following is a characteristic of a vector quantizer?
(A) Multiple quantization indexes are represented by one codeword
(B) Each input symbol is represented by a fixed-length codeword
(C) Multiple input symbols are represented by one quantization index
(D) All of the above
Answer
Correct option is C
3. Let N represent the dimension of a vector quantizer. What statement about the performance of the
best vector quantizer with dimension N is correct?
(A) For N approaching infinity, the quantizer performance asymptotically approaches the rate-
distortion function (theoretical limit)
(B) By doubling the dimension N, the bit rate for the same distortion is halved
(C) The vector quantizer performance is independent of N
(D) All of the above
Answer
Correct option is A
4. Which of the following is/are correct for advantage of vector quantization over scalar quantization
(A) Vector Quantization can lower the average distortion with the number of reconstruction levels
held constant
(B) Vector Quantization can reduce the number of reconstruction levels when distortion is held
constant
(C) Vector Quantization is also more effective than Scalar Quantization when the source output
values are not correlated
(D) All of the above
Answer
Correct option is D
(A) Modulation
(B) Multiplexing
(C) Quantization
(D) Sampling
Answer
Correct option is C
11. To convert a continuous sensed data into Digital form, which of the following is required?
(A) Sampling
(B) Quantization
(C) Both Sampling and Quantization
(D) Neither Sampling nor Quantization
Answer
Correct option is C
13. The resulting image of sampling and quantization is considered a matrix of real numbers.
Answer
Correct option is C
14. Which conveys more information?
(A) High probability event
(B) Low probability event
(C) High & Low probability event
(D) None of the mentioned
Answer
Correct option is B
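Note: the self-information of an event with probability p is I = -log2(p) bits, so the rarer the event, the more information it conveys; e.g. p = 1/8 gives 3 bits, while p = 1/2 gives only 1 bit.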
15. The probability density function of the envelope of narrow band noise is
(A) Uniform
(B) Gaussian
(C) Rayleigh
(D) Rician
Answer
Correct option is C
16. Which model is known as ignorance model?
(A) Physical model
(B) Markov model
(C) Probability model
(D) Composite Source Model
Answer
Correct option is C
17. Shannon's theorem is also called
(A) noiseless coding theorem
(B) noisy coding theorem
(C) coding theorem
(D) noiseless theorem
Answer
Correct option is A
18. Transform coding and vector quantization are examples of ______
(A) Pixel
(B) compression
(C) Transmission
(D) Lossy compression
Answer
Correct option is D
19. Entropy coding is a ________ compression technique.
(A) Lossless
(B) Lossy
(C) 0
(D) None
Answer
Correct option is A
20. ______ is normally used for the data generated by scanning documents, fax machines,
typewriters, etc.
(A) Huffman Coding
(B) Transformation Coding
(C) Vector Quantization
(D) Runlength Encoding
Answer
Correct option is D
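A minimal Python sketch of run-length encoding (illustrative only) for this kind of data, where a scanned line is mostly long runs of identical pixels:

from itertools import groupby

def run_length_encode(symbols):
    # Replace each run of identical symbols by a (symbol, run length) pair.
    return [(s, len(list(run))) for s, run in groupby(symbols)]

line = "0" * 20 + "111" + "0" * 30   # 20 white pixels, 3 black, 30 white
print(run_length_encode(line))        # [('0', 20), ('1', 3), ('0', 30)]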
21. The compression technique used for image and video is
(A) Huffman Coding
(B) Transformation Coding
(C) Entropy Coding
(D) Differential Encoding
Answer
Correct option is B
22. The compression technique used for audio is
(A) Differential Encoding
(B) Transformation Encoding
(C) Entropy Coding
(D) Differential & Transformation Encoding
Answer
Correct option is D
23. The expansion of LZ coding is
(A) Lossy
(B) Lossless
(C) Lempel-Ziv-Welch
(D) Lempel-Ziv
Answer
Correct option is D
(C) Lempel-Ziv
(D) Lempel-Ziv-Welch
Answer
Correct option is D
Practice Questions
(The option in bold font is the answer)
1. What is compression?
a) To convert one file to another
b) To reduce the size of data to save space
c) To minimise the time taken for a file to be downloaded
d) To compress something by pressing it hard
7. Uncompressed audio and video files require less memory than compressed files.
a) True
b) False
a) Images
b) Sounds
c) Videos
d) Text
15. Which of the following is true of lossy and lossless compression techniques?
a) Lossless compression throws away unimportant details that a human being will
likely be unable to detect.
b) Lossy compression is only possible on files that are at least one gigabyte in size before
compression.
c) Lossy compression techniques are no longer commonly used.
d) Lossless compression is fully reversible, meaning the original file can be recreated bit
for bit.
16. Which of the following is true of lossy and lossless compression techniques?
a) Both lossy and lossless compression techniques will result in some information
being lost from the original file.
b) Neither lossy nor lossless compression can actually reduce the number of bits needed to
represent a file.
c) Lossless compression is only used in situations where lossy compression techniques can't
be used.
d) Lossy compression is best suited for situations where some loss of detail is
tolerable, especially if it will not be detectable by a human.
17. ______ is a data compression algorithm that allows the original data to be perfectly reconstructed
from the compressed data.
a) lossy compression
b) lossless compression
b) Bzip2 generates a better compression ratio than does Gzip, but it’s much slower
c) Gzip is a compression utility that was adopted by the GNU project
d) None of the mentioned
25. Gzip (short for GNU zip) generates compressed files that have a ______ extension.
a) .gzip
b) .gz
c) .gzp
d) .g
27. ______ typically compresses files to within 10% to 15% of the best available
techniques.
a) LZO
b) Bzip2
c) Gzip
d) All of the mentioned
32. The type of encoding where no character code is the prefix of another character code
is called?
a) optimal encoding
b) prefix encoding
c) frequency encoding
d) trie encoding
d) O( N log C)
34. What is the running time of the Huffman algorithm, if its implementation of the priority queue
is done using linked lists?
a) O(C)
b) O(log C)
c) O(C log C)
d) O(C²)
50. The event with minimum probability has the least number of bits.
a) True
b) False
52. When the base of the logarithm is 2, then the unit of measure of information is
a) Bits
b) Bytes
c) Nats
d) None of the mentioned
58. Coded systems are inherently capable of better transmission efficiency than uncoded
systems.
a) True
b) False
61. How many printable characters does the ASCII character set consist of?
a) 120
b) 128
c) 100
d) 98
63. How many bits are needed for standard encoding if the size of the character set is
X?
a) log X
b) X+1
c) 2X
d) X²
64. The code length does not depend on the frequency of occurrence of characters.
a) true
b) false
a. Granular error
b. Slope overload error
c. Both a & b
d. None of the above
75. If the input analog signal is within the range of the quantizer, the quantization error e_q(n)
is bounded in magnitude, i.e., |e_q(n)| < Δ/2, and the resulting error is called?
a) Granular noise
b) Overload noise
c) Particulate noise
d) Heavy noise
76. If the input analog signal falls outside the range of the quantizer (clipping), e_q(n) becomes
unbounded and results in ______
a) Granular noise
b) Overload noise
c) Particulate noise
d) Heavy noise
77. In the mathematical model for the quantization error e_q(n), to carry out the analysis, what
are the assumptions made about the statistical properties of e_q(n)?
i. The error e_q(n) is uniformly distributed over the range -Δ/2 < e_q(n) < Δ/2.
ii. The error sequence is a stationary white noise sequence. In other words, the error e_q(m) and
the error e_q(n) for m ≠ n are uncorrelated.
iii. The error sequence {e_q(n)} is uncorrelated with the signal sequence x(n).
iv. The signal sequence x(n) is zero mean and stationary.
a) i, ii & iii
b) i, ii, iii, iv
c) i, iii
d) ii, iii, iv
b) 10 log10(Pn/Px)
c) 10 log2(Px/Pn)
d) 2 log2(Px/Pn)
81. In the equation SQNR = 10 log10(Px/Pn), what are the terms Px and Pn called,
respectively?
a) Power of the Quantization noise and Signal power
b) Signal power and power of the quantization noise
c) None of the mentioned
d) All of the mentioned
82. In the equation SQNR = 10 log10(Px/Pn), what are the expressions of Px and Pn?
a) Px = σx² = E[x²(n)] and Pn = σe² = E[e_q²(n)]
b) Px = σx² = E[x²(n)] and Pn = σe² = E[e_q³(n)]
c) Px = σx² = E[x³(n)] and Pn = σe² = E[e_q²(n)]
d) None of the mentioned
83. If the quantization error is uniformly distributed in the range (-Δ/2, Δ/2) and the mean value
of the error is zero, then the variance Pn is?
a) Pn = σe² = Δ²/12
b) Pn = σe² = Δ²/6
c) Pn = σe² = Δ²/4
d) Pn = σe² = Δ²/2
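Worked step: for a zero-mean error uniformly distributed on (-Δ/2, Δ/2), the variance is the integral of e²/Δ from -Δ/2 to Δ/2, which works out to Δ²/12.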
84. By combining Δ = R/2^(b+1) with Pn = σe² = Δ²/12 and substituting the result into SQNR = 10
log10(Px/Pn), what is the final expression for SQNR = ?
a) 6.02b + 16.81 + 20log10(R/σx)
b) 6.02b + 16.81 – 20log10(R/σx)
c) 6.02b – 16.81 – 20log10(R/σx)
d) 6.02b – 16.81 + 20log10(R/σx)
85. In the equation SQNR = 6.02b + 16.81 – 20log10(R/σx), for R = 6σx the equation
becomes?
a) SQNR = 6.02b-1.25 dB
b) SQNR = 6.87b-1.55 dB
c) SQNR = 6.02b+1.25 dB
d) SQNR = 6.87b+1.25 dB
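Worked check: for R = 6σx the last term is 20 × log10(6) ≈ 15.56 dB, and 16.81 - 15.56 ≈ 1.25, so SQNR ≈ 6.02b + 1.25 dB.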
89. A Lloyd quantizer can be considered as optimal quantizer for fixed-length coding of the
quantization indices. Can we improve a Lloyd quantizer by using variable length codes?
a. No, variable length coding does not improve the quantizer performance, since all quantization
indices have the same probability.
b. No, variable length coding does not improve the quantizer performance, since the quantizer
output is uncorrelated.
c. Yes, in general, the quantizer performance can be improved by variable length coding
(there are some exceptions for special sources).
91. What characterizes the best possible scalar quantizer with variable length coding at high
rates (for MSE distortion)?
a. All quantization intervals have the same probability.
b. All quantization intervals have the same size.
c. None of the above statements is correct.
92. Which statement is true regarding the performance of optimal scalar quantizers with
variable length coding at high rates for iid sources?
a. For iid sources, the operational distortion-rate curve for optimal scalar quantization is always
equal to the distortion-rate function (theoretical limit).
b. Only for Gaussian iid sources, the operational distortion-rate curve for optimal scalar
quantization is equal to the distortion-rate function (theoretical limit)
c. For iid sources, the operational distortion-rate curve for optimal scalar quantization is
1.53 dB worse than the distortion-rate function (theoretical limit).
94. What statement is correct for comparing scalar quantization and vector quantization?
a. By vector quantization we can always improve the rate-distortion performance
relative to the best scalar quantizer.
b. Vector quantization improves the performance only for sources with memory. For iid sources,
the best scalar quantizer has the same efficiency as the best vector quantizer.
c. Vector quantization does not improve the rate-distortion performance relative to scalar
quantization, but it has a lower complexity.
96. Assume we have a source with memory and apply scalar quantization and scalar Huffman
coding. Can the performance, in general, be improved by replacing the scalar Huffman coding with
conditional Huffman coding or block Huffman coding?
a. Yes, the performance can in general be improved, since there will be also dependencies
between successive quantization indexes.
b. No, the performance cannot be improved, since the quantization removes all dependencies
between the source symbols.
c. No, the performance cannot be improved, since the quantization error and the input signal are
uncorrelated.
97. Uniform quantizer is also known as
a) Low rise type
b) Mid rise type
c) High rise type
d) None of the mentioned
b) 2 bit DPCM
c) 4 bit DPCM
d) None of the mentioned
104. The low pass filter at the output end of delta modulator depends on
a) Step size
b) Quantization noise
c) Bandwidth
d) None of the mentioned
105. In the early-late timing error detection method, if the bit is constant, then the slope will be
a) Close to zero
b) Close to infinity
c) Close to origin
d) None of the mentioned
106. The theoretical gain in zero-crossing TED is greater than in early-late TED.
a) True
b) False
c) Speech coding
d) All of the mentioned
110. The probability density function of the envelope of narrow band noise is
a) Uniform
b) Gaussian
c) Rayleigh
d) Rician
111. The type of noise that interferes much with high frequency transmission is
a) White
b) Flicker
c) Transit time
d) Shot
a) Inversely proportional
b) Directly proportional
c) Equal
d) Double
117. The output SNR can be made independent of input signal level by using
a) Uniform quantizer
b) Non uniform quantizer
c) Uniform & Non uniform quantizer
d) None of the mentioned
122. Which type of quantization is most preferable for audio signals for a human ear?
a) Uniform quantization
b) Non uniform quantization
c) Uniform & Non uniform quantization
d) None of the mentioned
127. Which among the following compression techniques is/are intended for still images?
a. JPEG
b. H.263
c. MPEG
d. All of the above
133. LZ77 and LZ78 are the two ______ data compression algorithms.
a. lossless
b. lossy
134. The LZ77 algorithm works on ______ data whereas the LZ78 algorithm attempts to
work on ______ data.
a. future , past
b. past , future
c. present, future
d. past, present
135. Prediction by Partial Matching is a method to predict the next symbol depending on the n
previous symbols. This method is also called prediction by ______ model.
a. Probability
b. Physical
c. Markov
d. None of the above