Gear & Equipment

WHAT IS AN AUDIO INTERFACE?

Faculty: Kunal Shourie

Today’s computer-based digital audio workstation (DAW) software gives you more recording and music production power than a studio full of hardware from the pre-digital days. But despite all the functionality that such software provides, its sound depends heavily on a piece of external hardware called an audio interface.

Such devices offer the connectors you need to plug in microphones and
instruments for recording as well as speakers and headphones for listening.
They also typically provide metering and other important features. The more
you understand about how interfaces work, and the kinds of features they
offer, the better positioned you’ll be to make an informed buying decision.

CONNECTING WITH YOUR COMPUTER


Modern audio interfaces connect to your desktop or laptop computer via a
USB or Thunderbolt port (some older ones use different ports, such as PCI,
PCIe or Ethernet). Most interfaces work with both Mac® and Windows
systems; many are also compatible with Apple® iOS devices, although that
usually requires an additional adapter.

CONNECTING AND CONVERTING AUDIO


An audio interface acts as the front end of your computer recording system.
For example, let’s say you connect a microphone and record yourself singing.
The mic converts the physical vibration of air into an equivalent (i.e., “analog”)
electrical signal, which travels down the connecting cable into the interface’s
mic input. From there, it goes into the interface’s built-in mic preamplifier,
which boosts the low-level mic signal up to a hotter line level — something
that’s necessary for recording. (The quality of both the microphone and the
preamp has a significant impact on how good a recording sounds.)

Next, the signal gets sent to the interface’s analog-to-digital (“A/D”) converter,
which changes it into equivalent digital audio data — a stream of ones and
zeroes that travel through the USB or Thunderbolt cable into your computer.
This data is then sent to your DAW or other recording software, where it gets
recorded and/or processed with effects.
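As a rough illustration of what the converter is doing (a minimal sketch, not the interface's actual circuitry), it samples the analog voltage many thousands of times per second and quantizes each sample to an integer. The 44.1 kHz rate, 16-bit depth, and 440 Hz test tone below are assumed values chosen for illustration:

import math

SAMPLE_RATE = 44_100                 # samples per second (a common audio rate)
MAX_INT = 2 ** 15 - 1                # largest 16-bit sample value (32767)

def analog_signal(t):
    # Stand-in for the analog voltage from the mic preamp: a quiet 440 Hz tone.
    return 0.5 * math.sin(2 * math.pi * 440 * t)

def a_to_d(duration_seconds):
    # Sample the analog signal and quantize each sample to a 16-bit integer.
    num_samples = int(duration_seconds * SAMPLE_RATE)
    return [round(analog_signal(n / SAMPLE_RATE) * MAX_INT) for n in range(num_samples)]

samples = a_to_d(0.001)              # one millisecond of audio
print(len(samples), "samples:", samples[:6])   # the integer data the DAW actually records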

Almost simultaneously, the now-digitized audio that originated at your microphone — along with any other tracks you’ve already recorded for the song — gets sent back from the computer to the audio interface over the USB cable, where it undergoes the opposite conversion, carried out by a digital-to-analog (“D/A”) converter, which turns it back into an equivalent analog electrical signal. That signal is then available at the interface’s line outputs to feed your studio speakers, headphone output(s), or other line-level devices.

We’re saying almost simultaneously because it actually takes a few milliseconds (thousandths of a second) for the audio to go through all these changes, from the time you start singing to the time you hear it back. That slight delay is called latency — something we’ll look at more closely shortly.
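As a rough rule of thumb (ignoring converter and driver overhead), that delay is set by the interface's buffer size and sample rate. A quick back-of-the-envelope sketch, assuming a typical 44.1 kHz session and common buffer sizes:

def buffer_latency_ms(buffer_size_samples, sample_rate_hz):
    # Time needed to fill one audio buffer, in milliseconds.
    return buffer_size_samples / sample_rate_hz * 1000

for buf in (64, 128, 256, 512):
    one_way = buffer_latency_ms(buf, 44_100)
    # A round trip passes through an input and an output buffer, so roughly double it.
    print(buf, "samples:", round(one_way, 1), "ms one way,", round(2 * one_way, 1), "ms round trip")

Smaller buffers mean lower latency but more work for the computer, which is why interfaces let you adjust this setting.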
The microphone

A good microphone in your home studio will let you capture high-quality sound. The
microphone converts the vibrations in the air generated by a sound source into electricity.

You need the highest possible resolution at the moment of conversion. That’s why you need a
high-grade microphone. You have a huge selection of affordable, high-quality microphones
available on the market nowadays.

Also, since most of you probably do not have acoustic treatment, dynamic microphones are advisable — for example, the Shure SM58/SM57, Shure SM7B, or Electro-Voice RE20. For a condenser mic, you can choose the Rode NT1 or NT2 or the Lewitt LCT 440 in the affordable range, and Neumann (U series, TLM series) or AKG C314/C414 in the higher range.
Types of Microphones

Let's have a look at the three main types of microphones and learn more about
them.

· Dynamic

· Condenser

· Ribbon

Dynamic
Dynamic mics are durable and reliable, overall an all-rounder microphone for those who perform live, record loud guitars, and swing their microphones around. They'll be okay if they fall.

· They do not require a power source.

· They are reasonably priced.

· Their low output means they need more preamp gain, which can raise the noise floor.

· The Shure SM57 and SM58 are the most common models.


Condenser microphones
Condensers can be classified as large-diaphragm and small-diaphragm microphones. They are very sensitive, as they use a conductive diaphragm that vibrates with sound pressure. They are also susceptible to distortion at high sound-pressure levels, so they are not ideal for recording guitar amps up close.

The major difference between a dynamic and a condenser microphone is that a dynamic
microphone is better for capturing loud, intense sounds (drums or loud vocals), particularly in
a live setting. In contrast, a condenser microphone captures more delicate sounds and higher
frequencies (studio vocals, for example), particularly in a studio setting.

Ribbon Microphones
These microphones use an ultra-thin ribbon of electro-conductive material
suspended between the poles of a magnet to generate their signal. Early ribbon
designs were incredibly fragile.
A ribbon microphone is a good choice for recording a wide range of acoustic
instruments.

· These microphones are ideal for recording multiple instruments in a room.

· They are a little on the pricey side.

· They are fragile and easily damaged.

· They work well with vocals, choirs, piano, strings, and woodwind instruments.


Preamps

It goes like this: if you're recording with a microphone, then you need to use a preamplifier, no questions asked.

It is 100% possible to record without one and you have the same percentage
of a chance to get horrible results. The reason I'm so adamant about this is
that I recorded my first 30 songs or so without one because I had no clue what
I was doing. I was more eager to record my art than to learn the technical
aspects of how to do it properly.

The funny part about all of this is that preamps are hidden everywhere. They
are built into mixers. They're built into USB mics (for the love of all that is good
and holy, don't use one of these). They're built into even a cheap audio
interface. Some soundcards even have them.

But they're so invisible most of the time that when a newbie starts recording
on his or her own, they don't even realize they exist and end up not using one
at all!

Obviously they exist for a reason. The question is why? What do they do?
Can they do their job poorly? Can they do their job really darn well? What
happens if you skip one? You've got the questions and I've got the answers...

The preamp exists due to a characteristic of all microphones... they output a mic-level signal.

The Fundamental Purpose of a Preamp


As we were saying, microphones output their signals at mic level. What happens is that acoustic waves come vibrating through the air and jiggle a diaphragm back and forth. This moves a coil of wire within a magnetic field, and that generates an electrical signal.

But this signal is very weak, so weak that it gets its own name... mic level. The situation is the same with instrument-level signals.

Now, all of our recording gear, from compressors to equalizers to analog-to-digital converters, the works... it all expects a line-level signal. A line-level signal is at a much higher voltage, which is to say a louder volume. This is the type of signal coming out of mixers, keyboards, and other line-level gear, for instance.

So the challenge is to raise a mic-level or instrument-level signal up to a line-level signal. This is the fundamental purpose of a preamplifier, and the basic reason we use preamps.

Two Types of Preamplifiers: Sonic Qualities


The first and foremost goal of a preamp is to raise the volume of a mic-level signal and to do this cleanly. You want to boost the signal without also raising the noise floor or amplifying other junk that gets picked up along the way, like electrical hum.

Pre's that boost your signal cleanly to perfectly reproduce the sound the
microphone recorded are called Transparent.

This is the problem with NOT using a preamp. You can run the signal into an instrument input or into a soundcard, crank the input gain as high as you can, and get a usable signal to record into your digital audio workstation.

But you're cranking up the volume of all the noise too. The noise floor rises along with the desired signal, and you end up with a very poor signal-to-noise ratio.
Starting with your acoustic treatment, then microphone, and then preamp, you
can control your desirable signal while maintaining a minimal amount of noise
through gain staging. That's part of the job of a recording engineer.
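To put that signal-to-noise idea into numbers, here is a minimal sketch with made-up signal and noise levels (a simplified model, not a measurement of any real preamp). Boosting everything after the fact raises the noise floor along with the signal, so the ratio between them does not improve; a quiet preamp boosts the mic signal before that noise is added, so the same noise floor sits far below the signal.

import math

def snr_db(signal_rms, noise_rms):
    # Signal-to-noise ratio expressed in decibels.
    return 20 * math.log10(signal_rms / noise_rms)

mic_level = 0.002      # raw mic-level signal (arbitrary units, assumed)
noise_floor = 0.0002   # hum and hiss picked up along the way (assumed)

# Cranking the input gain later boosts signal and noise together: the SNR stays at 20 dB.
print(snr_db(mic_level * 100, noise_floor * 100))   # 20.0 dB

# A clean preamp boosts the mic signal before most of that noise is added,
# so the same noise floor now sits much further below the signal.
print(snr_db(mic_level * 100, noise_floor))         # 60.0 dB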

So back to it. Transparency is created through solid state electronics, but old
preamplifiers used tube technology (just like cathode ray tube televisions did).
Some newer solid state pre's use transformers.

Pre's that use transformers and vacuum tubes are designed to raise your signal's volume while imparting a specific Color.

Coloration, Color, Flavor... these are all terms related to a warmth that is
imparted to your signal as it passes through the tubes or transformers. What
happens is a very pleasant distortion is applied to the signal.

This harmonic distortion, generated from the signal itself at low levels and concentrated in the lower frequencies, gives the signal a sense of "warmth".
Manufacturers have mastered the art of creating transparent preamps (not
that they all are willing to spend the money on the right electronic parts to do
it). The big boys that already have their transparent models are also providing
colorful models.

Worry about base quality first. At the top level, flavors are just very similar and very subtle preferences for people to argue about (we studio engineers have learned to hear every peculiarity!).

Major Difference Between Balanced and Unbalanced Audio

One of the major differences between these cables is that balanced audio has less risk of unwanted noise, while unbalanced audio can pick up humming or buzzing sounds in certain environments.

In general, balanced audio will give you a better, stronger audio signal without
any extraneous noises. Unbalanced audio, on the other hand, is susceptible to
picking up noise and interference over longer distances. The ground wire in an
unbalanced audio cable can pick up unwanted noise as the audio signal
travels through it. This susceptibility (or lack thereof) to interference is due to
how the cable is made.

To understand that, let's dive deeper into how balanced and unbalanced audio
works.

What Is Unbalanced Audio?


An audio cable carrying an unbalanced signal uses two wires: a signal and a
ground.
The signal wire, as the name suggests, carries the audio signal to where it
needs to go. The ground wire acts as a reference point for the signal.
However, the ground wire itself also acts like an antenna, picking up unwanted
noise along the way.

Because unbalanced cables can pick up noise as a signal is sent along the
cable, they’re best used for short distances, like connecting a guitar to a
nearby amp. This minimizes the risk of unwanted noise.

Where does the noise come from?


Noise can come from a variety of electrical and radio interferences, but it
most commonly comes from power cables, which can create a humming
sound if they're near cables carrying unbalanced audio. Older, non-LED stage
lighting (such as spotlights or dimmers) can also add signal interference.

How do you reduce noise when using unbalanced cables?


The best technique to reduce unbalanced cable noise is to be careful with cable placement. A single perpendicular crossing of power and audio cables is much better than a parallel run. If parallel can't be avoided, leave as much space as possible between audio and power cables.
Unbalanced Cable Types
RCA Cables
RCA audio cables are unbalanced analog audio connections that send stereo
audio over a right channel (red tip) and left channel (white or black tip). An
RCA unbalanced signal typically shouldn't run over 25 feet.

Quarter-Inch TS Cables
Quarter-inch TS (tip, sleeve) cables are generally used for unbalanced signals. These are most commonly used with electric guitars, which usually plug straight into an amplifier. The tip carries the signal and the sleeve acts as the ground.

What Is Balanced Audio?


The structure of a balanced audio cable is similar to an unbalanced cable —
with one addition. A balanced audio cable has a ground wire, but it also
carries two copies of the same incoming audio signal, sometimes referred to
as a hot (positive) and cold (negative) signal.

What’s the difference between the hot and cold signals?

The two signals are reversed in polarity relative to each other, so if you summed them directly as they travel down the cable, they would cancel each other out. (Think of how adding positive and negative numbers of equal value amounts to zero.)
Once the hot and cold signals get to the other end of the cable, however, the polarity of the cold signal is flipped back, so both signals are in phase and perfectly in sync.

Here’s the cool part: If the cable picks up noise along the way, the noise added
to both of those cables is not reversed in polarity. So when the cold signal flips
in polarity to match the polarity of the hot signal, the noise carried along the
cold signal cancels out with noise in the hot signal. This canceling out
process is called common-mode rejection, with the noise being the common
signal between the two.
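Here is a tiny numeric sketch of that common-mode rejection, using made-up sample values (an illustration of the arithmetic only, not of any particular circuit). Both wires pick up the same noise; flipping the cold line at the receiving end doubles the audio and cancels the noise:

audio = [0.1, 0.4, -0.3, 0.2]       # the wanted signal (arbitrary sample values)
noise = [0.05, 0.05, -0.02, 0.01]   # interference picked up identically by both wires

hot  = [ s + n for s, n in zip(audio, noise)]   # copy sent in normal polarity
cold = [-s + n for s, n in zip(audio, noise)]   # copy sent with polarity reversed

# At the balanced input the cold line is flipped and combined with the hot line:
received = [h - c for h, c in zip(hot, cold)]   # equals 2 * audio; the shared noise cancels
print(received)   # roughly [0.2, 0.8, -0.6, 0.4] -- twice the audio, none of the noise

Recombining the two copies also doubles the wanted signal, which is where the level advantage described next comes from.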

Because a balanced input recombines the two copies of the signal, balanced connections also run hotter (roughly 6–10 dB) than what unbalanced signals can provide.

Balanced Cable Types


XLR
XLR cables can send balanced audio signals up to 200 feet. There are three pins inside the connector: one for the ground wire, one for the hot signal, and one for the cold signal.
Quarter-Inch TRS
A quarter-inch TRS cable is another balanced professional audio cable. TRS stands for tip, ring, sleeve; the cable can carry either a mono balanced signal or an unbalanced stereo signal. In a balanced connection, the tip carries the hot signal, the ring carries the cold signal, and the sleeve is the ground.
Sound

Sound is all about vibrations. The source of a sound vibrates, bumping into nearby air molecules, which in turn bump into their neighbours, and so forth. This results in a wave of vibrations travelling through the air to the eardrum, which in turn also vibrates. What the sound wave will sound like when it reaches the ear depends on a number of things, such as the medium it travels through and the strength of the initial vibration.
Sound is a mechanical wave that results from the back and forth
vibration of the particles of the medium through which the sound wave
is moving. If a sound wave is moving from left to right through air, then
particles of air will be displaced both rightward and leftward as the
energy of the sound wave passes through it. The motion of the
particles is parallel (and anti-parallel) to the direction of the energy
transport. This is what characterizes sound waves in air as
longitudinal waves.

Since a sound wave consists of a repeating pattern of high-pressure and low-pressure regions moving through a medium, it is sometimes referred to as a pressure wave. If a detector, whether it is the human ear or a
as a pressure wave. If a detector, whether it is the human ear or a
man-made instrument, were used to detect a sound wave, it would detect
fluctuations in pressure as the sound wave impinges upon the detecting
device. At one instant in time, the detector would detect a high pressure;
this would correspond to the arrival of a compression at the detector site. At
the next instant in time, the detector might detect normal pressure. And
then finally a low pressure would be detected, corresponding to the arrival
of a rarefaction at the detector site. The fluctuations in pressure as
detected by the detector occur at periodic and regular time intervals. In fact,
a plot of pressure versus time would appear as a sine curve. The peak
points of the sine curve correspond to compressions; the low points
correspond to rarefactions; and the "zero points" correspond to the
pressure that the air would have if there were no disturbance moving
through it.

Compression and Rarefaction

· Compression: a region in a longitudinal (sound) wave where the particles are closest together.

· Rarefaction: a region in a longitudinal (sound) wave where the particles are furthest apart.
Let’s Talk Sound Frequencies!
The Highs and Lows of Sound Frequency

To most of us it’s something of a miracle, and one thing is true: few understand how we come to hear the sounds of our daily lives. If you’re feeling like you’re missing out, here is an easy-to-understand introduction to sound frequency to get you started.

Sound waves travel through air, water and even the ground. Once they reach our ear,
they cause the delicate membranes in our ears to vibrate, allowing us to hear the voices
of our loved ones, listen to our favorite music or the calming sounds of raindrops on a
tin roof and the distant sound of thunder. Admittedly, this is a rather simple explanation
of a complex process.

Sound frequency is an important aspect of how we interpret sounds, but it is not the
only one. A sound wave has five characteristics: Wavelength, time-period, amplitude,
frequency and speed. While amplitude is perceived as loudness, the frequency of a
sound wave is perceived as its pitch.

The faster a sound wave oscillates, the higher the pitch of the sound we hear.

As you see, sound frequency is determined by the way in which sound waves oscillate
whilst travelling to our ears, meaning that they alternate between compressing and
stretching the medium, which in most cases is air. In the same medium, all sound waves
travel at the same speed.

Squeaky sounds, like the blow of a whistle or a screaming child, oscillate at a high
frequency, resulting in oftentimes deafening high-pitched sounds. The low rumbling of a
nearing storm or a bass drum, on the other hand, is produced by low-frequency
oscillation, so we hear it as a very low-pitched noise.
Measuring the Frequency of Sound
How is sound frequency measured? The total number of waves produced in one second
is called the frequency of the wave. The number of vibrations counted per second is
called frequency. Here is a simple example: If five complete waves are produced in one
second then the frequency of the waves will be 5 hertz (Hz) or 5 cycles per second.
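As a small sketch of that definition (the sample rate and tone values below are assumed, just for illustration), you can generate one second of a 5 Hz wave and count its complete cycles to recover the frequency in hertz:

import math

def make_wave(freq_hz, duration_s, sample_rate=1000):
    # A sampled sine wave at the given frequency (the sample rate is assumed).
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def count_cycles(samples):
    # Count complete waves by counting upward zero crossings.
    return sum(1 for a, b in zip(samples, samples[1:]) if a <= 0 < b)

wave = make_wave(freq_hz=5, duration_s=1.0)
print(count_cycles(wave), "waves in one second -> 5 Hz")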

Low-Frequency Sounds
Sound waves with a frequency below the lower limit of audibility (generally about 20 Hz) are called infrasound. More broadly, low-frequency sounds are all sounds measured at about 500 Hz and under.

Here are a few examples of low-frequency sounds:

● Severe weather
● Waves
● Avalanches
● Earthquakes
● Whales
● Elephants
● Hippopotamuses
● Giraffes

High-Frequency Sounds
A high-frequency sound is measured at about 2000 Hz and higher.

● Whistles
● Mosquito
● Computer devices
● Screaming
● Squeaking
● Glass breaking
● Nails on a chalkboard
What Is Pitch in Music?
Pitch, as described in music, is the specific position of a sound within a range of notes. Sounds are considered higher or lower in pitch depending upon the frequency of vibration of the sound wave that creates them. Frequency is measured in hertz (Hz): the number of complete wave cycles in one second. A sound wave that completes 880 cycles in one second (880 Hz), for example, is heard as a high pitch, while a 55 Hz sound is heard as a low pitch.

An oscillation is one individual cycle of a repeated motion. Every musical note has a pitch, and that pitch can be assigned a number: technically speaking, a pitch is a frequency. Consider the note A played on a violin. The string vibrates, oscillating back and forth, to create the pitch, and the frequency of that pitch is measured by how many oscillations occur in one second. The note A above middle C corresponds to 440 oscillations per second.
Types of Pitch
There are varying ways to describe how a pitch sounds. Pitches can be analyzed in several ways to explain the sound waves and what the ear hears. A pitch or sound could be described as definite, indefinite, relative, low, or high in tone quality. Below are descriptions of these characterizations of pitch.

Definite Pitch
Pitch is used as a primary way for musicians to explain how a note sounds. Pitch is essentially the
frequency of an individual note. A music note can sound higher or lower depending upon the
frequency of the individual notes.

Definite pitch refers to a specific music pitch that can be defined by using standard music notation.
Definite pitch is measured mathematically by the number of times a wave of sound is played in one
second. The number is read in Hertz. Hertz is abbreviated and read as Hz. If a pitch has a frequency
of 500 Hz that could be translated to mean that the sound wave of that particular note is repeated
500 times in one second. The music notes in a C major scale are all definite pitches.

Consider a piano keyboard for the following explanation: The music note C which is located one
octave higher than middle C has a pitch that is double the value of middle C's Hz. Middle C's pitch or
frequency in Hertz is 261.63 Hz and the same note played one octave higher has a frequency of
523.25 Hz.
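That doubling per octave falls out of the standard equal-temperament tuning formula, which the small sketch below illustrates using the usual A4 = 440 Hz reference (the semitone offsets are the conventional ones): each semitone multiplies the frequency by 2^(1/12), so twelve semitones exactly double it.

A4_HZ = 440.0   # standard tuning reference

def note_freq(semitones_from_a4):
    # Equal-temperament frequency, counting semitones up or down from A4.
    return A4_HZ * 2 ** (semitones_from_a4 / 12)

middle_c = note_freq(-9)   # middle C (C4) sits nine semitones below A4
c5 = note_freq(3)          # C5 sits three semitones above A4
print(round(middle_c, 2), round(c5, 2))   # about 261.63 and 523.25, matching the values above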

Timbre

It's an instrument's distinct sound quality. It's also known as the tone colour.
Every instrument and voice has its own distinct tone, which contributes to the
piece's uniqueness. The tone of a nylon-string guitar versus a steel-string guitar,
for example, is distinct, and we can tell the difference just by listening. Imagine a bell
and a piano in an orchestra. The same musical notes can be obtained by both
instruments but their sounds are very different. The piano produces a distinct note
whereas the bell struck to the same pitch and amplitude produces a sound that
continues to ring after it has been struck. This difference in the sound is referred to
as the Timbre. Timbre is defined as the quality of a sound that lets us differentiate two sounds that have the same frequency and amplitude: if two such sounds still sound different, the difference is their timbre.
Wavelength

Wavelength is one of the more straightforward acoustics concepts to imagine. It is simply the size
of a wave, measured from one peak to the next. If one imagines a sound wave as
something like a water wave, then the wavelength is simply the distance from the
crest of one wave to the next nearest crest. Thus, if the distance between two
peaks is 1 m, then the wavelength is 1 m. There is a direct relation between
wavelength, frequency, and sound speed. Namely, if we know the frequency (which
is the number of wave repetitions per second, often given in Hertz, or Hz) and the
sound speed (which is the speed the wave travels in meters per sec), then we can
find the wavelength using the equation wavelength=speed/frequency.
Put another way, wavelength is the distance that a wave travels before the next
wave starts. That means that at a given sound speed, as frequency gets higher, the
time between repetitions decreases and the wavelength gets shorter, and vice
versa.
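Plugging numbers into wavelength = speed / frequency makes the relationship concrete; the sketch below assumes the usual speed of sound in air, roughly 343 metres per second at room temperature:

SPEED_OF_SOUND_AIR = 343.0   # metres per second, approximate room-temperature value

def wavelength_m(frequency_hz):
    # Wavelength in metres: speed divided by frequency.
    return SPEED_OF_SOUND_AIR / frequency_hz

for f in (20, 100, 1_000, 10_000, 20_000):
    print(f, "Hz ->", round(wavelength_m(f), 3), "m")
# 20 Hz is about 17 m long, 1,000 Hz about 34 cm, and 20,000 Hz under 2 cm:
# the higher the frequency, the shorter the wavelength.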

Waves can bend around small objects with ease, while larger objects may block those waves.

Wavelength is the essential quantity to know when trying to understand how waves move through the world. Long wavelengths bend around objects that are smaller than themselves, while short wavelengths reflect off of or are absorbed by those same objects. Thus, a sound with a wavelength of about 34 cm in air (1,000 Hz) will not be hampered by an object much smaller than 34 cm in diameter, but a larger object may interfere with or entirely block that wave.
Often people talk about “long” and “short” wavelengths, but what is really meant by these terms? How does one draw the line between those admittedly fuzzy and highly subjective categories? To answer this question, we must understand the concept of scale. Scale is important throughout science, from biology to physics, though not all disciplines give it formal treatment.

Amplitude

Amplitude is the relative strength of sound waves (transmitted vibrations), which we perceive as loudness or volume. Amplitude is measured in decibels (dB), which refer to the sound pressure level or intensity. The lower threshold of human hearing is 0 dB at 1 kHz. Moderate levels of sound (a normal speaking voice, for example) are under 60 dB. Relatively loud sounds, like that of a vacuum cleaner, measure around 70 dB. When workplace sound levels reach or exceed 85 dB, employers must provide hearing protection. A rock concert, at around 125 dB, is pushing the human pain threshold.
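Those decibel figures come from a logarithmic comparison against the quietest pressure we can hear, 20 micropascals. Here is a short sketch of the standard sound-pressure-level conversion, with a few assumed pressure values chosen to land near the examples above:

import math

P_REF = 20e-6   # reference pressure: 20 micropascals, the nominal threshold of hearing

def spl_db(pressure_pa):
    # Sound pressure level in decibels relative to the threshold of hearing.
    return 20 * math.log10(pressure_pa / P_REF)

print(spl_db(20e-6))   # 0 dB: the threshold of hearing
print(spl_db(0.02))    # 60 dB: roughly a normal speaking voice
print(spl_db(20.0))    # 120 dB: approaching the pain threshold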

Peak Amplitude — the highest instantaneous level in a waveform; the loudest part of a given sound. If you're looking at a waveform, it's the highest point reached on the graph. In digital audio it can't go higher than 0 dB full scale, since that's the full signal output of whatever software you're using.

RMS Amplitude — the root-mean-squared amplitude. This is more of an average measurement of amplitude over time, derived by taking many amplitude samples, squaring each value, dividing the sum by the number of samples taken, and taking the square root of the result. Hence: Root-Mean-Squared.

Crest Factor — the peak amplitude divided by the RMS value. So high Crest
factors indicate large peaks in comparison to the RMS level and vice versa.
This explains why a Sine wave sounds softer than a Square wave given equal
peak amplitude values. The Square wave has an RMS Equal to its Peak
amplitude whereas the RMS of a Sine wave is .707 x the Peak value. Simply
put, the Square spends all its time at Peak level so sounds louder. This idea
also underscores the need to consider RMS as a more relevant measure of
loudness over time.
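Here is a minimal sketch that computes those three measurements for a sine wave and a square wave of equal peak level (one sampled cycle each, with the sample count assumed), confirming that the sine's RMS is about 0.707 of its peak while the square's RMS equals its peak:

import math

def peak(samples):
    return max(abs(s) for s in samples)

def rms(samples):
    # Root-mean-squared: square each sample, average the squares, then take the square root.
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def crest_factor(samples):
    return peak(samples) / rms(samples)

N = 1000  # samples covering exactly one cycle (assumed)
sine = [math.sin(2 * math.pi * i / N) for i in range(N)]
square = [1.0 if s >= 0 else -1.0 for s in sine]

print("sine:  ", round(peak(sine), 3), round(rms(sine), 3), round(crest_factor(sine), 2))
print("square:", round(peak(square), 3), round(rms(square), 3), round(crest_factor(square), 2))
# sine: peak 1.0, RMS ~0.707, crest ~1.41; square: peak 1.0, RMS 1.0, crest 1.0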

Sound Envelopes

The envelope of a sound displays how the level of a sound wave changes over time. The envelope of a wave helps establish the sound’s unique individual quality; it has a significant influence on how we interpret sound.

Signal Envelope

The envelope of a sound can be described in four stages:

1. Attack – The attack is the portion of the envelope that represents the time taken for
the amplitude to reach its maximum level. Essentially it is the initial build-up of a sound.

2. Decay – The decay is the progressive reduction in amplitude of a sound over time.
The decay phase starts as soon as the attack phase has reached its peak. In the decay
phase, the signal level drops until it reaches the sustain level.

3. Sustain – The sustain is the period of time during which the sound is held before it
begins to fade out. Many instruments do not contain a sustain phase.

4. Release – The release is the final fade or reduction in amplitude over time.
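As a rough sketch of how those four stages are often modelled, here is a simple ADSR envelope function with made-up stage times and sustain level (the parameter values are assumptions, not standards):

def adsr_envelope(t, attack=0.05, decay=0.2, sustain=0.6, release=0.3, note_length=1.0):
    # Amplitude (0 to 1) at time t seconds, for a note held for note_length seconds.
    if t < attack:                      # Attack: ramp from silence up to full level
        return t / attack
    if t < attack + decay:              # Decay: fall from full level down to the sustain level
        return 1.0 - (1.0 - sustain) * (t - attack) / decay
    if t < note_length:                 # Sustain: hold steady while the note is held
        return sustain
    if t < note_length + release:       # Release: fade from the sustain level to silence
        return sustain * (1.0 - (t - note_length) / release)
    return 0.0

for t in (0.0, 0.025, 0.05, 0.15, 0.5, 1.1, 1.4):
    print(t, round(adsr_envelope(t), 2))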
Gain vs. Volume

Although gain and volume may be used interchangeably, they have technical differences
that are very important to understand when it comes to getting the right mix.

Volume is the actual loudness of the output on the channel. It controls how loud the channel is in the final mix, but not the tone of the audio.

Gain is the level of the input on the channel. It sets how hard the signal drives the channel's input stage, which shapes the tone of the audio rather than the final output loudness.
