Module – 3

Sensor Classification, Internal Sensors, External Sensors, Vision, Signal Conditioning
1. Explain the working of Position sensor with a neat diagram
a) Using Incremental Linear Encoder
b) Using Absolute Linear Encoder
c) Using Incremental Rotary Encoder
d) Using Absolute Rotary Encoder
Position sensors measure the position of each joint, i.e., the joint angle of a robot. From these joint angles, one can find the end-effector's configuration, namely, its position and orientation, through forward kinematics.

1. Encoder
The encoder is a digital optical device that converts motion into a sequence of digital pulses.
By counting a single bit or by decoding a set of bits, the pulses can be converted to relative or
absolute measurements. Thus, encoders are of incremental or absolute type. Further, each
type may be again linear and rotary.

Incremental Linear Encoder As shown in Fig. 4.2(a), it has a transparent glass scale with opaque gratings. The thickness of the grating lines and the gap between them are made the same, typically in the range of microns. One side of the scale is provided with a light source and a
condenser lens. On the other side there are light-sensitive cells. The resistance of the cells
(photodiodes) decreases whenever a beam of light falls on them. Thus, a pulse is generated
each time a beam of light is intersected by the opaque line. This pulse is fed to the controller,
which updates a counter (a record of the distance traveled).
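As a concrete illustration of this counting scheme, the following Python sketch converts an accumulated pulse count into a displacement; the 20-micron pitch and the function name are assumptions for illustration, not values fixed by the text.

```python
# Minimal sketch: incremental-encoder pulse count to position.
# PITCH_UM (one opaque line plus one gap) is an assumed value.

PITCH_UM = 20.0  # grating pitch in micrometres (illustrative)

def position_mm(pulse_count: int, direction: int) -> float:
    """Displacement in mm from the controller's pulse counter.

    direction is +1 or -1, e.g., decided from a quadrature channel.
    """
    return direction * pulse_count * PITCH_UM / 1000.0

print(position_mm(2500, +1))  # 2500 pulses forward -> 50.0 mm
```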

Absolute Linear Encoder It is similar in principle to the incremental linear encoder. The difference is that it gives the absolute value of the distance covered at any time; thus, the chance of missing pulses at high speeds is less. The output is digital in this case. The scale is marked in a sequence of opaque and transparent strips, as shown in Fig. 4.2(b). In the scale shown, if an opaque block represents 1 (one) and a transparent block 0 (zero), then the leftmost column will show the binary number 000000, i.e., a decimal value of 0, and the rightmost column will show the binary number 111101, i.e., a decimal value of 61.

Incremental Rotary Encoder It is similar to the linear incremental encoder, with the difference that the gratings are now on a circular disc, as in Fig. 4.2(c). The common value of the width of the transparent spaces is 20 microns. There are two sets of grating lines on two different circles, which detect the direction of rotation and also enhance the accuracy of the sensor. There is another circle which contains only one grating mark; it is used for the measurement of full circles.

Absolute Rotary Encoder Similar to the absolute linear encoder, the circular disk is divided into a number of circular strips, and each strip has definite arc segments, as shown in Fig. 4.2(d). This sensor directly gives the digital output (absolute). The encoder is mounted directly on the motor shaft or with some gearing to enhance the accuracy of measurement. To avoid noise in this encoder, a Gray code is sometimes used. A Gray code, unlike binary codes, allows only one of the binary bits in a code sequence to change between radial lines. It prevents confusing changes in the binary output of the absolute encoder when the encoder oscillates between points. A sample Gray code is given in Table 4.1 for some numbers; note the difference between the Gray and binary codes. The basic arrangement of the rotary encoder is shown in Fig. 4.2(e).
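Since Table 4.1 is not reproduced here, the following sketch generates the Gray code for small numbers and converts it back, illustrating the one-bit-change property; these are the standard conversion identities, with illustrative function names.

```python
def binary_to_gray(n: int) -> int:
    """Gray code of n: adjacent codes differ in exactly one bit."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the transform by cascading XORs down the bit positions."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

for i in range(8):
    print(i, format(binary_to_gray(i), '03b'))
# 0 000, 1 001, 2 011, 3 010, 4 110, 5 111, 6 101, 7 100
```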
2. Explain the working of Potentiometer Sensor with a neat diagram.
A potentiometer, also referred to as simply a pot, is a variable-resistance device that expresses linear or angular displacement in terms of voltage, as shown in Figs. 4.3(a) and (b), respectively. It consists of a wiper that makes contact with a resistive element; as this point of contact moves, the resistance between the wiper and the end leads of the device changes in proportion to the displacement, x and θ for linear and angular potentiometers, respectively.
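A minimal sketch of the pot's voltage-to-displacement relation, assuming an ideal linear element; the 5 V excitation and 100 mm stroke are illustrative values, not from the text.

```python
V_SUPPLY = 5.0     # excitation across the resistive element (assumed)
STROKE_MM = 100.0  # full mechanical travel (assumed)

def displacement_mm(v_wiper: float) -> float:
    """Wiper voltage is proportional to the displacement x."""
    return (v_wiper / V_SUPPLY) * STROKE_MM

print(displacement_mm(1.25))  # 25.0 mm from the reference end
```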

3. Explain the working of LVDT to measure displacement (Position)


with neat diagram.
The Linear Variable Differential Transformer (LVDT) is one of the most used displacement transducers, particularly when high accuracy is needed. It generates an ac signal whose magnitude is related to the displacement of a moving core, as indicated in Fig. 4.4. The basic concept is that of a ferrous core moving in a magnetic field, the field being produced in a manner similar to that of a standard transformer.

There is a central core surrounded by two identical secondary coils and a primary coil, as shown in Fig. 4.4. As the core changes position with respect to the coils, it changes the magnetic field, and hence the voltage amplitude in the secondary coils changes as a linear function of the core displacement over a considerable segment. A Rotary Variable Differential Transformer (RVDT) operates under the same principle as the LVDT and is also available, with a range of approximately ±40°.
4. Explain the working of Synchros and Resolvers to measure position with neat diagram
While encoders give digital output, synchros and resolvers provide analog signals as their output. They consist of a rotating shaft (rotor) and a stationary housing (stator). Their signals must be converted into digital form through an analog-to-digital converter before being fed to the computer.

As illustrated in Fig. 4.5, synchros and resolvers employ single-winding rotors that revolve inside fixed stators. In a simple synchro, the stator has three windings oriented 120° apart and electrically connected in a Y-connection. Resolvers, in contrast, have only two stator windings oriented at 90°. Because synchros have three stator coils in a 120° orientation, they are more difficult than resolvers to manufacture and are, therefore, more costly.

Modern resolvers are available in a brushless form that employs a transformer to couple the rotor signals from the stator to the rotor. The primary winding of this transformer resides on the stator, and the secondary on the rotor. Other resolvers use more traditional brushes or slip rings to couple the signal into the rotor winding. Brushless resolvers are more rugged than synchros because there are no brushes to break or dislodge, and the life of a brushless resolver is limited only by its bearings. Most resolvers are specified to work over 2 V to 40 V rms (root mean square) and at frequencies from 400 Hz to 10 kHz. Angular accuracies range from 5 arc-minutes to 0.5 arc-minutes.

In operation, synchros and resolvers resemble rotating transformers. The rotor winding is excited by an ac reference voltage, at frequencies up to a few kHz. The magnitude of the voltage induced in any stator winding is proportional to the sine of the angle θ between the rotor-coil axis and the stator-coil axis.
In the case of a synchro, the voltage induced across any pair of stator terminals will be the vector sum of the voltages across the two connected coils. For example, if the rotor of a synchro is excited with a reference voltage, V sin(ωt), across its terminals R1 and R2, the stator terminal voltages will be of the form

V(S1-S3) = V sin(ωt) sin θ
V(S3-S2) = V sin(ωt) sin(θ + 120°)
V(S2-S1) = V sin(ωt) sin(θ + 240°)

where S1, S2, etc., denote the stator terminals. Moreover, V and ω are the input amplitude and frequency, respectively, whereas θ is the shaft angle. In the case of a resolver, with a rotor ac reference voltage of V sin(ωt), the stator terminal voltages will be

V(S1-S3) = V sin(ωt) sin θ
V(S4-S2) = V sin(ωt) sin(θ + 90°) = V sin(ωt) cos θ

As said earlier, the output of these synchros and resolvers must first be digitized. To do this, analog-to-digital converters are used. These are typically 8-bit or 16-bit. An 8-bit converter means that the whole range of analog signals will be converted into a maximum of 2^8 = 256 values.
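As an illustrative sketch of how the digitized resolver outputs might be used: once the carrier V sin(ωt) has been removed, the two stator channels are proportional to sin θ and cos θ, so atan2 recovers the shaft angle over the full circle. The variable names below are assumptions.

```python
import math

def shaft_angle_deg(v_sin: float, v_cos: float) -> float:
    """Shaft angle from demodulated resolver channels."""
    return math.degrees(math.atan2(v_sin, v_cos))

print(shaft_angle_deg(0.5, 0.866))  # approx. 30 degrees
```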
How to measure Velocity?
Velocity or speed sensors measure velocity either by taking consecutive position measurements at known time intervals and computing the time rate of change of the position values, or by finding it directly based on different principles.

Basically, all position sensors when used with certain time bounds can give velocity, e.g., the
number of pulses given by an incremental position encoder divided by the time consumed in
doing so. But this scheme puts some computational load on the controller which may be busy
in some other computations.
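A minimal sketch of this pulse-counting scheme, assuming an illustrative 20-micron distance per pulse and a 10 ms sampling interval:

```python
PITCH_MM = 0.02  # distance per encoder pulse, mm (assumed)
DT = 0.01        # sampling interval, s (assumed)

def velocity_mm_per_s(count_prev: int, count_now: int) -> float:
    """First-difference velocity estimate from two counter readings."""
    return (count_now - count_prev) * PITCH_MM / DT

print(velocity_mm_per_s(1000, 1050))  # 50 pulses in 10 ms -> 100.0 mm/s
```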

How does a tachometer work?


A magnetic field and a rotating shaft move relative to each other. This movement creates an electromotive force in a coil placed in the magnetic field. The electromotive force is proportional to the speed of the shaft. The tachometer processes the data and displays it on a screen.

5. Explain the working of Tachometer to measure velocity with a neat


diagram
Such sensors can directly find the velocity at any instant of time, without much computational load. A tachometer measures the speed of rotation of an element. There are various types of tachometers in use, but a simpler design is based on Fleming's rule, which states that 'the voltage produced is proportional to the rate of flux linkage'. Here, a conductor (basically a coil) is attached to the rotating element, which rotates in a magnetic field (stator). As the speed of the shaft increases, the voltage produced at the coil terminals also increases. Alternatively, as shown in Fig. 4.6, one can put a magnet on the rotating shaft and a coil on the stator. The voltage produced is proportional to the speed of rotation of the shaft. This information is digitized using an analog-to-digital converter and passed on to the computer.

How does a Hall-effect Sensor work?


Hall effect sensors work by measuring the voltage change (Hall voltage) generated when a
current-carrying conductor or semiconductor is placed in a magnetic field, with the voltage
being proportional to the magnetic field strength.
6. Explain the working principle of Hall-effect Sensor to measure Velocity

Another velocity-measuring device is the Hall-effect sensor, whose principle is described


next. If a flat piece of conductor material, called a Hall chip, is attached to a potential difference on its two opposite faces, as indicated in Fig. 4.7, then the voltage across the perpendicular faces is zero. But if a magnetic field is imposed at right angles to the conductor, a voltage is generated on the two other perpendicular faces. The higher the field value, the higher the voltage level. If one provides a ring magnet, the voltage produced is proportional to the speed of rotation of the magnet.

7. Explain the working of Acceleration Sensors.


Similar to measurements of velocity from the information of position sensors, one can find the accelerations as the time rate of change of velocities obtained from velocity sensors or calculated from the position information.

But this is not an efficient way to calculate the acceleration, because it puts a heavy computational load on the computer and can hamper the speed of operation of the system.

Another way to compute the acceleration is to measure the force, which is the result of mass times acceleration. Forces are measured, for example, using strain gauges, for which the formula is

F = ΔR A E / (R G)

where F is the force, ΔR is the change in resistance of the strain gauge, A is the cross-sectional area of the member on which the force is applied, E is the elastic modulus of the strain-gauge material, R is the original resistance of the gauge, and G is the gauge factor of the strain gauge. Then, the acceleration a is the force divided by the mass of the accelerating object m, i.e.,

a = F/m = ΔR A E / (R G m)
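A quick numerical check of the two formulas above; every value below is made up purely for illustration.

```python
dR = 0.05   # change in gauge resistance, ohms (assumed)
A = 1e-4    # cross-sectional area of the member, m^2 (assumed)
E = 200e9   # elastic modulus, Pa (steel-like, assumed)
R = 100.0   # original gauge resistance, ohms (assumed)
G = 2.0     # gauge factor (assumed)
m = 10.0    # mass of the accelerating object, kg (assumed)

F = dR * A * E / (R * G)  # force, newtons
a = F / m                 # acceleration, m/s^2
print(F, a)               # 5000.0 N and 500.0 m/s^2
```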

What is Gauge Factor?

The gauge factor G is the ratio of the fractional change in resistance to the strain applied to the gauge, i.e., G = (ΔR/R)/ε; it quantifies the sensitivity of the strain gauge.

It is pointed out here that the velocities and accelerations that are measured using position sensors require differentiation. This is generally not desirable, as the noise in the measured data, if any, will be amplified. Alternatively, the use of integrators to obtain the velocity from the acceleration, and consequently the position, is recommended; integrators tend to suppress the noise.

How does a Force Sensor work?


A spring balance (an example of a force sensor) consists of a spring fixed at one end with a hook to attach an object at the other. When an object is hung from the hook, the spring extends, and the extension is directly proportional to the weight (force) of the object, according to Hooke's law.

There exist other types of force sensors, e.g., strain-gauge based, Hall-effect sensors, etc.
8. Explain the working of strain-gauge based force sensor with neat
diagram

1. Strain-gauge Based
The principle of this type of sensor is that the elongation of a conductor increases its resistance. Typical resistances for strain gauges are 50–100 ohms. The increase in resistance is due to
• increase in the length of the conductor; and
• decrease in the cross-sectional area of the conductor.

Strain gauges are made of electrical conductors, usually wire or foil, etched on a base material, as shown in Fig. 4.8.

They are glued on the surfaces where strains are to be measured, e.g., R1 and R2 of Fig. 4.9(a). The strains cause changes in the resistances of the strain gauges, which are measured by attaching them to a Wheatstone bridge circuit as one of the four resistances, R1 . . . R4 of Fig. 4.9(b). A Wheatstone bridge circuit is a cheap and accurate method of measuring resistance, and hence strain, but care should be taken regarding temperature changes.

In order to enhance the output voltage and cancel out the resistance changes due to the change in temperature, two strain gauges are used, as shown in Fig. 4.9(a), to measure the force at the end of the cantilever beam.
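A hedged sketch of the bridge measurement follows: the output is simply the difference of the two divider voltages, so a small change in one gauge resistance unbalances the bridge. The 5 V excitation and the R1..R4 labels follow the usual bridge convention and are assumptions here.

```python
V_EX = 5.0  # bridge excitation voltage (assumed)

def bridge_output(R1, R2, R3, R4):
    """Differential output of a Wheatstone bridge with arms R1-R2 and R3-R4."""
    return V_EX * (R1 / (R1 + R2) - R3 / (R3 + R4))

print(bridge_output(100.0, 100.0, 100.0, 100.0))  # balanced -> 0.0 V
print(bridge_output(100.1, 100.0, 100.0, 100.0))  # ~0.00125 V for a 0.1-ohm change
```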
9. Explain the working of Piezoelectric based force sensor with neat diagram
A piezoelectric transducer (also known as a piezoelectric sensor) is a device that uses the
piezoelectric effect to measure changes in acceleration, pressure, strain, temperature or force
by converting this energy into an electrical charge. A transducer can be anything that
converts one form of energy to another.
A piezoelectric material exhibits a phenomenon known as the piezoelectric effect. This effect states that when asymmetrical, elastic crystals are deformed by a force, an electrical potential will be developed within the distorted crystal lattice, as illustrated in Fig. 4.10.

This effect is reversible. That is, if a potential is applied between the surfaces of the crystal, it will change its physical dimensions. The magnitude and polarity of the induced charges are proportional to the magnitude and direction of the applied force. Piezoelectric materials include quartz, tourmaline, Rochelle salt, and others. The range of forces that can be measured using piezoelectric sensors is from 1 to 20 kN, at a ratio of 2 × 10^5. These sensors can be used to measure an instantaneous change in force (dynamic forces).

Current Based Force/Torque Sensor


Since the torque provided by an electric motor is a function of the current drawn, its measurement, along with the known motor characteristics, gives the torque sensing.

EXTERNAL SENSORS
External sensors are primarily used to learn more about a robot's environment, especially the objects being manipulated. External sensors can be divided into the following categories:
• Contact type, and
• Noncontact type.
10. Explain the working of Limit Switch as Contact Type force sensor with neat diagram

A limit switch is constructed much like the ordinary light switch used at homes and offices. It has the same on-off function. A limit switch usually has a pressure-sensitive mechanical arm, as shown in Fig. 4.11(a). When an object applies pressure on the mechanical arm, the switch is energized. An object might have an attached magnet that causes a contact to rise and close when the object passes over the arm. As shown in Fig. 4.11(b), the pull-up resistor keeps the signal at +V until the switch closes, sending the signal to ground. Limit switches can be either Normally Open (NO) or Normally Closed (NC), and may have multiple poles. A normally open switch has continuity when pressure is applied. A single-pole switch allows one circuit to be opened or closed upon contact, whereas a multi-pole switch allows multiple switch circuits to be opened or closed.

Limit switches are mechanical devices, which leads to problems such as:
• they are subject to mechanical failure;
• their mean time between failures is low compared to noncontact sensors; and
• their speed of operation is relatively slow compared to the switching speed of photoelectric micro-sensors, which can be up to 3000 times faster.

Limit switches are used in robots to detect the extreme positions of the motions, where the link reaching an extreme position switches off the corresponding actuator, thus safeguarding against any possible damage to the mechanical structure of the robot arm.

Noncontact Type Sensor or Proximity Sensor


Proximity sensing is the technique of detecting the presence or absence of an object with an electronic noncontact-type sensor. Proximity sensors are of two types, inductive and capacitive.
11. Explain the working of Inductive proximity sensors with neat
diagram
Inductive proximity sensors are used in place of limit switches for noncontact sensing of metallic objects, whereas capacitive proximity sensors are used on the same basis as inductive proximity sensors. However, capacitive sensors can also detect nonmetallic objects.
Inductive Proximity Sensor All inductive proximity sensors consist of four basic elements, namely, the following:
• Sensor coil and ferrite core
• Oscillator circuit
• Detector circuit
• Solid-state output circuit
As shown in Fig. 4.12, the oscillator circuit generates a radio-frequency electromagnetic field. The field is centred around the axis of the ferrite core, which shapes the field and directs it at the sensor face.

When a metal target approaches the face and enters the field, eddy currents are induced into the surface of the target. This results in a loading or damping effect that causes a reduction in the amplitude of the oscillator signal. The detector circuit detects this change in the oscillator amplitude and will 'switch on' at a specific operating amplitude. This signal 'turns on' the solid-state output circuit. This is often referred to as the damped condition.

As the target leaves the sensing field, the oscillator responds with an increase in amplitude. As the amplitude increases above a specific value, it is detected by the detector circuit, which 'switches off', causing the output signal to return to the normal or 'off' state.

The sensing range of an inductive proximity sensor refers to the distance between the sensor face and the target. It also indicates the shape of the sensing field generated through the coil and the core. There are several mechanical and environmental factors that affect the sensing range. The usual range is up to 10–15 mm, but some sensors have ranges as high as 100 mm.

12. Explain the working of Capacitive Proximity sensors with neat


diagram
A capacitive proximity sensor operates much like an inductive proximity sensor. However, the means of sensing is considerably different. Capacitive sensing is based on dielectric capacitance, the property of insulators to store charge. A capacitor consists of two plates separated by an insulator, usually called a dielectric. When the switch is closed, a charge is stored on the two plates. The distance between the plates determines the ability of the capacitor to store charge and can be calibrated as a function of stored charge to determine discrete ON and OFF switching status. Figure 4.13 illustrates the principle of a capacitive sensor.

One capacitive plate is part of the switch, the sensor face is the insulator, and the target is the other plate; ground is the common path. The capacitive switch has the same four elements as the inductive sensor, i.e., sensor (the dielectric media), oscillator circuit, detector circuit, and solid-state output circuit.
The oscillator circuit in a capacitive switch operates like the one in an inductive switch. The oscillator circuit includes capacitance from the external target plate and the internal plate. In a capacitive sensor, the oscillator starts oscillating when sufficient feedback capacitance is detected. Major characteristics of capacitive proximity sensors are as follows:
• They can detect nonmetallic targets.
• They can detect lightweight or small objects that cannot be detected by mechanical limit switches.
• They provide a high switching rate for rapid response in object-counting applications.
• They can detect targets through nonmetallic barriers (glass, plastics, etc.).
• They have a long operational life with a virtually unlimited number of operating cycles.
• The solid-state output provides a bounce-free contact signal.
Capacitive proximity sensors have two major limitations:
• the sensors are affected by moisture and humidity, and
• they must have extended range for effective sensing.
Capacitive proximity sensors have a greater sensing range than inductive proximity sensors. Sensing distance for capacitive switches is a matter of plate area, as coil size is for inductive proximity sensors. Capacitive sensors basically measure a dielectric gap. Accordingly, it is desirable to be able to compensate for the target and application conditions with a sensitivity adjustment for the sensing range. Most capacitive proximity sensors are equipped with a sensitivity adjustment potentiometer.

13. Explain the working of Semiconductor Displacement Sensor with


neat diagram

As shown in Fig. 4.14, a semiconductor displacement sensor uses a semiconductor Light-Emitting Diode (LED) or laser as a light source, and a Position-Sensitive Detector (PSD). The laser beam is focused on the target by a lens. The target reflects the beam, which is then focused onto the PSD, forming a beam spot. The beam spot moves on the PSD as the target moves. The displacement of the workpiece can then be determined by detecting the movement of the beam spot.

14. Explain the purpose of vision systems used with robot.


Vision systems or vision sensors are classified as external noncontact-type sensors. They are used by robots to look around and find parts, for example, picking and placing them at appropriate locations. Earlier, fixtures were used with robots for accurate positioning of the parts. Such fixtures are very expensive. A vision system can provide an economic alternative. Other tasks of vision systems used with robots include the following:

1. Inspection Checking for gross surface defects, discovery of flaws in labelling, verification of the presence of components in assembly, measuring for dimensional accuracy, and checking for the presence of holes and other features in a part.

2. Identification Here, the purpose is to recognize and classify an object rather than to inspect it. Inspection implies that the part must be either accepted or rejected.

3. Visual Servoing and Navigation Control The purpose here is to direct the actions of the robot based on its visual inputs, for example, to control the trajectory of the robot's end-effector toward an object in the workspace. Industrial applications of visual servoing include part positioning, retrieving parts moving along a conveyor, seam tracking in continuous arc welding, etc.
All the above applications in some way require determining the configuration of the objects, the motion of the objects, reconstruction of the 3D geometry of the objects from their 2D images for measurements, and building maps of the environment for a robot's navigation. The coverage of a vision system ranges from a few millimetres to tens of metres, with either narrow or wide angles, depending upon the system needs and design. Figure 4.15 shows a typical vision system connected to an industrial robot.

15. Explain the elements in a Vision Sensor with neat diagram.


In vision systems, the principal imaging component is a complete camera, including the sensing array, associated electronics, output signal format, and lens, as shown in Fig. 4.16.

The task of the camera as a vision sensor is to measure the intensity of the light reflected by an object, as indicated in Fig. 4.16, using a photosensitive element termed a pixel (or photosite). A pixel is capable of transforming light energy into electric energy. Sensors of different types, like CCD, CMOS, etc., are available, depending on the physical principle exploited to realize the energy conversion. Depending on the application, the camera could be RS-170/CCIR, NTSC/PAL (these are, respectively, the American RS-170 monochrome, European/Indian CCIR monochrome, NTSC color, and PAL color television standard signals produced by video cameras), progressive scan, variable scan, or line scan. Five major system parameters which govern the choice of camera are field of view, resolution, working distance, depth of field, and image data acquisition rate. As a rule of thumb, for size measurement, the sensor should have a number of pixels at least twice the ratio of the largest to smallest object sizes of interest.

16. Explain the Camera Systems with a schematic diagram of the vidicon camera
Camera Systems As indicated in Fig. 4.16, a camera is a complex system comprising several devices. Other than the photosensitive sensor, there are a shutter, a lens, and analog preprocessing electronics. The lens is responsible for focusing the light reflected by the object on the plane where the photosensitive sensor lies, called the image plane. In order to use the camera to compute the position and/or orientation of an object, the associated coordinate transformations, etc., must be carried out. This is generally done by software residing inside a personal computer, which also saves the images.
Note that there are two types of video cameras: analog and digital. Analog cameras are not common anymore. However, if one is used, a frame grabber or video capture card, usually a special analog-to-digital converter adapted for video-signal acquisition in the form of a plug-in board installed in the computer, is often required to interface the camera to a host computer. The frame grabber stores the image data from the camera in on-board or system memory, and performs sampling and digitizing of the analog data as necessary. In some cases, the camera may output digital data, which is compatible with a standard computer, so a separate frame grabber may not be needed. Vision software is needed to create the program which processes the image data.
When an image has been analyzed, the system must be able to communicate the result to control the process or to pass information to a database. This requires a digital input/output interface. The human eye and brain can identify objects and interpret scenes under a wide variety of conditions. Robot-vision systems are far less versatile. So the creation of a successful system requires careful consideration of all elements of the system and precise identification of the goals to be accomplished, which should be kept as simple as possible.

Vidicon Camera Early vision systems employed vidicon cameras, which were bulky vacuum-tube devices. They are almost extinct today but are explained here for the sake of completeness in the development of video cameras. Vidicons are more sensitive to electromagnetic noise interference and require high power. Their chief advantages are higher resolution and better light sensitivity.

Figure 4.17 shows the schematic diagram of a vidicon camera. The mosaic reacts to the varying intensity of light by varying its resistance. Now, as the electron gun generates and sends a continuous cathode beam to the mosaic, passing through two pairs of orthogonal capacitors (deflectors), the electron beam gets deflected up or down, and left or right, based on the charge on each pair of capacitors. As the beam scans the image, at each instant the output is proportional to the resistance of the mosaic, i.e., the light intensity on the mosaic. By reading the output voltage continuously, an analog representation of the image can be obtained. Please note that the analog signal of the vidicon needs to be converted to a digital signal using an analog-to-digital converter (ADC), as mentioned in Section 2.1.2, in order to process the image further using a PC. The digitization of the analog signal by the ADC requires mainly three steps, i.e., sampling, quantization, and encoding.

17. Explain the Process of Sampling and quantization.


In sampling, a given analog signal is sampled periodically to obtain a series of discrete-time analog values, as illustrated in Fig. 4.18.

By setting a specified sampling rate, the analog signal can be approximated by the sampled digital outputs. However, while reconstructing the original signal from the sampled data, one may end up with a completely different signal. This loss of information is called aliasing, and it can be a serious problem. In order to prevent aliasing, according to the sampling theorem, the sampling rate must be at least twice the largest frequency in the original video signal if one wishes to reconstruct that signal exactly.
In quantization, each sampled discrete-time voltage level is assigned to a finite number of defined amplitude levels. These levels correspond to the gray scale used in the system. The predefined amplitude levels are characteristic of a particular ADC and consist of a set of discrete voltage values. The number of quantization levels is defined by 2^n, where n is the number of bits of the ADC. For example, a 1-bit ADC will quantize only at two values, whereas with an 8-bit ADC it is possible to quantize at 2^8 = 256 different values. Note that a larger number of bits enables a signal to be represented more precisely. Moreover, the sampling and quantization resolutions are completely independent of each other. Finally, encoding converts the quantized amplitude levels into digital codes, i.e., 0s and 1s. The ability of the encoding process to distinguish between various amplitude levels is a function of the spacing of each quantization level.
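The following sketch illustrates uniform quantization by an n-bit ADC; the 0–5 V input range and the function name are assumptions for illustration.

```python
def quantize(v: float, n_bits: int = 8, v_min: float = 0.0,
             v_max: float = 5.0) -> int:
    """Map an analog sample to one of 2**n_bits discrete codes."""
    levels = 2 ** n_bits
    v = min(max(v, v_min), v_max)  # clamp to the converter's input range
    return int((v - v_min) / (v_max - v_min) * (levels - 1))

print(quantize(2.5))            # mid-scale -> 127 with 8 bits
print(quantize(2.5, n_bits=1))  # a 1-bit ADC distinguishes only 2 values
```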

18. Explain the working of Digital Camera (CCD / CID Camera).

A digital camera is based on solid-state technology. The main part of these cameras is a solid-state silicon-wafer image area that has hundreds of thousands of extremely small photosensitive areas, called photosites, printed on it. Each small area of the wafer is a pixel (short for picture element), a single point in a graphic image. As the image is projected onto the image area, at each pixel location of the wafer a charge is developed that is proportional to the intensity of the light at that location. Hence, a digital camera is also called a Charge-Coupled Device (CCD) camera or Charge-Injection Device (CID) camera. The collection of charges, as shown in Fig. 4.19, if read sequentially, would be a representation of the image pixels.

The output is a discrete representation of the image as a voltage sampled in time. Solid-state cameras are smaller, more rugged, last longer, and have less inherent image distortion than vidicon cameras. They are also slightly more costly, but prices are coming down. The basic principle of a CCD device (image acquisition) is shown in Fig. 4.19. Both CCD and CID chips use charge-transfer techniques to capture an image.

In a CCD camera, light impinges on the optical equivalent of a Random Access Memory (RAM) chip. The light is absorbed in a silicon substrate, with the charge build-up proportional to the amount of light reaching the array. Once a sufficient amount of energy has been received to provide a picture, the charges are read out through built-in control registers. Some CCD chips use an interline charge-transfer technique; others use a frame-transfer approach, which is more flexible for varying the integration time.

A CID camera works on a similar principle. A CID chip is a Metal-Oxide-Semiconductor (MOS) based device with multiple gates, similar to CCDs. The video signal is the result of a current pulse from a recombination of carriers. CIDs produce a better image (less distortion) and use a different read-out technique than CCDs, which requires a separate scanning address unit. CIDs are, therefore, more expensive than CCDs. The principal difference between a CCD and a CID camera is the method of generating the video signal.

Lighting Techniques
One of the key questions in robot vision is: what determines how bright the image of some surface on the object will be? It involves radiometry (measurement of the flow and transfer of radiant energy), general illumination models, and surfaces having both diffuse and specular reflection components. Different points on the objects in front of the imaging system will have different intensity values in the image, depending on the amount of incident radiance, how they are illuminated, how they reflect light, how the reflected light is collected by the lens system, and how the sensor camera responds to the incoming light.

Figure 4.20 shows the basic reflection phenomenon. Hence, proper illumination of the scene is important. It also affects the complexity level of the image-processing algorithm required. The lighting techniques must avoid reflections and shadows unless they are designed for the purpose of image processing. The main task of lighting is to create contrast between the object features to be detected. Typical lighting techniques are explained below.

Direct Incident Lighting This simple lighting technique can be used for non-reflective materials which strongly scatter light due to their matte, porous, fibrous, non-glossy surfaces. Ideally, a ring light, which can be arranged around the lens, is chosen for smaller illuminated fields. Shadows are avoided to the greatest extent due to the absolutely vertical illumination. Halogen lamps and large fluorescent illumination can be used too.
Diffuse Incident Lighting Diffused light is necessary for many applications, e.g., to test reflective, polished, glossy, or metallic objects. It is particularly difficult if these surfaces are not perfectly flat but individually shaped, wrinkled, curved, or cylindrical. To create diffused lighting, one may use incident light with diffusers; coaxial illumination, i.e., light coupled into the axis of the camera by means of a beam splitter or half-mirror; or dome-shaped illumination, where light is diffused by means of a diffuse-coated dome in which the camera looks through an opening in the dome onto the workpiece.

Lateral Lighting Light from the side can be radiated at a relatively wide or narrow angle. The influence on the camera image can be significant. In an extreme case, the image information can almost be inverted.

Dark-Field Lighting At first sight, images captured using dark-field illumination seem unusual to the viewer. The light shines at a shallow angle. According to the principle that the angle of incidence equals the angle of reflection, all the light is directed away from the camera, so the field of view remains dark. Inclined edges, scratches, imprints, slots, and elevations interfere with the beam of light; at these anomalies, the light is reflected towards the camera. Hence, these defects appear bright in the camera image.

Backlighting Transmitted-light illumination is the first choice of lighting when it is necessary to measure parts as accurately as possible. The lighting is arranged on the opposite side of the camera, and the component itself is put in the light beam.

19. Illustrate the steps in a Vision System


As depicted in Fig. 4.21, vision sensing has two steps, namely, image acquisition and image processing.

Image Acquisition In image acquisition, an image is acquired from a vidicon (which is then digitized) or from a digital camera (CCD or CID), as explained in the previous section. The image is stored in computer memory (also called a frame buffer) in a format such as TIFF, JPG, Bitmap, etc. The buffer may be a part of the frame-grabber card or in the computer itself. Note that image acquisition is primarily a hardware function; however, software can be used to control light intensity, focus, camera angle, synchronization, field of view, read times, and other functions. Image acquisition has four principal elements, namely, a light source, either controlled or ambient, as explained in Section 4.1.4; a lens that focuses reflected light from the object onto the image sensor; an image sensor that converts the light image into a stored electrical image; and the electronics to read the sensed image from the image-sensing element and, after processing, transmit the image information to a computer for further processing. A typical acquired image is shown in Fig. 4.22.

Image Processing Image-processing techniques are used to enhance, improve, or otherwise alter an image and to prepare it for image analysis. Usually, during image processing, information is not extracted from the image. The intention is to remove faults and trivial or unimportant information, and to improve the image. Image processing examines the digitized data to locate and recognize an object within the image field. It is divided into several sub-processes, which are discussed below:

Image Data Reduction Here, the objective is to reduce the volume of data. As a preliminary step in the data analysis, schemes like digital conversion or windowing can be applied to reduce the data. While digital conversion reduces the number of gray levels used by the vision system, windowing involves using only a portion of the total image stored in the frame buffer for image processing and analysis. For example, in windowing, to inspect a circuit board, a rectangular window is selected to surround the component of interest, and only pixels within that window are analyzed.

Histogram Analysis A histogram is a representation of the total number of pixels of an image at each gray level. Histogram information is used in a number of different processes, including thresholding. For example, histogram information can help in determining a cut-off point when an image is to be transformed into binary values.

Thresholding It is the process of dividing an image into different portions or levels by picking a certain grayness level as a threshold. Each pixel value is compared with the threshold and then assigned to one of the portions or levels, depending on whether the pixel's grayness level is below the threshold ('off' or 0, not belonging) or above it ('on' or 1, belonging).
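A minimal sketch of this rule on a small array of gray levels (pure Python, illustrative values):

```python
def threshold(image, level):
    """Binarize: 1 where the pixel is at or above the threshold, else 0."""
    return [[1 if pixel >= level else 0 for pixel in row] for row in image]

gray = [[12,  40, 200],
        [35, 180, 220],
        [10,  25,  90]]
print(threshold(gray, 100))  # [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
```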

Masking A mask may be used for many different purposes, e.g., filtering operations, noise reduction, and others. It is possible to create masks that behave like a low-pass filter, such that the higher frequencies of an image are attenuated while the lower frequencies are not changed very much, thereby reducing the noise. As an example of masking an image, consider the portion of an imaginary image shown in Fig. 4.23(a), which has all pixels at a gray value of 20 except one at a gray level of 100. The one with 100 may be considered noise. Applying the 3 × 3 averaging mask shown in Fig. 4.23(b) over this pixel replaces it with the average of itself and its eight neighbours, i.e., (100 + 8 × 20)/9 ≈ 29. The difference between the noisy pixel and the surrounding pixels, i.e., 100 vs. 20, becomes much smaller, namely, 29 vs. 20, thus reducing the noise. With this characteristic, the mask acts as a low-pass filter. Note that the above reduction of noise has been achieved using what is referred to as neighborhood averaging, which also reduces the sharpness of the image.
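The neighborhood-averaging step can be reproduced directly; the sketch below applies an equal-weight 3 × 3 mask at the noisy pixel of an illustrative array.

```python
def average_3x3(img, r, c):
    """Mean of the 3x3 neighborhood centred at (r, c); interior pixels only."""
    total = sum(img[r + dr][c + dc] for dr in (-1, 0, 1) for dc in (-1, 0, 1))
    return round(total / 9)

img = [[20,  20, 20],
       [20, 100, 20],
       [20,  20, 20]]
print(average_3x3(img, 1, 1))  # (100 + 8*20)/9 -> 29, as quoted above
```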

Edge Detection Edge detection is a general name for a class of computer programs and techniques that operate on an image and result in a line drawing of the image. The lines represent changes in values such as cross sections of planes, intersections of planes, textures, lines, etc. In many edge-detection techniques, the resulting edges are not continuous. However, in many applications, continuous edges are preferred; these can be obtained using the Hough transform, a technique used to determine the geometric relationship between different pixels on a line, including the slope of the line. Consider a straight line in the xy-plane, as shown in Fig. 4.24, which is expressed as

y = mx + c    (4.15)
Segmentation Segmentation is a generic name for a number of different techniques that divide the image into segments of its constituents. The purpose of segmentation is to separate the information contained in the image into smaller entities that can be used for other purposes. Segmentation includes edge detection, as mentioned above, region growing and splitting, and others. Region growing works on similar attributes, such as gray-level ranges or other similarities, and then tries to relate the regions by their average similarities; region splitting, in contrast, is carried out based on thresholding, in which an image is split into closed areas of neighborhood pixels by comparing them with a thresholding value or range.

Morphology Operations Morphology operations are a family of operations which are applied to the shape of subjects in an image. They include many different operations, for both binary and gray images, such as thickening, dilation, erosion, skeletonization, opening, closing, and filling. These operations are performed on an image in order to aid in its analysis, as well as to reduce the 'extra' information that may be present in the image. For example, Fig. 4.25(a) shows an object which, after skeletonization, is shown in Fig. 4.25(b).

Image Analysis Image analysis is a collection of operations and techniques that are used to extract information from images. Among these are feature extraction; object recognition; analysis of position, size, and orientation; extraction of depth information, etc. Some techniques can be used for multiple purposes. For example, moment equations may be used for object recognition as well as to calculate the position and orientation of an object. It is assumed that image processing has already been performed on the image and the result is available for image analysis. Some of the image-analysis techniques are explained below.

Feature Extraction Objects in an image may be recognized by features that uniquely characterize them. These include, but are not limited to, gray-level histograms; morphological features such as perimeter, area, diameter, and number of holes; eccentricity; cord length; and moments. As an example, the perimeter of an object may be found by first applying an edge-detection routine and then counting the number of pixels on the perimeter. Similarly, the area can be calculated by region-growing techniques, whereas the diameter of a noncircular object is obtained as the maximum distance between any two points on any line that crosses the identified area of the object. The thinness of an object can be calculated using either of two ratios, defined in terms of measures such as the perimeter, area, and diameter.

Object Recognition The next step in image analysis is to identify the object that the image represents based on the extracted features. The recognition algorithm should be powerful enough to uniquely identify the object. Typical techniques used in industry are template matching and structural techniques.

20. Explain the Hierarchy of a Vision System.


The collection of processes involved in visual perception is often perceived as a hierarchy spanning the range from 'low' via 'intermediate' to 'high-level' vision. The notions of 'low' and 'high-level' vision are used routinely, but there is no clear definition of the distinction between what is considered 'high' as opposed to 'low-level' vision. As shown in Fig. 4.28, vision is classified as 'low', 'intermediate', or 'high-level' based on the specific activities during the image-processing stage. They are explained below.

1. Low-level Vision The sequence of steps from image formation to image acquisition, etc., described above, along with the extraction of certain physical properties of the visible environment, such as depth, three-dimensional shape, object boundaries, or surface-material properties, can be classified as low-level vision.
The activity in low-level vision is to process images for feature extraction (edges, corners, or optical flow). Operations are carried out on the pixels in the image to extract the above properties with respect to intensity or depth at each point in the image. One may, for example, be interested in extracting uniform regions, where the gradient of the pixels remains constant; first-order changes in gradient, which would correspond to straight lines; or second-order changes, which could be used to extract surface properties such as peaks, pits, and ridges. A number of characteristics that are typically associated with low-level vision processes are as follows:

• They are spatially uniform and parallel, i.e., with allowance for the decrease in resolution from the center of the visual field outwards, a similar process is applied simultaneously across the visual field. For example, the processing involved in edge detection, motion, or stereo vision often proceeds in parallel across the visual field, or a large part of it.
• Low-level visual processes are also considered 'bottom-up' in nature. This means that they are determined by the data, i.e., data driven, and are relatively independent of the task at hand or knowledge associated with specific objects. As far as edge detection is concerned, it will be performed in the same manner for images of different objects, with no regard to whether the task has to do with moving around, looking for a misplaced object, or enjoying the landscape.

2. Intermediate-level Vision At this level, objects are recognized and 3D scenes are interpreted using the features obtained from low-level vision. Intermediate-level processing is fundamentally concerned with grouping entities together. The simplest case is when one groups pixels into lines. One can then express the line in a functional form. Similarly, if the output of the low-level information is a depth map, one may further need to distinguish object boundaries or other characteristics. Even in the simple case where one is trying to extract a single sphere, it is not an easy process to go from a surface-depth representation to a center-and-radius representation. In contrast to higher-level vision, the process here does not depend on knowledge about specific objects.
3. High-level Vision High-level vision, which is equivalent to image understanding, is concerned mainly with the interpretation of a scene in terms of the objects in it and is usually based on knowledge of specific objects and relationships. It is concerned primarily with the interpretation and use of the information in the image rather than the direct recovery of physical properties. In high-level vision, interpretation of a scene goes beyond the tasks of line extraction and grouping. It further requires decisions to be made about types of boundaries, such as which are occluding, and what information is hidden from the user. Further grouping is essential at this stage, since one may still need to be able to decide which lines group together to form an object. To do this, it is necessary to further distinguish lines which are part of the object structure from those which are part of a surface texture or caused by shadows. High-level systems are, therefore, object oriented and sometimes called 'top-down'. High-level visual processes are applied to a selected portion of the image, rather than uniformly across the entire image, as done in low- and intermediate-level vision. They almost always require some form of knowledge about the objects of the scene to be included.

21. Explain difficulties in Vision and Remedies.

Difficulties in Vision
• A vision system cannot uniquely represent or process all available data because of the computational problems and the memory and processing-time requirements imposed on the computer. Therefore, the system must compromise.
• Other problems include variation of light, part size, part placement, and limitations in the dynamic ranges available in typical vision sensors.
• A vision system requires specialized hardware and software.
• It is possible to purchase just the hardware with little or no vision application programming; in fact, a few third-party programs are available. A hardware-only approach is less expensive and can be more flexible for handling unusual vision requirements. But, since this approach requires image-processing expertise, it is only of interest to users who wish to retain the responsibility of image interpretation. It is usual practice to obtain the hardware and application software together from the supplier. However, the user might still need custom programming for an application, and major vision-system suppliers specialise in providing software for only a few application areas.

• Every vision system requires a sensor to convert the visual image into an electronic signal. Several types of video sensors are used, including vidicon cameras, vacuum-tube devices, and solid-state sensors. Many of these vision systems were originally designed for other applications, such as television, so the signal must be processed to extract the visual image and remove synchronization information before the signal is sent to the computer for further processing.
• The computer then treats this digital signal as the array of pixels and processes this data to extract the desired information. Image processing can be very time consuming. For a typical sensor of 200,000 or more pixels, a vision system can take many seconds, even minutes, to analyze the complete scene and determine the action to be taken.

• The number of bits to be processed is quite large; for example, a system with a 512 × 512 pixel array and an 8-bit intensity per pixel yields over two million bits to be processed per frame. If a continuous image at a 30 FPS frame rate were being received, data bytes would arrive at an 8 MHz rate.
• Few computers can accept inputs at these data rates, and, in any case, there would be no time left to process the data. When higher-resolution systems, color systems, or multiple-camera systems are considered, the data-handling requirements become astronomical.
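A quick check of the arithmetic quoted in the bullets above:

```python
pixels = 512 * 512               # 262,144 pixels per frame
bits_per_frame = pixels * 8      # 2,097,152 bits -> "over two million"
bytes_per_second = pixels * 30   # one byte per pixel at 30 FPS
print(bits_per_frame, bytes_per_second)  # 2097152 and 7864320 (~8 MB/s)
```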

Remedies
Several methods can be used to reduce the amount of data handled and, therefore, the processing time. They are explained as follows:

• One approach is binary vision, which is used when only black-and-white information is processed (intensity variations and shades of gray are ignored). In binary vision, a picture is converted into a binary image by thresholding, as illustrated in Fig. 4.29. In thresholding, a brightness level is selected; all data with intensities equal to or higher than this value are considered white, and all other levels are considered black.
• Another method of shortening the processing time is to control object placement so that objects of interest cannot overlap in the image. Complicated algorithms to separate images are then not necessary, and the image-processing time is reduced.
• A third approach reduces data handling by processing only a small window of the actual data; that is, the object is located in a predefined field of view. For example, if the robot is looking for a mark on a printed circuit board, the system can be held in such a way that the mark is always in the upper right corner.
• A fourth approach takes a statistical sample of the data and makes decisions on this data sample. Unfortunately, all of these approaches ignore some of the available data and, in effect, produce a less robust system. Processing time is saved, but some types of complex objects cannot be recognized.
Signal conditioning
The basic information or data generated by transducers (or sensors) generally requires 'conditioning' or 'processing' of one sort or another before being presented to the observer as an indication or a record, or being used by a robot controller for further action.
22. Explain the role of Amplifiers and Filters with neat diagrams

Since the electrical signals produced by most transducers of a sensor are at a low voltage, they generally require amplification before they are suitably recognized by a monitoring system like a data processor, controller, or data logger. The use of operational amplifiers with sensors is explained below. The operational amplifier (op-amp) is the most widely utilized analog electronic sub-assembly. It is the basis of instrumentation amplifiers, filters, and countless analog and digital data-processing equipment.

An op-amp can be manufactured in the discrete-element form using, say, ten bipolar junction transistors and as many discrete resistors, or alternatively (and preferably) in the monolithic form as an IC (Integrated Circuit) chip that may be equivalent to over 100 discrete elements. In either form, the device has an input impedance Zi, an output impedance Zo, and a gain K, as indicated in Fig. 4.30(a). A common 8-pin dual in-line package (DIP) or V-package is shown in Fig. 4.30(b).

From Fig. 4.30(a), the open-loop condition yields

v_o = K v_i    (4.22)

where the input voltage v_i is the differential input voltage, defined as the algebraic difference between the voltages at the +ve and −ve leads. Thus,

v_i = v_ip − v_in    (4.23)

The open-loop voltage gain K is typically very high (10^5 – 10^9). The input impedance Zi is typically 2 MΩ (and could be as high as 10 MΩ), while the output impedance Zo is low, typically a few tens of ohms.

The voltages at the two input leads are therefore nearly equal. If a large voltage differential v_i (say, 10 V) were applied at the input, then according to Eq. (4.22) the output voltage should be extremely high. This never happens in practice, because the device saturates quickly beyond moderate output voltages, in the order of 15 V. v_ip is termed the non-inverting input and v_in the inverting input.
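A hedged numerical model of Eqs (4.22) and (4.23) shows why only tiny differential inputs keep the device in its linear range; the gain and the ±15 V saturation level are assumed typical values.

```python
K = 1e5       # open-loop gain (typical order of magnitude, assumed)
V_SAT = 15.0  # output saturation level, volts (assumed)

def opamp_output(v_ip: float, v_in: float) -> float:
    """v_o = K * (v_ip - v_in), clipped at the saturation rails."""
    v_o = K * (v_ip - v_in)
    return max(-V_SAT, min(V_SAT, v_o))

print(opamp_output(1e-6, 0.0))  # 0.1 V: a microvolt input stays linear
print(opamp_output(0.01, 0.0))  # 15.0 V: the output saturates
```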

Filters
A filter is a device that allows through only the desirable part of a signal, rejecting the unwanted part.
Unwanted signals can seriously degrade the performance of a robotic system. External disturbances, error
components in excitations, and noise generated internally within system components and instrumentation are
such spurious signals, which may be removed by a filter. There are four broad categories of filters, namely,
low-pass filters, high-pass filters, band-pass filters, and band-reject (or notch) filters, which are shown in Fig.
4.31.
An analog filter contains active components like transistors or op-amps. It is a physical dynamic system,
typically an electric circuit, whose dynamics will determine which (desired) signal components would be
allowed through and which (unwanted) signal components would be rejected. In a way, output of the dynamic
system is the filtered signal. An analog filter can be represented as a differential equation with respect to time.

Filtering can be achieved through digital filters as well which employ digital signal processing. Digital filtering is
an algorithm by which a sampled signal (or sequence of numbers), acting as an input, is transformed to a
second sequence of numbers called the output. It is a discrete-time system and can be represented as a
difference equation. Digital filtering has the usual digital benefits of accuracy, stability, and adjustability by
software (rather than hardware) changes.

23. Explain the role of Modulators and Demodulators with neat diagrams.
Signals are sometimes deliberately modified to maintain the accuracy during their transmission, conditioning,
and processing. In modulation, the data signal which is referred to as modulating signal is used to vary a
property, say, amplitude or frequency, of a carrier signal. Thus, the carrier signal is modulated by the data
signal.
After transmitting or conditioning, the data signal is recovered by removing the carrier signal from the
modulated signal. This step is known as demodulation. The carrier signal can be either sine or square wave,
and its frequency should be 5–10 times the highest frequency of the data signal.

Figure 4.32 shows some typical modulation techniques in which the amplitude of a high-frequency sinusoidal carrier signal is varied according to the amplitude of the data signal. Figure 4.32(a) shows the data signal which needs to be transmitted. In Amplitude Modulation (AM), the carrier signal's frequency is kept constant while its amplitude is varied in proportion to the amplitude of the data signal; the resulting amplitude-modulated signal is shown in Fig. 4.32(b).

In Fig. 4.32(c), the frequency of the carrier signal is varied in proportion to the amplitude of the data signal (modulating signal), while keeping the amplitude of the carrier signal constant. This is called Frequency Modulation (FM). FM is less susceptible to noise than AM.

In Pulse-Width Modulation (PWM), the carrier signal is a pulse sequence, and the pulse width is changed in proportion to the amplitude of the data signal, while keeping the pulse spacing constant. This is shown in Fig. 4.32(d).

The PWM signals can be used directly in controlling a process without having to demodulate them. There also
exists Pulse-Frequency Modulation (PFM), where the carrier signal is a pulse sequence. Here, the frequency of
the pulses is changed in proportion to the value of the data signal, while keeping the pulse width constant. It
has the advantages of ordinary FM.

Demodulation or detection is the process of extracting the original data signal from a modulated signal. A simple and straightforward method of demodulation is detection of the envelope of the modulated signal.
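The following NumPy sketch illustrates both steps for AM (the frequencies and modulation depth are assumed purely for illustration). Demodulation is done by rectifying the modulated signal and smoothing it with a moving average:

import numpy as np

fs = 10_000                            # sampling rate, Hz (assumed)
t = np.arange(0.0, 0.1, 1.0 / fs)
data = np.sin(2 * np.pi * 50 * t)      # 50 Hz data (modulating) signal
fc = 500.0                             # carrier at 10x the data frequency
am = (1.0 + 0.5 * data) * np.cos(2 * np.pi * fc * t)   # amplitude modulation

# Envelope detection: rectify, then low-pass with a moving average
rectified = np.abs(am)
window = 2 * int(fs / fc)              # average over ~2 carrier cycles
envelope = np.convolve(rectified, np.ones(window) / window, mode="same")
# 'envelope' now tracks (1 + 0.5*data) up to a constant scale factor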

23. Explain the working of Analog and Digital Conversions


Most sensors have analog outputs, while much data processing is done using digital computers. Hence, conversions between these two domains have to be performed. These are achieved using an analog-to-digital converter (ADC) and a digital-to-analog converter (DAC). Some ADCs use a DAC as a component.

1. Digital-to-Analog Converter (DAC)


The function of a digital-to-analog converter or DAC is to convert a sequence of digital words stored in a data register, typically in the straight binary form, into an analog signal. A typical DAC unit is an active circuit in the integrated circuit (IC) form and consists of a data register (digital circuits), solid-state switching circuits, resistors, and op-amps powered by an external power supply. The IC chip that represents the DAC is usually one of many components mounted on a Printed Circuit Board (PCB), which is the I/O board or card. This board is plugged into a slot of a PC for data acquisition (DAQ).

2. Analog-to-Digital Converter (ADC)

An analog-to-digital converter or ADC, on the other hand, converts an analog signal into the digital form, according to an appropriate code, before the same is used by a digital processor or a computer. The process of analog-to-digital conversion is more complex and time consuming than digital-to-analog conversion. ADCs are usually more costly, and their conversion rate is slower than that of DACs. Several types of ADCs are commercially available.

Note that the most fundamental property of any DAC or ADC is the number of bits for which it is designed, since this sets a basic limit on resolution. Units of 8 to 12 bits are most common, even though units with more bits are also available. Both the DAC and the ADC are elements of a typical input/output board (or I/O board, or data-acquisition and control card, i.e., DAC or DAQ), and are usually situated on the same digital interface board.

24. Explain the working of Bridge Circuits

Various bridge circuits are employed widely for the measurement of resistance, capacitance, and inductance,
as many transducers convert physical variables to these quantities. Figure 4.33 shows a purely resistive
(Wheatstone) bridge in its simplest form.

The basic principle of the bridge may be applied in two different ways, namely, the null method and the deflection method. If the resistances are adjusted so that the bridge is balanced, then there is no voltage across AC, i.e., VAC = 0. This happens when R1/R4 = R2/R3. Now, if one of the resistors, say R1, changes its resistance, the bridge becomes unbalanced and a voltage appears across AC, causing a meter reading. This meter reading is an indication of the change in R1 of the transducer element, and can be utilized to compute the change. This method of measurement is called the deflection method.

In the null method, one of the resistors is manually adjustable. Thus, if R1 changes, causing a meter deflection, R2 can be adjusted manually till its effect just cancels that of R1 and the bridge is returned to its balanced position. Here, the change in R1 is directly related to the change in R2 required to effect the balance. Note that both the deflection and null methods require a calibration curve so that one knows the numerical value of R1 or R2 that caused the imbalance or balance, respectively.

Note that measurements of rapid dynamic phenomena can be done using the deflection method. Moreover, based on the alternating-current (ac) and direct-current (dc) excitations of the bridge, there are ac bridges and dc bridges, respectively.
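A small Python sketch of the deflection method (the node labelling of Fig. 4.33 is assumed to follow the usual Wheatstone arrangement, with the supply across BD and the meter across AC):

def bridge_output(r1, r2, r3, r4, v_supply=10.0):
    """Voltage across AC of a Wheatstone bridge; zero when R1/R4 = R2/R3."""
    return v_supply * (r1 / (r1 + r4) - r2 / (r2 + r3))

print(bridge_output(100.0, 100.0, 100.0, 100.0))   # balanced: 0.0 V
# A small change in R1 (e.g., a strain gauge under load) unbalances the bridge:
print(bridge_output(101.0, 100.0, 100.0, 100.0))   # deflection of ~24.9 mV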
25. Explain the working of Signal Analyzer
Modern signal analyzers employ digital techniques of signal analysis to extract useful information that is carried by the signal. Digital Fourier analysis using the Fast Fourier Transform (FFT) is perhaps the single most common procedure, used in the vast majority of signal analyzers. Fourier analysis produces the frequency spectrum of a time signal, which is explained here in brief. Any periodic signal f(t) can be decomposed into a number of sines and cosines of different amplitudes, an and bn, and frequencies nω, for n = 1, 2, …, ∞, which is expressed as

f(t) = a0/2 + Σ (n = 1, 2, …, ∞) [an cos(nωt) + bn sin(nωt)] (4.25)

If one adds sine and cosine functions together, the original signal can be reconstructed. Equation (4.25) is
called a Fourier series, and the collection of different frequencies present in the equation is called the
frequency spectrum or frequency content of the signal.

Even though the signal itself is in the amplitude–time domain, the frequency spectrum is in the amplitude–frequency domain. For example, the function f(t) = sin(t) of Fig. 4.34(a) consists of only one frequency with constant amplitude, so its spectrum is a single line at the given frequency, as shown in Fig. 4.34(b). From this plot, represented by the arrow in Fig. 4.34(b), which gives both the frequency and the amplitude, the same sine function can be reconstructed. The plots in Fig. 4.35 are similar and represent partial sums of sine functions approximating a square wave, with the constituent frequencies again plotted in the frequency–amplitude domain. Clearly, as the number of frequencies contained in f(t) increases, the summation becomes closer to a square function. Theoretically, to reconstruct a square wave from sine functions, an infinite number of sines must be added together. In practice, however, some of the major frequencies within the spectrum will have larger amplitudes. These major frequencies or harmonics are used in identifying and labeling a signal, including recognizing shapes, objects, etc.
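A minimal NumPy sketch of obtaining such a frequency spectrum via the FFT (the two component frequencies are assumed purely for illustration):

import numpy as np

fs = 1000                                   # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
f = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

amplitude = np.abs(np.fft.rfft(f)) / (len(f) / 2)   # amplitude spectrum
freqs = np.fft.rfftfreq(len(f), 1.0 / fs)

# The dominant harmonics appear at 5 Hz (amplitude ~1.0) and 12 Hz (~0.5):
for fr, amp in zip(freqs, amplitude):
    if amp > 0.1:
        print(f"{fr:.0f} Hz, amplitude {amp:.2f}")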

26. Explain how a user or a designer selects appropriate sensors for a robotic application.

In using sensors, one must first decide what the sensor is supposed to do and what result one expects. A sensor detects the quantity to be measured (the measurand). The transducer converts the detected measurand into a convenient form for subsequent use, e.g., for control or actuation. The transducer signal may be filtered, amplified, and otherwise modified using suitable devices. Selection of sensors for robotic applications relies heavily on their performance specifications.

The majority of manufacturers provide what are actually static parameters. However, dynamic parameters are also important and are covered within the scope of the syllabus.
The following definitions will help a user or designer select appropriate sensors for a robotic application.

1. Range:
Range or span is a measure of the difference between the minimum and maximum values of a sensor's input or output (response) over which a required level of output accuracy is maintained. For example, a strain gauge might be able to measure forces over the range from 0.1 to 10 newtons.

2. Sensitivity
Sensitivity is defined as the ratio of the change in output to the change in input. As an example, if a movement of 0.025 mm in a linear potentiometer causes an output voltage change of 0.02 volt, then the sensitivity is 0.8 volts per mm (0.02/0.025). The term is sometimes also used to indicate the smallest change in input that will be observable as a change in output. Usually, the maximum sensitivity that provides a linear and accurate signal is desired.

3. Linearity
Perfect linearity would allow output versus input to be plotted as a straight line on graph paper. Linearity is a measure of the constancy of the ratio of output to input. In the form of an equation, it is

y = mx (4.27)

where x is the input, y is the output, and m is a constant. If m is a variable, the relationship is not linear. For example, m may be a function of x, such as m = a + bx, where the value of b introduces a nonlinearity. A measure of the nonlinearity could be given as the value of b.
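A tiny sketch ties the two definitions together (the potentiometer figures are those of the sensitivity example above; the nonlinearity coefficient b is assumed for illustration):

# Sensitivity: ratio of output change to input change
sensitivity = 0.02 / 0.025          # 0.02 V per 0.025 mm -> 0.8 V/mm

# Linearity: y = m*x with m = a + b*x; b = 0 would be perfectly linear
def output(x, a=0.8, b=0.01):       # assumed coefficients
    return (a + b * x) * x

print(sensitivity)                  # 0.8
print(output(1.0), output(2.0))     # 0.81, 1.64: the ratio y/x drifts with x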
4. Response Time
Response time is the time required for a sensor to respond completely to a change in input. The response time of a system with sensors is the combination of the responses of all individual components, including the sensor. An important aspect in selecting an appropriate sensor is to match its time response to that of the complete system. Associated definitions such as rise time, peak time, and settling time relate to the dynamic response of a sensor.
5. Bandwidth
It determines the maximum speed or frequency at which an instrument, associated with a sensor or otherwise, is capable of operating. High bandwidth implies faster speed of response. Instrument bandwidth should be several times greater than the maximum frequency of interest in the input signals.
6. Accuracy
Accuracy is a measure of the difference between the measured and actual values. An accuracy of ±0.025 mm
means that under all circumstances considered, the measured value will be within 0.025 mm of the actual
value. In positioning a robot and its end-effector, verification of this level of accuracy would require careful
measurement of the position of the end-effector with respect to the base reference location with an overall
accuracy of 0.025 mm under all conditions of temperature, acceleration, velocity, and loading. Precision-
measuring equipment, carefully calibrated against secondary standards, would be necessary to verify this
accuracy. Accuracy describes ‘closeness to true values.’

7. Repeatability and Precision
Repeatability is a measure of the difference in value between two successive
measurements under the same conditions, and is a far less stringent criterion than accuracy. As long as the
forces, temperature, and other parameters have not changed, one would expect the successive values to be
the same, however poor the accuracy is. An associated definition is precision, which means the ‘closeness of
agreement’ between independent measurements of a quantity under the same conditions without any
reference to the true value, as done above. Note that the number of divisions on the scale of the measuring
device generally affects the consistency of repeated measurement and, therefore, the precision. In a way,
precision describes ‘repeatability.’ Figure 4.36 illustrates the difference between accuracy and precision.

8. Resolution and Threshold
Resolution is a measure of the number of measurements within a range from minimum to maximum. It is also used to indicate the smallest increment of the measured value that is observable. Threshold is a particular case of resolution: it is defined as the minimum value of input below which no output can be detected.
9. Hysteresis
It is defined as the change in the input/output curve when the direction of motion changes, as indicated in Fig. 4.37.

This behavior is common in loose components such as gears, which have backlash, and in magnetic devices
with ferromagnetic media, and others.
10. Type of Output
Type of Output can be in the form of a mechanical movement, an electrical current or voltage, a pressure, or
liquid level, a light intensity, or another form. To be useful, it must be converted to another form, as in the
LVDT (Linear Variable Differential Transducer) or strain gauges, which are discussed earlier. In addition to the
above characteristics, the following considerations must also be made while selecting a sensor.

11. Size and Weight


Size and weight are usually important physical characteristics of sensors. If the sensor is to be mounted on the
robot hand or arm, it becomes a part of the mass that must be accelerated and decelerated by the
drive motors of the wrist and arm. So, it directly affects the performance of the robot. It is a challenge to
sensor designers to reduce size and weight. An early wrist force-torque sensor, for example, was about 125
mm in diameter but was reduced to about 75 mm in diameter through careful redesign.

12. Environmental Conditions


Power requirement and its easy availability should be considered. Besides, conditions like chemical reactions
including corrosion, extreme temperatures, light, dirt accumulation, electromagnetic field, radioactive
environments, shock and vibrations, etc., should be taken into account while selecting a sensor or considering
how to shield them.
13. Reliability and Maintainability
Reliability is of major importance in all robot applications. It can be measured in terms of Mean Time To
Failure (MTTF) as the average number of hours between failures that cause some part of the sensor to become
inoperative. In industrial use, the total robot system is expected to be available as much as 98 or 99% of the
working days. Since there are hundreds of components in a robot system, each one must have a very high
reliability. Some otherwise good sensors cannot stand the daily environmental stress and, therefore, cannot be
used with robots. Part of the requirement for reliability is ease of maintenance. A sensor that can be easily
replaced does not have to be as reliable as one that is hidden in the depths of the robot. Maintainability is measured in terms of Mean Time To Repair (MTTR).
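As an illustrative worked example (the figures are assumed, not from the text): availability can be estimated as MTTF/(MTTF + MTTR). With a hypothetical MTTF of 990 hours and an MTTR of 10 hours, availability = 990/(990 + 10) = 99%, consistent with the 98–99% expectation quoted above.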
14. Interfacing
Interfacing of sensors with signal-conditioning devices and the controller of the robot is often a determining
factor in the usefulness of sensors. Nonstandard plugs or requirements for nonstandard voltages and currents
may make a sensor too complex and expensive to use. Also, the signals from a sensor must be compatible with
other equipment being used if the system is to work properly.
15. Others
Other aspects like initial cost, maintenance cost, cost of disposal and replacement, reputation of
manufacturers, operational simplicity, ease of availability of the sensors and their spares should be taken into
account. On many occasions, these nontechnical considerations become the ultimate deciding factor in the
selection of sensors for an application.

--------------End of Module 3 --------------
