SKP Module3 PDF
1. Encoder
The encoder is a digital optical device that converts motion into a sequence of digital pulses.
By counting a single bit or by decoding a set of bits, the pulses can be converted to relative or
absolute measurements. Thus, encoders are of the incremental or absolute type. Further, each
type may again be linear or rotary.
Incremental Linear Encoder As shown in Fig. 4.2(a), it has a transparent glass scale with
opaque gratings. The thickness of the grating lines and the gap between them are made equal,
both in the range of microns. One side of the scale is provided with a light source and a
condenser lens. On the other side there are light-sensitive cells. The resistance of the cells
(photodiodes) decreases whenever a beam of light falls on them. Thus, a pulse is generated
each time the beam of light is intersected by an opaque line. This pulse is fed to the controller,
which updates a counter (a record of the distance traveled).
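To make the counting step concrete, a minimal Python sketch is given below (not from the source); the 20-micron grating pitch is an assumed example value and distance_travelled is a hypothetical helper name.

# Minimal sketch: converting incremental-encoder pulse counts into a
# relative linear displacement; one pulse per opaque line crossed.
GRATING_PITCH_MM = 0.02   # assumed pitch (line + gap) of 20 microns

def distance_travelled(pulse_count: int) -> float:
    """Relative displacement in mm accumulated by the counter."""
    return pulse_count * GRATING_PITCH_MM

print(distance_travelled(500))  # 500 pulses -> 10.0 mm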
Incremental Rotary Encoder It is similar to the linear incremental encoder, with the difference
that the gratings are now on a circular disc, as in Fig. 4.2(c). The common value of the width
of the transparent spaces is 20 microns. There are two sets of grating lines on two different
circles, which detect the direction of rotation and can also enhance the accuracy of the sensor.
There is another circle which contains only one grating mark. It is used for the measurement
of full rotations.
Absolute Rotary Encoder Similar to the absolute linear encoder, the circular disk is divided
into a number of circular strips, and each strip has definite arc segments, as shown in Fig.
4.2(d). This sensor directly gives the digital (absolute) output. The encoder is mounted
directly on the motor shaft or with some gearing to enhance the accuracy of measurement.
To avoid noise in this encoder, a Gray code is sometimes used. A Gray code, unlike a binary
code, allows only one of the binary bits in a code sequence to change between radial lines. It
prevents confusing changes in the binary output of the absolute encoder when the encoder
oscillates between points. A sample Gray code is given in Table 4.1 for some numbers. Note
the difference between the Gray and binary codes. The basic arrangement of the rotary
encoder is shown in Fig. 4.2(e).
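The one-bit-change property of the Gray code can be verified with a short Python sketch (an illustration, not from the source); binary_to_gray and gray_to_binary are hypothetical helper names.

# Sketch: binary <-> Gray conversion; consecutive Gray codes differ in
# exactly one bit, which is why an oscillating encoder cannot produce
# a wildly wrong absolute reading.
def binary_to_gray(n: int) -> int:
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    n = 0
    while g:          # XOR of g, g>>1, g>>2, ... recovers the binary value
        n ^= g
        g >>= 1
    return n

for i in range(8):
    print(i, format(binary_to_gray(i), '03b'))
# Output: 000, 001, 011, 010, 110, 111, 101, 100 -- one bit changes per step.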
2. Explain the working of Potentiometer Sensor with a neat diagram.
A potentiometer, also referred to as simply a pot, is a variable-resistance device that expresses
linear or angular displacements in terms of voltage, as shown in Figs. 4.3(a-b), respectively. It
consists of a wiper that makes contact with a resistive element, and as this point of contact
moves, the resistance between the wiper and the end leads of the device changes in proportion
to the displacement, x and θ for linear and angular potentiometers, respectively.
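As a worked illustration (assumed 5 V excitation and 100 mm stroke, both example values), the wiper voltage of an ideal linear pot maps to displacement as follows.

# Sketch: displacement from wiper voltage for an ideal linear potentiometer.
V_SUPPLY = 5.0      # assumed excitation voltage (V)
STROKE_MM = 100.0   # assumed full travel (mm)

def displacement_from_voltage(v_wiper: float) -> float:
    """x in mm, assuming perfect linearity of the resistive element."""
    return (v_wiper / V_SUPPLY) * STROKE_MM

print(displacement_from_voltage(1.25))  # 1.25 V -> 25.0 mm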
3. Explain the working of LVDT with a neat diagram.
In a Linear Variable Differential Transformer (LVDT), there is a central core surrounded by
two identical secondary coils and a primary coil, as shown in Fig. 4.4. As the core changes
position with respect to the coils, it changes the magnetic field, and hence the voltage
amplitude in the secondary coil changes as a linear function of the core displacement over a
considerable segment. A Rotary Variable Differential Transformer (RVDT) operates under
the same principle as the LVDT and is also available, with a range of approximately ±40°.
4. Explain the working of Synchros and Resolvers to measure position with
neat diagram
While encoders give digital output, synchros and resolvers provide analog signals as their
output. They consist of a rotating shaft (rotor) and a stationary housing (stator). Their
signals must be converted into digital form through an analog-to-digital converter before
the signal is fed to the computer.
As illustrated in Fig. 4.5, synchros and resolvers employ single-winding rotors that revolve
inside fixed stators. In a simple synchro, the stator has three windings oriented 120° apart and
electrically connected in a Y-connection.
Resolvers have only two windings in their stators, oriented at 90°. Because synchros have
three stator coils in a 120° orientation, they are more difficult than resolvers to manufacture
and are, therefore, more costly.
Modern resolvers, in contrast, are available in a brushless form that employs a transformer to
couple the rotor signals from the stator to the rotor. The primary winding of this transformer
resides on the stator, and the secondary on the rotor.
Other resolvers use more traditional brushes or slip rings to couple the signal into the rotor
winding. Brushless resolvers are more rugged than synchros because there are no brushes to
break or dislodge, and the life of a brushless resolver is limited only by its bearings. Most
resolvers are specified to work over 2 V to 40 V rms (root mean square) and at frequencies
from 400 Hz to 10 kHz. Angular accuracies range from 5 arc-minutes to 0.5 arc-minutes.
With a rotor ac reference voltage of V sin(ωt), the stator terminal voltages of a synchro are

V(S1-S3) = V sin(ωt) sin θ
V(S3-S2) = V sin(ωt) sin(θ + 120°)
V(S2-S1) = V sin(ωt) sin(θ + 240°)

where S1, S2, etc., denote the stator terminals. Moreover, V and ω are the input amplitude
and frequency, respectively, whereas θ is the shaft angle. In the case of a resolver, with a rotor
ac reference voltage of V sin(ωt), the stator's terminal voltages will be

V(S1-S3) = V sin(ωt) sin θ
V(S4-S2) = V sin(ωt) cos θ
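Since the two resolver channels are proportional to sin θ and cos θ, the shaft angle can be recovered with a two-argument arctangent once the carrier has been demodulated. The sketch below is an illustration under that assumption, not a description of any particular converter chip.

# Sketch: shaft angle from demodulated resolver stator voltages.
import math

def resolver_angle(v_sin: float, v_cos: float) -> float:
    """Shaft angle in degrees, valid over the full 360-degree circle."""
    return math.degrees(math.atan2(v_sin, v_cos))

print(resolver_angle(0.5, 0.866))  # approximately 30 degrees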
As said earlier, the output of these synchros and resolvers must first be converted into
digital form. To do this, analog-to-digital converters are used. These are typically 8-bit or
16-bit. An 8-bit converter means that the whole range of analog signals will be converted
into a maximum of 2^8 = 256 values.
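The effect of the bit count on resolution can be made concrete with a small sketch; the 0-10 V input range is an assumed example.

# Sketch: smallest voltage step (LSB) of an n-bit ADC over a 0-10 V range.
V_RANGE = 10.0  # assumed full-scale analog range (V)

for bits in (8, 16):
    levels = 2 ** bits
    print(bits, levels, V_RANGE / levels)
# 8 bits -> 256 levels of about 39 mV each;
# 16 bits -> 65536 levels of about 0.15 mV each.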
How to measure Velocity?
Velocity or speed sensors measure speed either by taking consecutive position measurements
at known time intervals and computing the time rate of change of the position values, or by
finding it directly based on different principles.
Basically, any position sensor can give velocity when used with timing information, e.g., by
dividing the number of pulses given by an incremental position encoder by the time consumed
in producing them. But this scheme puts some computational load on the controller, which
may be busy with other computations.
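A minimal sketch of this pulse-counting scheme is given below (assumed pitch and sampling interval, both illustrative).

# Sketch: velocity from an incremental encoder by counting pulses over a
# fixed sampling interval.
PITCH_MM = 0.02   # assumed distance represented by one pulse (mm)
DT_S = 0.01       # assumed sampling interval (10 ms)

def velocity_mm_per_s(pulses_in_interval: int) -> float:
    return pulses_in_interval * PITCH_MM / DT_S

print(velocity_mm_per_s(50))  # 50 pulses in 10 ms -> 100.0 mm/s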
How to measure Acceleration?
In principle, acceleration can be obtained by differentiating successive velocity measurements.
But this is not an efficient way to calculate the acceleration, because it puts a heavy
computational load on the computer and can hamper the speed of operation of the system.
Another way to compute the acceleration is to measure the force, which is the result of mass
times acceleration. Forces are measured, for example, using strain gauges, for which the
formula is

F = ΔR A E / (R G)

where F is the force, ΔR is the change in resistance of the strain gauge, A is the cross-sectional
area of the member on which the force is applied, E is the elastic modulus of the strain-gauge
material, R is the original resistance of the gauge, and G is the gauge factor of the strain
gauge. Then, the acceleration a is the force divided by the mass m of the accelerating object,
i.e.,

a = F/m = ΔR A E / (R G m)
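Plugging assumed illustrative numbers into this relation gives, for example:

# Sketch of a = dR*A*E/(R*G*m); all values below are assumed examples.
def acceleration(dR, A, E, R, G, m):
    """a = F/m, with F = dR*A*E/(R*G) from the strain-gauge relation."""
    force = dR * A * E / (R * G)
    return force / m

# dR = 0.1 ohm change on a 100-ohm gauge (G = 2), steel member (E = 200 GPa),
# cross-section 1 cm^2, accelerating mass 10 kg:
print(acceleration(dR=0.1, A=1e-4, E=200e9, R=100.0, G=2.0, m=10.0))  # m/s^2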
There exist various types of force sensors, e.g., strain-gauge based, Hall-effect sensors, etc.
8. Explain the working of strain-gauge based force sensor with neat
diagram
1. Strain-gauge Based
The principle of this type of sensor is that the elongation of a conductor increases its
resistance. Typical resistances for strain gauges are 50–100 ohms. The increase in resistance
is due to:
• Increase in the length of the conductor; and
• Decrease in the cross-section area of the conductor.
Strain gauges are made of electrical conductors, usually wire or foil, etched on a base
material, as shown in Fig. 4.8.
They are glued on the surfaces where strains are to be measured, e.g., R1 and R2 of Fig.
4.9(a). The strains cause changes in the resistances of the strain gauges, which are measured
by attaching them to a Wheatstone bridge circuit as one of the four resistances, R1 . . . R4 of
Fig. 4.9(b). The Wheatstone bridge is a cheap and accurate method of measuring strain, but
care should be taken with temperature changes. In order to enhance the output voltage and
cancel out the resistance changes due to the change in temperature, two strain gauges are
used, as shown in Fig. 4.9(a), to measure the force at the end of the cantilever beam.
9. Explain the working of Piezoelectric-Based force sensor with neat
diagram
A piezoelectric transducer (also known as a piezoelectric sensor) is a device that uses the
piezoelectric effect to measure changes in acceleration, pressure, strain, temperature or force
by converting this energy into an electrical charge. A transducer can be anything that
converts one form of energy to another.
A piezoelectric material exhibits a phenomenon known as the piezoelectric effect. This effect
states that when asymmetrical, elastic crystals are deformed by a force, an electrical potential
will be developed within the distorted crystal lattice, as illustrated in Fig. 4.10.
This effect is reversible. That is, if a potential is applied between the surfaces of the crystal, it
will change its physical dimensions. The magnitude and polarity of the induced charges are
proportional to the magnitude and direction of the applied force. The piezoelectric materials
are quartz, tourmaline, Rochelle salt, and others. The range of forces that can be measured
using piezoelectric sensors is from about 1 to 20 kN. These sensors can be used to measure an
instantaneous change in force (dynamic forces).
EXTERNAL SENSORS
External sensors are primarily used to learn more about a robot's environment, especially the
objects being manipulated. External sensors can be divided into the following categories:
• Contact type, and
• Noncontact type.
10. Explain the working of Limit Switch as Contact Type force sensor
with neat diagram
A limit switch is constructed much like the ordinary light switch used at homes and offices. It
has the same on-off characteristic. The limit switch usually has a pressure-sensitive
mechanical arm, as shown in Fig. 4.11(a). When an object applies pressure on the mechanical
arm, the switch is energized. An object might have an attached magnet that causes a contact
to rise and close when the object passes over the arm. As shown in Fig. 4.11(b),
the pull-up resistor keeps the signal at +V until the switch closes, sending the signal to
ground. Limit switches can be either Normally Open (NO) or Normally Closed (NC), and
may have multiple poles. A normally open switch has continuity when pressure is applied. A
single-pole switch allows one circuit to be opened or closed upon contact, whereas a multi-
pole switch allows multiple switch circuits to be opened or closed.
Limit switches are used in robots to detect the extreme positions of the motions, where the
link reaching an extreme position switches off the corresponding actuator, thus safeguarding
against any possible damage to the mechanical structure of the robot arm.
Inductive Proximity Sensor When a metal target approaches the sensor face and enters its
field, eddy currents are induced into the surface of the target. This results in a loading or
damping effect that causes a reduction in the amplitude of the oscillator signal. The detector
circuit detects the change in the oscillator amplitude. The detector circuit will 'switch on' at a
specific operating amplitude. This signal 'turns on' the solid-state output circuit. This is often
referred to as the damped condition. As the target leaves the sensing field, the oscillator
responds with an increase in amplitude. As the amplitude increases above a specific value, it
is detected by the detector circuit, which is 'switched off', causing the output signal to return
to the normal or 'off' state.
The sensing range of an inductive proximity sensor refers to the distance between the sensor
face and the target. It also indicates the shape of the sensing field generated through the coil
and the core. There are several mechanical and environmental factors that affect the sensing
range. The usual range is up to 10–15 mm, but some sensors have ranges as high as 100 mm.
Capacitive Proximity Sensor One capacitive plate is part of the switch, the sensor face is the
insulator, and the target is the other plate. Ground is the common path. The capacitive switch
has the same four elements as the inductive sensor, i.e., sensor (the dielectric media),
oscillator circuit, detector circuit, and solid-state output circuit.
The oscillator circuit in a capacitive switch operates like one in an inductive switch. The
oscillator circuit includes capacitance from the external target plate and the internal plate. In a
capacitive sensor, the oscillator starts oscillating when sufficient feedback capacitance is
detected. Major characteristics of the capacitive proximity sensors are as follows:
• They can detect non-metallic targets.
• They can detect lightweight or small objects that cannot be detected by mechanical
limit switches.
• They provide a high switching rate for rapid response in object-counting
applications.
• They can detect liquid targets through nonmetallic barriers (glass, plastics, etc.).
• They have a long operational life with a virtually unlimited number of operating
cycles.
• The solid-state output provides a bounce-free contact signal.
Capacitive proximity sensors have two major limitations:
• The sensors are affected by moisture and humidity, and
• They must have extended range for effective sensing.
Capacitive proximity sensors have a greater sensing range than inductive proximity sensors.
Sensing distance for capacitive switches is a matter of plate area, as coil size is for inductive
proximity sensors. Capacitive sensors basically measure a dielectric gap. Accordingly, it is
desirable to be able to compensate for the target and application conditions with a sensitivity
adjustment for the sensing range. Most capacitive proximity sensors are equipped with a
sensitivity adjustment potentiometer.
3. Visual Servoing and Navigation Control The purpose here is to direct the actions of
the robot based on its visual inputs, for example, to control the trajectory of the robot's
end-effector toward an object in the workspace. Industrial applications of visual
servoing are part positioning, retrieving parts moving along a conveyor, seam tracking
in continuous arc welding, etc.
All the above applications in some way require the determination of the configuration of the
objects, the motion of the objects, the reconstruction of the 3D geometry of the objects from
their 2D images for measurements, and the building of maps of the environments for a robot's
navigation. Coverage of a vision system is from a few millimetres to tens of metres, with
either narrow or wide angles, depending upon the system needs and design. Figure 4.15
shows a typical vision system connected to an industrial robot.
Vidicon Camera Early vision systems employed vidicon cameras, which were bulky
vacuum-tube devices. They are almost extinct today but are explained here for the sake of
completeness in the development of video cameras. Vidicons are also more sensitive to
electromagnetic noise interference and require high power. Their chief advantages are higher
resolution and better light sensitivity.
Figure 4.17 shows the schematic diagram of a vidicon camera. The mosaic reacts to the
varying intensity of light by varying its resistance. Then, as the electron gun generates and
sends a continuous cathode beam to the mosaic, passing through two pairs of orthogonal
capacitors (deflectors), the electron beam gets deflected up or down, and left or right, based
on the charge on each pair of capacitors. As the beam scans the image, at each instant the
output is proportional to the resistance of the mosaic or the light intensity on the mosaic. By
reading the output voltage continuously, an analog representation of the image can be
obtained. Please note that the analog signal of the vidicon needs to be converted to a digital
signal using analog-to-digital converters (ADC), as mentioned in Section 2.1.2, in order to
process the image further using a PC. The ADC, which actually performs the digitization of
the analog signal, requires mainly three steps, i.e., sampling, quantization, and encoding.
The output is a discrete representation of the image as a voltage sampled in time. Solid-state
cameras are smaller, more rugged, last longer, and have less inherent image distortion than
vidicon cameras. They are also slightly more costly, but prices are coming down. Figure 4.19
shows the basic principle of a CCD device (image acquisition). Both the CCD and CID chips
use charge-transfer techniques to capture an image.
Lighting Techniques
One of the key questions in robot vision is what determines how bright the image of some
surface on the object will be. It involves radiometry (measurement of the flow and transfer
of radiant energy), general illumination models, and surfaces having both diffuse and specular
reflection components. Different points on the objects in front of the imaging system will
have different intensity values on the image, depending on the amount of incident radiance,
how they are illuminated, how they reflect light, how the reflected light is collected by a lens
system, and how the sensor camera responds to the incoming light.
Figure 4.20 shows the basic reflection phenomenon. Hence, proper illumination of the scene
is important. It also affects the complexity level of the image-processing algorithm required.
The lighting techniques must avoid reflections and shadows unless they are designed for the
purpose of image processing. The main task of lighting is to create contrast between the
object features to be detected. Typical lighting techniques are explained below.
Direct Incident Lighting This simple lighting technique can be used for non-reflective materials which strongly
scatter the light due to their matte, porous, fibrous, non-glossy surface. Ideally, a ring light is chosen for smaller
illuminated fields that can be arranged around the lens. Shadows are avoided to the greatest extent due to the
absolutely vertical illumination. Halogen lamps and large fluorescence illumination can be used too.
Diffuse Incident Lighting
Diffused light is necessary for many applications, e.g., to test reflective, polished, glossy, or metallic objects. It
is particularly difficult if these surfaces are not perfectly flat but individually shaped, wrinkled, curved,
or cylindrical. To create diffused lighting, one may use incident light with diffusers; coaxial illumination, i.e.,
light coupled into the axis of the camera by means of a beam splitter or half-mirror; or dome-shaped
illumination, where light is diffused by means of a diffuse-coated dome in which the camera looks through an
opening in the dome onto the workpiece.
Backlighting
Transmitted-light illumination is the first choice of lighting when it is necessary to measure parts as accurately
as possible. The lighting is arranged on the opposite side of the camera, and the component itself is put in the
light beam.
Image Processing
Image-processing techniques are used to enhance, improve, or otherwise alter an image and to prepare it for
image analysis. Usually, during image processing, information is not extracted from the image. The intention is
to remove faults and trivial or unwanted information, and to improve the image. Image processing examines
the digitized data to locate and recognize an object within the image field. It is divided into several
sub-processes, which are discussed below:
Image Data Reduction Here, the objective is to reduce the volume of data. As a preliminary step in the data
analysis, schemes like digital conversion or windowing can be applied to reduce the data. While digital
conversion reduces the number of gray levels used by the vision system, windowing involves using only a
portion of the total image stored in the frame buffer for image processing and analysis. For example, in
windowing, to inspect a circuit board, a rectangular window is selected to surround the component of interest
and only pixels within that window are analyzed.
Histogram Analysis A histogram is a representation of the total number of pixels of an image at each gray level.
Histogram information is used in a number of different processes, including thresholding. For example,
histogram information can help in determining a cut-off point when an image is to be transformed into binary
values.
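A histogram of an 8-bit image is easy to compute with NumPy; the sketch below uses random data as a stand-in for a real image.

# Sketch: gray-level histogram of an 8-bit image; a valley between two
# peaks often suggests a good cut-off point for binarization.
import numpy as np

image = np.random.randint(0, 256, size=(64, 64))    # stand-in image
hist = np.bincount(image.ravel(), minlength=256)    # pixel count per level
print(hist.argmax())                                # most frequent gray level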
Thresholding It is the process of dividing an image into different portions or levels by picking a certain grayness
level as a threshold. Each pixel value is compared with the threshold and then assigned to one of the portions
or levels, depending on whether the pixel's grayness level is below the threshold ('off' or 0, or not belonging)
or above the threshold ('on' or 1, or belonging).
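In code, thresholding reduces to a single comparison per pixel, as this minimal NumPy sketch shows.

# Sketch: binarize an image; pixels at or above the threshold become 1
# ('on', belonging), the rest 0 ('off', not belonging).
import numpy as np

def threshold(image: np.ndarray, level: int) -> np.ndarray:
    return (image >= level).astype(np.uint8)

img = np.array([[12, 200], [90, 30]])
print(threshold(img, 100))  # [[0 1]
                            #  [0 0]]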
Masking A mask may be used for many different purposes, e.g., filtering operations, noise reduction, and
others. It is possible to create masks that behave like a low-pass filter such that the higher frequencies of an
image are attenuated while the lower frequencies are not changed very much, whereby the noise is reduced.
To illustrate, consider the portion of an imaginary image shown in Fig. 4.23(a), which has all pixels at a gray
value of 20 except one at a gray level of 100. The pixel with 100 may be considered noise. Applying the 3 x 3
averaging mask shown in Fig. 4.23(b) over the image replaces the noisy pixel by (100 + 8 x 20)/9 ≈ 29, so the
difference between the noisy pixel and the surrounding pixels, i.e., 100 vs. 20, becomes much smaller, namely,
29 vs. 20, thus reducing the noise. With this characteristic, the mask acts as a low-pass filter. Note that the
above reduction of noise has been achieved using what is referred to as neighborhood averaging, which causes
a reduction in the sharpness of the image as well.
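The arithmetic of the example can be reproduced directly; the sketch below applies the same 3 x 3 averaging over the patch described in the text.

# Sketch: neighborhood averaging on a 3x3 patch that is 20 everywhere
# except a noisy center pixel at 100.
import numpy as np

patch = np.full((3, 3), 20)
patch[1, 1] = 100                  # the noisy pixel
print(round(patch.mean()))         # (100 + 8*20)/9 = 28.9 -> about 29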
Edge Detection Edge detection is a general name for a class of computer programs and techniques that
operate on an image and result in a line drawing of the image. The lines represent changes in values such as
cross sections of planes, intersections of planes, textures, lines, etc. In many edge-detection techniques, the
resulting edges are not continuous. However, in many applications, continuous edges are preferred, which can
be obtained using the Hough transform. It is a technique used to determine the geometric relationship
between different pixels on a line, including the slope of the line. Consider a straight line in the xy-plane, as
shown in Fig. 4.24, which is expressed as

y = mx + c (4.15)
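A compact way to see the Hough transform in action is the (rho, theta) voting scheme sketched below; this is an illustrative toy implementation with assumed grid sizes, not the textbook's derivation for Eq. (4.15).

# Sketch: each edge point votes for all (rho, theta) lines through it;
# a peak in the accumulator corresponds to a line shared by many points.
import numpy as np

def hough_lines(points, n_theta=180, n_rho=200, rho_max=100.0):
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)    # one rho per theta
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        ok = (idx >= 0) & (idx < n_rho)
        acc[idx[ok], np.arange(n_theta)[ok]] += 1
    return acc, thetas

# Collinear points on y = x all vote for the same accumulator cell:
pts = [(i, i) for i in range(20)]
acc, thetas = hough_lines(pts)
r, t = np.unravel_index(acc.argmax(), acc.shape)
print(acc.max(), np.degrees(thetas[t]))  # 20 votes at theta = 135 degrees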
Segmentation Segmentation is a generic name for a number of different techniques that divide the image into
segments of its constituents. The purpose of segmentation is to separate the information contained in the
image into smaller entities that can be used for other purposes. Segmentation includes edge detection, as
mentioned above, region growing and splitting, and others. While region growing works on similar attributes,
such as gray-level ranges or other similarities, and then tries to relate the regions by their average similarities,
region splitting is carried out based on thresholding, in which an image is split into closed areas of
neighborhood pixels by comparing them with a thresholding value or range.
Morphology Operations Morphology operations are a family of operations which are applied to the shape of
subjects in an image. They include many different operations, both for binary and gray images, such as
thickening, dilation, erosion, skeletonization, opening, closing, and filling. These operations are performed on
an image in order to aid in its analysis, as well as to reduce the 'extra' information that may be present in the
image. For example, Fig. 4.25(a) shows an object which, after skeletonization, is shown in Fig. 4.25(b).
Image Analysis Image analysis is a collection of operations and techniques that are used to extract information
from images. Among these are feature extraction; object recognition; analysis of position, size, and
orientation; extraction of depth information, etc. Some techniques can be used for multiple purposes. For
example, moment equations may be used for object recognition, as well as to calculate the position and
orientation of an object. It is assumed that image processing has already been performed on the image and
the result is available for image analysis. Some of the image-analysis techniques are explained below.
Feature Extraction Objects in an image may be recognized by features that uniquely characterize them.
These include, but are not limited to, gray-level histograms; morphological features such as perimeter, area,
diameter, number of holes, etc.; eccentricity; cord length; and moments. As an example, the perimeter of an
object may be found by first applying an edge-detection routine and then counting the number of pixels on the
perimeter. Similarly, the area can be calculated by region-growing techniques, whereas the diameter of a
noncircular object is obtained as the maximum distance between any two points on any line that crosses the
identified area of the object. The thinness of an object can be calculated using either of two ratios based on
these measured features.
Object Recognition The next step in image analysis is to identify the object that the image represents based on
the extracted features. The recognition algorithm should be powerful enough to uniquely identify the object.
Typical techniques used in industry are template matching and structural techniques.
1. Low-level Vision The sequence of steps from image formation to image acquisition, etc., described
above, along with the extraction of certain physical properties of the visible environment, such as
depth, three-dimensional shape, object boundaries, or surface-material properties, can be classified
as a process of low-level vision.
The activity in low-level vision is to process images for feature extraction (edge, corner, or optical flow).
Operations are carried out on the pixels in the image to extract the above properties with respect to intensity
or depth at each point in the image. One may, for example, be interested in extracting uniform regions, where
the gradient of the pixels remains constant, or first-order changes in gradient, which would correspond to
straight lines, or second-order changes, which could be used to extract surface properties such as peaks, pits,
ridges, etc. A number of characteristics that are typically associated with low-level vision processes are as
follows:
• They are spatially uniform and parallel, i.e., with allowance for the decrease in resolution from the
center of the visual field outwards, a similar process is applied simultaneously across the visual field.
For example, the processing involved in edge detection, motion, or stereo vision often proceeds in
parallel across the visual field, or a large part of it.
• Low-level visual processes are also considered 'bottom-up' in nature. This means that they are
determined by the data, i.e., data driven, and are relatively independent of the task at hand or
knowledge associated with specific objects. As far as edge detection is concerned, it will be
performed in the same manner for images of different objects, with no regard to whether the task
has to do with moving around, looking for a misplaced object, or enjoying the landscape.
2. Intermediate-level Vision In this level, objects are recognized and 3D scenes are interpreted using the
features obtained from low-level vision. Intermediate-level processing is fundamentally concerned with
grouping entities together. The simplest case is when one groups pixels into lines. One can then express the
line in a functional form. Similarly, if the output of the low-level information is a depth map, one may further
need to distinguish object boundaries or other characteristics. Even in the simple case where one is trying to
extract a single sphere, it is not an easy process to go from a surface-depth representation to a center-and-
radius representation. In contrast to higher-level vision, the process here does not depend on knowledge
about specific objects.
3. High-level Vision High-level vision, which is equivalent to image understanding, is concerned mainly with the
interpretation of the scene in terms of the objects in it, and is usually based on knowledge of specific objects
and relationships. It is concerned primarily with the interpretation and use of information in the image rather
than the direct recovery of physical properties. In high-level vision, interpretation of a scene goes beyond the
tasks of line extraction and grouping. It further requires decisions to be made about types of boundaries, such
as which are occluding, and what information is hidden from the user. Further grouping is essential at this
stage, since one may still need to be able to decide which lines group together to form an object. To do this, it
is necessary to further distinguish lines which are part of the object structure from those which are part of a
surface texture or caused by shadows. High-level systems are, therefore, object oriented and sometimes called
'top-down'. High-level visual processes are applied to a selected portion of the image, rather than uniformly
across the entire image, as done in low- and intermediate-level vision. They almost always require some form
of knowledge about the objects of the scene to be included.
Difficulties in Vision
• A vision system cannot uniquely represent or process all available data because of the computational
problems, memory, and processing-time requirements imposed on the computer. Therefore, the
system must compromise.
• Other problems include variation of light, part size, part placement, and limitations in the dynamic
ranges available in typical vision sensors.
• A vision system requires specialized hardware and software.
• It is possible to purchase just the hardware with little or no vision application programming. In fact, a
few third-party programs are available. A hardware-only approach is less expensive and can be more
flexible for handling unusual vision requirements. But, since this approach requires image-processing
expertise, it is only of interest to users who wish to retain the responsibility of image interpretation.
It is usual practice to obtain the hardware and application software together from the supplier.
However, the user might still need custom programming for an application. Major vision-system
suppliers specialise in providing software for only a few application areas.
• Every vision system requires a sensor to convert the visual image into an electronic signal. Several
types of video sensors are used, including vidicon cameras, vacuum-tube devices, and solid-state
sensors. Many of these vision systems were originally designed for other applications, such as
television, so the signal must be processed to extract the visual image and remove synchronization
information before the signal is sent to the computer for further processing.
• The computer then treats this digital signal as an array of pixels and processes this data to extract
the desired information. Image processing can be very time consuming. For a typical sensor of
200,000 or more pixels, a vision system can take many seconds, even minutes, to analyze the
complete scene and determine the action to be taken.
• The number of bits to be processed is quite large; for example, a system with a 512 × 512 pixel array
and an 8-bit intensity per pixel yields over two million bits to be processed. If a continuous image at a
30 frames-per-second rate were being received, data bytes would be received at roughly an 8-MHz
rate.
• Few computers can accept inputs at these data rates, and, in any case, there would be no time left to
process the data. When higher-resolution systems, color systems, or multiple-camera systems are
considered, data-handling requirements become astronomical.
Remedies
Several methods can be used to reduce the amount of data handled and, therefore, the processing time. They
are explained as follows:
• One approach is binary vision, which is used when only black-and-white information is processed
(intensity variations and shades of gray are ignored). In binary vision, a picture is converted into a
binary image by thresholding, as illustrated in Fig. 4.29. In thresholding, a brightness level is selected.
All data with intensities equal to or higher than this value are considered white, and all other levels
are considered black.
• Another method of shortening the processing time is to control object placement so that objects of
interest cannot overlap in the image. Complicated algorithms to separate images are then not
necessary, and the image-processing time is reduced.
• A third approach reduces data handling by processing only a small window of the actual data; that is,
the object is located in a predefined field of view. For example, if the robot is looking for a mark on a
printed circuit board, the board can be held in such a way that the mark is always in the upper right
corner.
• A fourth approach takes a statistical sample of the data and makes decisions on this data sample.
Unfortunately, all of these approaches ignore some of the available data and, in effect, produce a less robust
system. Processing time is saved, but some types of complex objects cannot be recognized.
Signal Conditioning
The basic information or data generated by the transducers (or sensors) generally requires 'conditioning' or
'processing' of one sort or another before it is presented to the observer as an indication or a record, or used
by a robot controller for further action.
22. Explain the role of Amplifiers and Filters with neat diagrams
Since the electrical signals produced by most transducers of a sensor are at a low voltage, they generally
require amplification before they are suitably recognized by a monitoring system like a data processor,
controller, or data logger. The use of operational amplifiers with sensors is explained below. The
operational amplifier (op-amp) is the most widely utilized analog electronic sub-assembly. It is the basis of
instrumentation amplifiers, filters, and countless pieces of analog and digital data-processing equipment.
An op-amp could be manufactured in discrete-element form using, say, ten bipolar junction transistors and
as many discrete resistors, or alternatively (and preferably) in monolithic form as an IC (Integrated Circuit)
chip that may be equivalent to over 100 discrete elements. In any form, the device has an input impedance Zi,
an output impedance Zo, and a gain K, as indicated in Fig. 4.30(a). A common 8-pin dual in-line package (DIP)
or V-package is shown in Fig. 4.30(b).
From Fig. 4.30(a), the open-loop condition yields

vo = K vi (4.22)

where the input voltage vi is the differential input voltage, defined as the algebraic difference between the
voltages at the +ve and -ve leads. Thus,

vi = vip - vin (4.23)
The voltages at the two input leads are nearly equal. If a large voltage differential vi (say, 10 V) is applied at the
input, then according to Eq. (4.22) the output voltage should be extremely high. This never happens in practice,
because the device saturates quickly beyond moderate output voltages of the order of 15 V. vip is termed the
non-inverting input and vin the inverting input.
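The saturation behavior can be modeled with one clamped multiplication; the gain and rail values below are assumed illustrative numbers.

# Sketch: open-loop op-amp with output saturation near the supply rails.
def opamp_output(v_plus, v_minus, gain=1e5, v_sat=15.0):
    vo = gain * (v_plus - v_minus)       # Eqs. (4.22)-(4.23): vo = K*vi
    return max(-v_sat, min(v_sat, vo))   # clipping at about +/-15 V

print(opamp_output(10.0, 0.0))  # clips to 15.0 V, not 1,000,000 V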
Filters
A filter is a device that allows through only the desirable part of a signal, rejecting the unwanted part.
Unwanted signals can seriously degrade the performance of a robotic system. External disturbances, error
components in excitations, and noise generated internally within system components and instrumentation are
such spurious signals, which may be removed by a filter. There are four broad categories of filters, namely,
low-pass filters, high-pass filters, band-pass filters, and band-reject (or notch) filters, which are shown in Fig.
4.31.
An analog filter contains active components like transistors or op-amps. It is a physical dynamic system,
typically an electric circuit, whose dynamics will determine which (desired) signal components would be
allowed through and which (unwanted) signal components would be rejected. In a way, output of the dynamic
system is the filtered signal. An analog filter can be represented as a differential equation with respect to time.
Filtering can be achieved through digital filters as well which employ digital signal processing. Digital filtering is
an algorithm by which a sampled signal (or sequence of numbers), acting as an input, is transformed to a
second sequence of numbers called the output. It is a discrete-time system and can be represented as a
difference equation. Digital filtering has the usual digital benefits of accuracy, stability, and adjustability by
software (rather than hardware) changes.
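As an illustration of such a difference equation, the sketch below implements a first-order low-pass digital filter; the smoothing constant is an assumed example.

# Sketch: first-order low-pass filter, y[n] = a*y[n-1] + (1 - a)*x[n];
# 'a' between 0 and 1 sets how strongly high frequencies are attenuated.
def lowpass(samples, a=0.9):
    y, out = 0.0, []
    for x in samples:
        y = a * y + (1 - a) * x
        out.append(y)
    return out

print(lowpass([0, 10, 10, 10, 10]))  # rises smoothly toward 10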
23. Explain the role of Modulators and Demodulators with neat diagrams.
Signals are sometimes deliberately modified to maintain the accuracy during their transmission, conditioning,
and processing. In modulation, the data signal which is referred to as modulating signal is used to vary a
property, say, amplitude or frequency, of a carrier signal. Thus, the carrier signal is modulated by the data
signal.
After transmitting or conditioning, the data signal is recovered by removing the carrier signal from the
modulated signal. This step is known as demodulation. The carrier signal can be either sine or square wave,
and its frequency should be 5–10 times the highest frequency of the data signal.
Figure 4.32 shows some typical modulation techniques in which the amplitude of a high-frequency sinusoidal
carrier signal is varied according to the amplitude of the data signal. Figure 4.32(a) shows the data signal which
needs to be transmitted. The carrier signal's frequency is kept constant, while its amplitude is varied in
proportion to that of the data signal. This is called Amplitude Modulation (AM), and the resulting amplitude-
modulated signal is shown in Fig. 4.32(b). In Fig. 4.32(c), however, the frequency of the carrier signal is varied
in proportion to the amplitude of the data signal (modulating signal), while keeping the amplitude of the
carrier signal constant. This is called Frequency Modulation (FM). FM is less susceptible to noise than AM.
In Pulse-Width Modulation (PWM), the carrier signal is a pulse sequence, and its width is changed in
proportion to the amplitude of the data signal, while keeping the pulse spacing constant. This is shown in Fig.
4.32(d).
The PWM signals can be used directly in controlling a process without having to demodulate them. There also
exists Pulse-Frequency Modulation (PFM), where the carrier signal is a pulse sequence. Here, the frequency of
the pulses is changed in proportion to the value of the data signal, while keeping the pulse width constant. It
has the advantages of ordinary FM.
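The width-versus-spacing idea behind PWM can be sketched in a few lines; pwm_widths is a hypothetical helper and the data values are assumed.

# Sketch: pulse spacing stays fixed while each pulse's width follows the
# instantaneous data amplitude (normalized to 0..1 here).
def pwm_widths(data, period=1.0):
    """Return (start_time, width) pairs, one pulse per sample."""
    return [(i * period, amp * period) for i, amp in enumerate(data)]

print(pwm_widths([0.2, 0.8, 0.5]))
# [(0.0, 0.2), (1.0, 0.8), (2.0, 0.5)] -- constant spacing, varying width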
Demodulation or detection is the process of extracting the original data signal from a modulated signal. A
simple and straightforward method of demodulation is detection of the envelope of the modulated signal.
Analog-to-Digital Converter (ADC) An analog-to-digital converter, or ADC, converts an analog signal into
digital form, according to an appropriate code, before the same is used by a digital processor or a computer.
The process of analog-to-digital conversion is more complex and time consuming than digital-to-analog
conversion. ADCs are usually more costly, and their conversion rate is slower than that of DACs. Several types
of ADCs are commercially available.
Note that the most fundamental property of any DAC or ADC is the number of bits for which it is designed,
since this is a basic limit on resolution. Units of 8 to 12 bits are most common, even though higher-bit units are
also available. Both the DAC and ADC are elements of a typical input/output board (or I/O board, or data-
acquisition and control card, i.e., DAC or DAQ), and are usually situated on the same digital interface board.
Bridge Circuits Various bridge circuits are employed widely for the measurement of resistance, capacitance,
and inductance, as many transducers convert physical variables into these quantities. Figure 4.33 shows a
purely resistive (Wheatstone) bridge in its simplest form.
The basic principle of the bridge may be applied in two different ways, namely, the null method and the
deflection method. If the resistances are adjusted so that the bridge is balanced, then there is no voltage across
AC, i.e., VAC = 0. This happens when R1/R4 = R2/R3. Now, if one of the resistors, say R1, changes its resistance,
there will be an unbalance in the bridge, and a voltage will appear across AC, causing a meter reading. This
meter reading is an indication of the change in R1 of the transducer element, and it can be utilized to compute
the change. This method of measurement is called the deflection method.
In the null method, one of the resistors is adjustable manually. Thus, if R1 changes, causing a meter deflection,
R2 can be adjusted manually till its effect just cancels that of R1 and the bridge is returned to its balanced
position. Here, the change in R1 is directly related to the change in R2 required to effect the balance. Note that
both the deflection and null methods require a calibration curve so that one knows the numerical value of R1
or R2 that has caused the imbalance or balance, respectively.
Note that measurements of rapid dynamic phenomena can be done using the deflection method.
Moreover, based on the alternating-current (ac) and direct-current (dc) excitations of the bridge, there are ac
bridges and dc bridges, respectively.
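For the deflection method described above, the bridge output follows from the two voltage dividers; the sketch below assumes the common arm labeling in which balance occurs at R1/R4 = R2/R3, matching the condition stated earlier.

# Sketch: VAC of a resistive bridge excited by Vs (deflection method).
def bridge_vac(vs, r1, r2, r3, r4):
    return vs * (r1 / (r1 + r4) - r2 / (r2 + r3))

print(bridge_vac(10.0, 100.0, 100.0, 100.0, 100.0))  # balanced -> 0.0 V
print(bridge_vac(10.0, 101.0, 100.0, 100.0, 100.0))  # ~0.025 V deflection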
25. Explain the working of Signal Analyzer
Modern signal analyzers employ digital techniques of signal analysis to extract useful information that is
carried by the signal. Digital Fourier analysis using the Fast Fourier Transform (FFT) is perhaps the single most
common procedure used in the vast majority of signal analyzers. Fourier analysis produces the frequency
spectrum of a time signal, which is explained here in brief. Any periodic signal f(t) can be decomposed into a
number of sines and cosines of different amplitudes, an and bn, and frequencies nωt, for n = 1, 2, …∞, which is
expressed as

f(t) = a0/2 + Σn [an cos(nωt) + bn sin(nωt)], n = 1, 2, …∞ (4.25)
If one adds sine and cosine functions together, the original signal can be reconstructed. Equation (4.25) is
called a Fourier series, and the collection of different frequencies present in the equation is called the
frequency spectrum or frequency content of the signal.
Even though the signal is in the amplitude-time domain, the frequency spectrum is in the amplitude-frequency
domain. For example, the function f(t) = sin(t) of Fig. 4.34(a) consists of only one frequency with constant
amplitude, so the plotted spectrum would be represented by a single line at the given frequency, as shown in
Fig. 4.34(b). From this plot of frequency and amplitude, represented by the arrow in Fig. 4.34(b), the same sine
function can be reconstructed. The plots in Fig. 4.35 are similar and represent partial sums of sine functions
with an increasing number of frequency components.
The frequencies are also plotted in the frequency-amplitude domain. Clearly, as the number of frequencies
contained in f(t) increases, the summation becomes closer to a square function. Theoretically, to reconstruct a
square wave from sine functions, an infinite number of sines must be added together. In practice, however,
some of the major frequencies within the spectrum will have larger amplitudes. These major frequencies, or
harmonics, are used in identifying and labeling a signal, including recognizing shapes, objects, etc.
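A short NumPy sketch shows the idea: the FFT of a sampled square wave is dominated by the odd harmonics, exactly as the Fourier series predicts. The sample rate and fundamental below are assumed values.

# Sketch: frequency spectrum of a square wave via the FFT.
import numpy as np

fs, f0 = 1000, 10                        # assumed sample rate, fundamental (Hz)
t = np.arange(0, 1, 1 / fs)
square = np.sign(np.sin(2 * np.pi * f0 * t))

spectrum = np.abs(np.fft.rfft(square)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
top3 = freqs[np.argsort(spectrum)[-3:]]
print(sorted(top3))                      # strongest lines near 10, 30, 50 Hz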
26. Explain how a user or a designer select appropriate sensors for a robotic
application.
In using sensors, one must first decide what the sensor is supposed to do and what result one expects. A
sensor detects the quantity to be measured (the measurand). The transducer converts the detected
measurand into a convenient form for subsequent use, e.g., for control or actuation. The transducer signal
may be filtered, amplified, and suitably modified using suitable devices. The selection of suitable sensors for
robotic applications relies heavily on their performance specifications.
The majority of manufacturers provide what are actually static parameters. However, dynamic parameters are
also important, though they are beyond the scope of the syllabus.
The following definitions will help a user or designer select appropriate sensors for a robotic application.
1. Range
Range or span is a measure of the difference between the minimum and maximum values of a sensor's input
or output (response) while maintaining a required level of output accuracy. For example, a strain gauge might
be able to measure values over the range from 0.1 to 10 Newtons.
2. Sensitivity
Sensitivity is defined as the ratio of the change of output to the change in input. As an example, if a movement
of 0.025 mm in a linear potentiometer causes an output-voltage change of 0.02 V, then the sensitivity is 0.8
volts per mm. The term is sometimes also used to indicate the smallest change in input that will be observable
as a change in output. Usually, the maximum sensitivity that provides a linear and accurate signal is desired.
3. Linearity
Perfect linearity would allow output versus input to be plotted as a straight line on graph paper. Linearity is a
measure of the constancy of the ratio of output to input. In the form of an equation, it is

y = mx (4.27)

where x is the input, y is the output, and m is a constant. If m is a variable, the relationship is not linear. For
example, m may be a function of x, such as m = a + bx, where the value of b would introduce a nonlinearity. A
measure of the nonlinearity could then be given by the value of b.
4. Response Time
Response time is the time required for a sensor to respond completely to a change in input. The response time
of a system with sensors is the combination of the responses of all individual components, including the
sensor. An important aspect in selecting an appropriate sensor is to match its time response to that of the
complete system. Associated definitions like rise time, peak time, settling time, etc., are used with regard to
the dynamic response of a sensor.
5. Bandwidth
It determines the maximum speed or frequency at which an instrument associated with a sensor, or otherwise,
is capable of operating. High bandwidth implies faster speed of response. Instrument bandwidth should be
several times greater than the maximum frequency of interest in the input signals.
6. Accuracy
Accuracy is a measure of the difference between the measured and actual values. An accuracy of ±0.025 mm
means that, under all circumstances considered, the measured value will be within 0.025 mm of the actual
value. In positioning a robot and its end-effector, verification of this level of accuracy would require careful
measurement of the position of the end-effector with respect to the base reference location with an overall
accuracy of 0.025 mm under all conditions of temperature, acceleration, velocity, and loading. Precision-
measuring equipment, carefully calibrated against secondary standards, would be necessary to verify this
accuracy. Accuracy describes 'closeness to true values.'
7. Repeatability and Precision
Repeatability is a measure of the difference in value between two successive measurements under the same
conditions, and is a far less stringent criterion than accuracy. As long as the forces, temperature, and other
parameters have not changed, one would expect the successive values to be the same, however poor the
accuracy is. An associated definition is precision, which means the 'closeness of agreement' between
independent measurements of a quantity under the same conditions, without any reference to the true value,
as done above. Note that the number of divisions on the scale of the measuring device generally affects the
consistency of repeated measurements and, therefore, the precision. In a way, precision describes
'repeatability.' Figure 4.36 illustrates the difference between accuracy and precision.
8. Resolution and Threshold
Resolution is a measure of the number of measurements within a range from minimum to maximum. It is also
used to indicate the value of the smallest increment of value that is observable, whereas threshold is a
particular case of resolution. Threshold is defined as the minimum value of input below which no output can
be detected.
9. Hysteresis
It is defined as the change in the input/output curve when the direction of motion changes, as indicated in Fig.
4.37. This behavior is common in loose components such as gears, which have backlash, and in magnetic
devices with ferromagnetic media, among others.
10. Type of Output
The output can be in the form of a mechanical movement, an electrical current or voltage, a pressure or liquid
level, a light intensity, or another form. To be useful, it may need to be converted to another form, as in the
LVDT (Linear Variable Differential Transformer) or strain gauges, which were discussed earlier. In addition to
the above characteristics, further considerations must also be made while selecting a sensor.