This ensures that the reader will be properly equipped to appreciate and critically appraise the various merits and characteristics of different instruments when faced with the task of choosing a suitable instrument.

It should be noted that, whilst measurement theory inevitably involves some mathematics, the mathematical content of the book has deliberately been kept to the minimum necessary for the reader to be able to design and build measurement systems that perform to a level commensurate with the needs of the automatic control scheme or other system that they support.

Where mathematical procedures are necessary, worked examples are provided as necessary throughout the book to illustrate the principles involved. Self-assessment questions are also provided in critical chapters to enable readers to test their level of understanding, with answers being provided in Appendix 4.

Part 1 is organized such that all of the elements in a typical measurement system are presented in a logical order, starting with the capture of a measurement signal by a sensor and then proceeding through the stages of signal processing, sensor output transducing, signal transmission and signal display or recording.

Ancillary issues, such as calibration and measurement system reliability, are also covered. Discussion starts with a review of the different classes of instrument and sensor available, and the sort of applications in which these different types are typically used. This opening discussion includes analysis of the static and dynamic characteristics of instruments and exploration of how these affect instrument usage. A comprehensive discussion of measurement system errors then follows, with appropriate procedures for quantifying and reducing errors being presented.

The importance of calibration procedures in all aspects of measurement systems, and particularly to satisfy the requirements of standards such as ISO 9000 and ISO 14000, is recognized by devoting a full chapter to the issues involved. This is followed by an analysis of measurement noise sources, and discussion on the various analogue and digital signal-processing procedures that are used to attenuate noise and improve the quality of signals.

After coverage of the range of electrical indicating and test instruments that are used to monitor electrical signals, the problems of signal transmission are considered, and various means of improving the quality of transmitted signals are presented. This is followed by an introduction to digital computation techniques, and then a description of their use within intelligent measurement devices. The methods used to combine a number of intelligent devices into a large measurement network, and the current status of development of digital fieldbuses, are also explained.

Then, the final element in a measurement system, of displaying, recording and presenting measurement data, is covered. To conclude Part 1, the issues of measurement system reliability, and the effect of unreliability on plant safety systems, are discussed. This discussion also includes the subject of software reliability, since computational elements are now embedded in many measurement systems.

Part 2 commences in the opening chapter with a review of the various technologies used in measurement sensors. The chapters that follow then provide comprehensive coverage of the main types of sensor and instrument that exist for measuring all the physical quantities that a practising engineer is likely to meet in normal situations.

However, whilst the coverage is as comprehensive as possible, the distinction is emphasized between (a) instruments that are current and in common use, (b) instruments that are current but not widely used except in special applications, for reasons of cost or limited capabilities, and (c) instruments that are largely obsolete as regards new industrial implementations, but are still encountered on older plant that was installed some years ago. As well as emphasizing this distinction, some guidance is given about how to go about choosing an instrument for a particular measurement application.

The material that follows is drawn from the text by Morris.

Introduction to measurement

Measurement techniques have been of immense importance ever since the start of human civilization, when measurements were first needed to regulate the transfer of goods in barter trade to ensure that exchanges were fair. The industrial revolution during the nineteenth century brought about a rapid development of new instruments and measurement techniques to satisfy the needs of industrialized production.

Since that time, there has been a large and rapid growth in new industrial technology. This has been particularly evident during the last part of the twentieth century, encouraged by developments in electronics in general and computers in particular. This, in turn, has required a parallel growth in new instruments and measurement techniques. The massive growth in the application of computers to industrial process control and monitoring tasks has spawned a parallel growth in the requirement for instruments to measure, record and control process variables.

As modern production techniques dictate working to tighter and tighter accuracy limits, and as economic forces limiting production costs become more severe, so the requirement for instruments to be both accurate and cheap becomes ever harder to satisfy. This latter problem is at the focal point of the research and development efforts of all instrument manufacturers. In the past few years, the most cost-effective means of improving instrument accuracy has been found in many cases to be the inclusion of digital computing power within instruments themselves.

The very first measurement units were those used in barter trade to quantify the amounts being exchanged and to establish clear rules about the relative values of different commodities. Such early systems of measurement were based on whatever was available as a measuring unit.

For purposes of measuring length, the human torso was a convenient tool, and gave us units of the hand, the foot and the cubit. Although generally adequate for barter trade systems, such measurement units are of course imprecise, varying as they do from one person to the next. Therefore, there has been a progressive movement towards measurement units that are defined much more accurately. An early example was the metre, defined as one ten-millionth of the distance from the North Pole to the equator along the meridian passing through Paris. A platinum bar made to this length was established as a standard of length in the early part of the nineteenth century.

This was superseded by a superior quality standard bar in 1889, manufactured from a platinum-iridium alloy. Since that time, technological research has enabled further improvements to be made in the standard used for defining length. Firstly, in 1960, the standard metre was redefined in terms of 1.65076373 x 10^6 wavelengths of the radiation emitted by krypton-86 atoms. More recently, in 1983, the metre was redefined again as the distance travelled by light in a vacuum in 1/299 792 458 of a second. In a similar fashion, standard units for the measurement of other physical quantities have been defined and progressively improved over the years.

The latest standards for defining the units used for measuring a range of physical variables are given in Table 1. The early establishment of standards for the measurement of physical quantities proceeded in several countries at broadly parallel times, and in consequence, several sets of units emerged for measuring the same physical variable. For instance, length can be measured in yards, metres, or several other units. Apart from the major units of length, subdivisions of standard units exist such as feet, inches, centimetres and millimetres, with a fixed relationship between each fundamental unit and its subdivisions.

Table 1. also lists derived units for quantities such as electric field strength, electric resistance, electric capacitance, electric inductance, electric conductance, resistivity, permittivity, permeability and current density. Yards, feet and inches belong to the Imperial System of units, which is characterized by having varying and cumbersome multiplication factors relating fundamental units to subdivisions, such as 1760 yards to the mile, 3 feet to the yard and 12 inches to the foot.

The metric system is an alternative set of units, which includes for instance the unit of the metre and its centimetre and millimetre subdivisions for measuring length. All multiples and subdivisions of basic metric units are related to the base by factors of ten and such units are therefore much easier to use than Imperial units. However, in the case of derived units such as velocity, the number of alternative ways in which these can be expressed in the metric system can lead to confusion.

In support of this effort, the SI system of units will be used exclusively in this book. However, it should be noted that the Imperial system is still widely used, particularly in America and Britain. The European Union has deferred planned legislation to ban the use of Imperial units in Europe, and the latest proposal is to introduce such legislation at a later date.

The full range of fundamental SI measuring units and the further set of units derived from them are given in Table 1.

Conversion tables relating common Imperial and metric units to their equivalent SI units can also be found in Appendix 1. Today, the techniques of measurement are of immense importance in most facets of human civilization. Present-day applications of measuring instruments can be classified into three major areas. The first of these is their use in regulating trade, applying instruments that measure physical quantities such as length, volume and mass in terms of standard units.
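Since the conversion tables of Appendix 1 are not reproduced here, the principle of Imperial-to-SI conversion can be sketched in a short program. The factors below are the standard exact definitions of these units, not values taken from Appendix 1, and Python is used purely for illustration.

```python
# Illustrative sketch of Imperial-to-SI conversion.
# The factors are the standard exact definitions of these units.

FACTORS_TO_SI = {
    "inch": 0.0254,        # metres per inch (exact)
    "foot": 0.3048,        # metres per foot (exact)
    "yard": 0.9144,        # metres per yard (exact)
    "mile": 1609.344,      # metres per mile (exact)
    "pound": 0.45359237,   # kilograms per pound (exact)
}

def to_si(value, unit):
    """Convert a value in the given Imperial unit to its SI equivalent."""
    return value * FACTORS_TO_SI[unit]

print(to_si(3, "yard"))   # 3 yards expressed in metres
```

Note how every metric subdivision relates to the base unit by a factor of ten, whereas each Imperial unit needs its own factor; this is exactly the convenience argument made above.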

The particular instruments and transducers employed in such applications are included in the general description of instruments presented in Part 2 of this book. The second application area of measuring instruments is in monitoring functions. These provide information that enables human beings to take some prescribed action accordingly. The gardener uses a thermometer to determine whether he should turn the heat on in his greenhouse or open the windows if it is too hot. Regular study of a barometer allows us to decide whether we should take our umbrellas if we are planning to go out for a few hours.

Whilst there are thus many uses of instrumentation in our normal domestic lives, the majority of monitoring functions exist to provide the information necessary to allow a human being to control some industrial operation or process. In a chemical process for instance, the progress of chemical reactions is indicated by the measurement of temperatures and pressures at various points, and such measurements allow the operator to take correct decisions regarding the electrical supply to heaters, cooling water flows, valve positions etc.

One other important use of monitoring instruments is in calibrating the instruments used in the automatic process control systems described below. Use as part of automatic feedback control systems forms the third application area of measurement systems. Figure 1. shows a simple temperature control system in which the temperature Ta of a room is maintained at a reference value Td. The value of the controlled variable Ta, as determined by a temperature-measuring device, is compared with the reference value Td, and the difference e is applied as an error signal to the heater.

The heater then modifies the room temperature until Ta = Td. The characteristics of the measuring instruments used in any feedback control system are of fundamental importance to the quality of control achieved. The accuracy and resolution with which an output variable of a process is controlled can never be better than the accuracy and resolution of the measuring instruments used.
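The feedback loop described above can be sketched in a few lines of code. The first-order room model and the gain, loss and ambient figures below are assumed illustration values, not taken from Figure 1.; the sketch simply shows the error signal e = Td - Ta driving a heater.

```python
# Minimal sketch (assumed room model) of the temperature control loop:
# the measured temperature Ta is compared with the reference Td, and the
# error e = Td - Ta drives the heater output.

def simulate(Td=20.0, Ta=10.0, gain=0.5, loss=0.1, ambient=10.0, steps=200):
    for _ in range(steps):
        e = Td - Ta                          # error signal from the comparator
        heat = max(0.0, gain * e)            # heater output (proportional control)
        Ta += heat - loss * (Ta - ambient)   # simple room heat balance
    return Ta

# Converges toward Td, leaving the small steady-state offset that is
# characteristic of proportional-only control.
print(round(simulate(), 2))
```

If the temperature sensor itself read 1 degree high, the loop would faithfully regulate the wrong value, which is precisely the point made above: control quality can never exceed measurement quality.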

This is a very important principle, but one that is often inadequately discussed in many texts on automatic control systems. Such texts explore the theoretical aspects of control system design in considerable depth, but fail to give sufficient emphasis to the fact that all gain and phase margin performance calculations etc. are only as good as the quality of the process measurements on which they depend.

A measuring system exists to provide information about the physical value of some variable being measured. In simple cases, the system can consist of only a single unit that gives an output reading or signal according to the magnitude of the unknown variable applied to it.

However, in more complex measurement situations, a measuring system consists of several separate elements as shown in Figure 1. These components might be contained within one or more boxes, and the boxes holding individual measurement elements might be either close together or physically separate. The term measuring instrument is commonly used to describe a measurement system, whether it contains only one or many elements, and this term will be widely used throughout this text. The first element in any measuring system is the primary sensor: this gives an output that is a function of the measurand the input applied to it.

For most but not all sensors, this function is at least approximately linear. Some examples of primary sensors are a liquid-in-glass thermometer, a thermocouple and a strain gauge. In the case of the mercury-in-glass thermometer, the output reading is given in terms of the level of the mercury, and so this particular primary sensor is also a complete measurement system in itself.

However, in general, the primary sensor is only part of a measurement system. The types of primary sensors available for measuring a wide range of physical quantities are presented in Part 2 of this book. Variable conversion elements are needed where the output variable of a primary transducer is in an inconvenient form and has to be converted to a more convenient form.

For instance, the displacement-measuring strain gauge has an output in the form of a varying resistance. The resistance change cannot be easily measured and so it is converted to a change in voltage by a bridge circuit, which is a typical example of a variable conversion element. In some cases, the primary sensor and variable conversion element are combined, and the combination is known as a transducer.

A very common type of signal processing element is the electronic amplifier, which amplifies the output of the primary transducer or variable conversion element, thus improving the sensitivity and resolution of measurement. This element of a measuring system is particularly important where the primary transducer has a low output. For example, thermocouples have a typical output of only a few millivolts.

Other types of signal processing element are those that filter out induced noise and remove mean levels etc. In some devices, signal processing is incorporated into a transducer, which is then known as a transmitter. Signal transmission is needed when the observation or application point of the output of a measurement system is some distance away from the site of the primary transducer.

The signal transmission element has traditionally consisted of single or multi-cored cable, which is often screened to minimize signal corruption by induced electrical noise. However, fibre-optic cables are being used in ever increasing numbers in modern installations, in part because of their low transmission loss and imperviousness to the effects of electrical and magnetic fields.

The final optional element in a measurement system is the point where the measured signal is utilized. In some cases, this element is omitted altogether because the measurement is used as part of an automatic control scheme, and the transmitted signal is fed directly into the control system.

In other cases, this element in the measurement system takes the form either of a signal presentation unit or of a signal-recording unit. These take many forms according to the requirements of the particular measurement application, and the range of possible units is discussed more fully in a later chapter.

The starting point in choosing the most suitable instrument to use for measurement of a particular quantity in a manufacturing plant or other system is the specification of the instrument characteristics required, especially parameters like the desired measurement accuracy, resolution, sensitivity and dynamic performance (see next chapter for definitions of these).

It is also essential to know the environmental conditions that the instrument will be subjected to, as some conditions will immediately either eliminate the possibility of using certain types of instrument or else will create a requirement for expensive protection of the instrument. It should also be noted that protection reduces the performance of some instruments, especially in terms of their dynamic characteristics for example, sheaths protecting thermocouples and resistance thermometers reduce their speed of response.

Provision of this type of information usually requires the expert knowledge of personnel who are intimately acquainted with the operation of the manufacturing plant or system in question. Then, a skilled instrument engineer, having knowledge of all the instruments that are available for measuring the quantity in question, will be able to evaluate the possible list of instruments in terms of their accuracy, cost and suitability for the environmental conditions, and thus choose the most suitable one.

As far as possible, measurement systems and instruments should be chosen that are as insensitive as possible to the operating environment, although this requirement is often difficult to meet because of cost and other performance considerations.

The extent to which the measured system will be disturbed during the measuring process is another important factor in instrument choice. For example, significant pressure loss can be caused to the measured system in some techniques of flow measurement. Published literature is of considerable help in the choice of a suitable instrument for a particular measurement situation. Many books are available that give valuable assistance in the necessary evaluation by providing lists and data about all the instruments available for measuring a range of physical quantities (e.g. Part 2 of this text). However, new techniques and instruments are being developed all the time, and therefore a good instrumentation engineer must keep abreast of the latest developments by reading the appropriate technical journals regularly. The instrument characteristics discussed in the next chapter are the features that form the technical basis for a comparison between the relative merits of different instruments. Generally, the better the characteristics, the higher the cost.

However, in comparing the cost and relative suitability of different instruments for a particular measurement situation, considerations of durability, maintainability and constancy of performance are also very important because the instrument chosen will often have to be capable of operating for long periods without performance degradation and a requirement for costly maintenance.

In consequence of this, the initial cost of an instrument often has a low weighting in the evaluation exercise. Cost is very strongly correlated with the performance of an instrument, as measured by its static characteristics. Increasing the accuracy or resolution of an instrument, for example, can only be done at a penalty of increasing its manufacturing cost. To select an instrument with characteristics superior to those required would only mean paying more than necessary for a level of performance greater than that needed.

As well as purchase cost, other important factors in the assessment exercise are instrument durability and maintenance requirements. The projected life of an instrument often depends on the conditions in which it will have to operate. Maintenance requirements must also be taken into account, as they have cost implications.

As a general rule, a good assessment criterion is obtained if the total purchase cost and estimated maintenance costs of an instrument over its life are divided by the period of its expected life. The figure obtained is thus a cost per year. However, this rule becomes modified where instruments are being installed on a process whose life is expected to be limited, perhaps in the manufacture of a particular model of car.
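The assessment rule just described amounts to a simple calculation, sketched below with hypothetical cost figures (none of the numbers are taken from the text).

```python
# Sketch of the cost-per-year assessment criterion: total purchase cost
# plus estimated lifetime maintenance cost, divided by the expected life
# in years. All figures are hypothetical illustration values.

def cost_per_year(purchase_cost, annual_maintenance, expected_life_years):
    total = purchase_cost + annual_maintenance * expected_life_years
    return total / expected_life_years

# Instrument A: cheap to buy but costly to maintain.
# Instrument B: more expensive to buy but cheaper to run.
a = cost_per_year(purchase_cost=1000, annual_maintenance=300, expected_life_years=10)
b = cost_per_year(purchase_cost=2500, annual_maintenance=100, expected_life_years=10)
print(a, b)  # prints 400.0 350.0 -> B is cheaper on a per-year basis
```

The example illustrates why initial purchase cost often carries a low weighting: the cheaper instrument here is the more expensive choice over its life.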

Then, the total costs can only be divided by the period of time that an instrument is expected to be used for, unless an alternative use for the instrument is envisaged at the end of this period. To summarize therefore, instrument choice is a compromise between performance characteristics, ruggedness and durability, maintenance requirements and purchase cost.

Instruments can be subdivided into separate classes according to several criteria. These subclassifications are useful in broadly establishing several attributes of particular instruments such as accuracy, cost, and general applicability to different applications.

Instruments are divided into active or passive ones according to whether the instrument output is entirely produced by the quantity being measured or whether the quantity being measured simply modulates the magnitude of some external power source. This is illustrated by examples. An example of a passive instrument is the pressure-measuring device shown in Figure 2.

The pressure of the fluid is translated into a movement of a pointer against a scale. The energy expended in moving the pointer is derived entirely from the change in pressure measured: there are no other energy inputs to the system. An example of an active instrument is a float-type petrol tank level indicator as sketched in Figure 2. Here, the change in petrol level moves a potentiometer arm, and the output signal consists of a proportion of the external voltage source applied across the two ends of the potentiometer.

The energy in the output signal comes from the external power source: the primary transducer float system is merely modulating the value of the voltage from this external power source. In active instruments, the external power source is usually in electrical form, but in some cases, it can be other forms of energy such as a pneumatic or hydraulic one.

One very important difference between active and passive instruments is the level of measurement resolution that can be obtained. With the simple pressure gauge shown, the amount of movement made by the pointer for a particular pressure change is closely defined by the nature of the instrument.

Whilst it is possible to increase measurement resolution by making the pointer longer, such that the pointer tip moves through a longer arc, the scope for such improvement is clearly restricted by the practical limit of how long the pointer can conveniently be. In an active instrument, however, adjustment of the magnitude of the external energy input allows much greater control over measurement resolution. Whilst the scope for improving measurement resolution is much greater, it is not infinite, because of limitations placed on the magnitude of the external energy input in consideration of heating effects and for safety reasons.

In terms of cost, passive instruments are normally of a more simple construction than active ones and are therefore cheaper to manufacture. Therefore, choice between active and passive instruments for a particular application involves carefully balancing the measurement resolution requirements against cost.

An alternative type of pressure gauge is the deadweight gauge shown in Figure 2. Here, weights are put on top of the piston until the downward force balances the fluid pressure. Weights are added until the piston reaches a datum level, known as the null point. Pressure measurement is made in terms of the value of the weights needed to reach this null position. The accuracy of these two instruments depends on different things. For the first one it depends on the linearity and calibration of the spring, whilst for the second it relies on the calibration of the weights.

As calibration of weights is much easier than careful choice and calibration of a linear-characteristic spring, this means that the second type of instrument will normally be the more accurate. This is in accordance with the general rule that null-type instruments are more accurate than deflection types. In terms of usage, the deflection type instrument is clearly more convenient. It is far simpler to read the position of a pointer against a scale than to add and subtract weights until a null point is reached.
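The balance principle of the deadweight gauge can be expressed as a one-line calculation: at the null point, the downward force m*g of the weights equals the fluid pressure acting on the piston area A, so p = m*g/A. The piston area below is an assumed illustration value, not a figure from the text.

```python
# Sketch of the deadweight (null-type) gauge principle: weights are added
# until their downward force m*g balances the fluid pressure on the piston,
# so the pressure follows directly as p = m*g/A.

G = 9.81          # m/s^2, standard gravity
AREA = 4.0e-4     # m^2, assumed piston area for illustration

def pressure_from_weights(mass_kg):
    """Pressure (Pa) balanced by the given mass at the null point."""
    return mass_kg * G / AREA

print(pressure_from_weights(4.0))  # 4 kg of weights balances about 98 kPa
```

The accuracy of the reading rests entirely on the calibration of the weights and knowledge of the piston area, which is why the null-type gauge makes such a good calibration standard.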

A deflection-type instrument is therefore the one that would normally be used in the workplace. However, for calibration duties, the null-type instrument is preferable because of its superior accuracy. The extra effort required to use such an instrument is perfectly acceptable in this case because of the infrequent nature of calibration operations.

An analogue instrument gives an output that varies continuously as the quantity being measured changes. The output can have an infinite number of values within the range that the instrument is designed to measure.

The deflection type of pressure gauge described earlier in this chapter (Figure 2.) is a good example of an analogue instrument. As the input value changes, the pointer moves with a smooth continuous motion. Whilst the pointer can therefore be in an infinite number of positions within its range of movement, the number of different positions that the eye can discriminate between is strictly limited, this discrimination being dependent upon how large the scale is and how finely it is divided. A digital instrument has an output that varies in discrete steps and so can only have a finite number of values.

The rev counter sketched in Figure 2. is an example of a digital instrument. A cam is attached to the revolving body whose motion is being measured, and on each revolution the cam opens and closes a switch. The switching operations are counted by an electronic counter. This system can only count whole revolutions and cannot discriminate any motion that is less than a full revolution. The distinction between analogue and digital instruments has become particularly important with the rapid growth in the application of microcomputers to automatic control systems.

Any digital computer system, of which the microcomputer is but one example, performs its computations in digital form. An instrument whose output is in digital form is therefore particularly advantageous in such applications, as it can be interfaced directly to the control computer. Analogue instruments, by contrast, must be interfaced to the computer by an analogue-to-digital (A/D) converter, which converts the analogue output signal into an equivalent digital quantity. This conversion has several disadvantages. Firstly, the A/D converter adds significant cost to the system. Secondly, a finite time is involved in the process of converting an analogue signal to a digital quantity, and this time can be critical in the control of fast processes where the accuracy of control depends on the speed of the controlling computer.
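A further consequence of conversion is that the digital representation has finite resolution, which can be illustrated with a simple quantizer sketch. The 8-bit converter spanning a 0 to 10 V range below is an assumed example, not a device described in the text.

```python
# Illustrative sketch of quantization in analogue-to-digital conversion:
# a converter with a finite number of steps cannot represent the analogue
# value exactly. An 8-bit converter over an assumed 0-10 V range is used.

def quantize(voltage, full_scale=10.0, bits=8):
    """Return the quantized value and the quantization error."""
    levels = 2 ** bits
    step = full_scale / (levels - 1)      # smallest representable change
    code = round(voltage / step)          # nearest digital code
    quantized = code * step
    return quantized, voltage - quantized

q, err = quantize(3.333)
print(round(q, 4), round(err, 6))
```

Whatever the input, the error can never exceed half a step, so the only way to reduce it is to use more bits, at extra cost, echoing the cost argument above.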

The class of indicating instruments normally includes all null-type instruments and most passive ones. Indicators can also be further divided into those that have an analogue output and those that have a digital display. A common analogue indicator is the liquid-in-glass thermometer. Another common indicating device, which exists in both analogue and digital forms, is the bathroom scale. The older mechanical form of this is an analogue type of instrument that gives an output consisting of a rotating pointer moving against a scale.

More recent electronic forms of bathroom scale have a digital output consisting of numbers presented on an electronic display. One major drawback with indicating devices is that human intervention is required to read and record a measurement. This process is particularly prone to error in the case of analogue output displays, although digital displays are not very prone to error unless the human reader is careless.

Instruments that have a signal-type output are commonly used as part of automatic control systems. In other circumstances, they can also be found in measurement systems where the output measurement signal is recorded in some way for later use. This subject is covered in later chapters. Usually, the measurement signal involved is an electrical voltage, but it can take other forms in some systems such as an electrical current, an optical signal or a pneumatic signal.

Smart devices are considered in detail in Chapter 9. In some applications, such as measuring room temperature for domestic comfort, a small measurement error is of little consequence. If we had to measure the temperature of certain chemical processes, however, even a small variation could have serious consequences, and a much more accurate instrument would be required. Accuracy of measurement is thus one consideration in the choice of instrument for a particular application. Other parameters, such as sensitivity, linearity and the reaction to ambient temperature changes, are further considerations.

These attributes are collectively known as the static characteristics of instruments, and are given in the data sheet for a particular instrument. It is important to note that the values quoted for instrument characteristics in such a data sheet only apply when the instrument is used under specified standard calibration conditions.

Due allowance must be made for variations in the characteristics when the instrument is used in other conditions. The various static characteristics are defined in the following paragraphs. The accuracy of an instrument is a measure of how close the output reading of the instrument is to the correct value. In practice, it is more usual to quote the inaccuracy figure rather than the accuracy figure for an instrument.

Inaccuracy is the extent to which a reading might be wrong, and is often quoted as a percentage of the full-scale (f.s.) reading of an instrument. For example, if a pressure gauge of range 0 to 10 bar has a quoted inaccuracy of ±1% f.s., then the maximum error to be expected in any reading is 0.1 bar. This means that when the instrument is reading 1.0 bar, the possible error is 10% of this value. For this reason, it is an important system design rule that instruments are chosen such that their range is appropriate to the spread of values being measured, in order that the best possible accuracy is maintained in instrument readings.

Thus, if we were measuring pressures with expected values between 0 and 1 bar, we would not use an instrument with a range of 0 to 10 bar. The term measurement uncertainty is frequently used in place of inaccuracy.

Precision is a term that describes an instrument's degree of freedom from random errors. If a large number of readings are taken of the same quantity by a high precision instrument, then the spread of readings will be very small.
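The consequence of an inaccuracy figure quoted as a percentage of full-scale reading can be sketched in a few lines. The ±1% f.s. figure below is an assumed example: the absolute error is the same anywhere in the range, so the percentage error of a low reading is far worse.

```python
# Sketch of inaccuracy quoted as a percentage of full-scale (f.s.)
# reading: the implied absolute error is constant across the range,
# so low readings suffer a much larger percentage error.

def max_error(full_scale, quoted_pct):
    """Absolute maximum error implied by an inaccuracy quoted as % f.s."""
    return full_scale * quoted_pct / 100.0

err = max_error(full_scale=10.0, quoted_pct=1.0)  # 0-10 bar gauge, +/-1% f.s.
print(err)                # absolute error in bar, constant over the range
print(100 * err / 10.0)   # percentage error at a full-scale reading
print(100 * err / 1.0)    # percentage error when reading only 1 bar
```

This is exactly why a 0 to 10 bar gauge is a poor choice for pressures expected to lie between 0 and 1 bar.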

Precision is often, though incorrectly, confused with accuracy. High precision does not imply anything about measurement accuracy. A high precision instrument may have a low accuracy. Low accuracy measurements from a high precision instrument are normally caused by a bias in the measurements, which is removable by recalibration. The terms repeatability and reproducibility mean approximately the same but are applied in different contexts as given below.

Repeatability describes the closeness of output readings when the same input is applied repetitively over a short period of time, with the same measurement conditions, same instrument and observer, same location and same conditions of use maintained throughout. Reproducibility describes the closeness of output readings for the same input when there are changes in the method of measurement, observer, measuring instrument, location, conditions of use and time of measurement.

Both terms thus describe the spread of output readings for the same input. This spread is referred to as repeatability if the measurement conditions are constant and as reproducibility if the measurement conditions vary. The degree of repeatability or reproducibility in measurements from an instrument is an alternative way of expressing its precision. Figure 2. The figure shows the results of tests on three industrial robots that were programmed to place components at a particular point on a table.

The target point was at the centre of the concentric circles shown, and the black dots represent the points where each robot actually deposited components at each attempt. Both the accuracy and precision of Robot 1 are shown to be low in this trial.

Robot 2 consistently puts the component down at approximately the same place but this is the wrong point. Therefore, it has high precision but low accuracy. Finally, Robot 3 has both high precision and high accuracy, because it consistently places the component at the correct target position.
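The distinction can be made concrete with a small numerical sketch. The Python fragment below (the cluster positions and spreads are invented for illustration) scores a set of placements by accuracy, taken as the distance of the mean placement from the target, and by precision, taken as the spread of placements about their own mean, reproducing the behaviour of Robots 2 and 3:

```python
import random
import math

random.seed(1)

def accuracy_and_precision(points, target=(0.0, 0.0)):
    """Accuracy: distance of the mean placement from the target.
    Precision: RMS spread of placements about their own mean."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    bias = math.hypot(mx - target[0], my - target[1])
    spread = math.sqrt(sum((p[0] - mx)**2 + (p[1] - my)**2 for p in points) / n)
    return bias, spread

# Robot 2: tight cluster (high precision) centred away from the target (low accuracy)
robot2 = [(2.0 + random.gauss(0, 0.05), 1.0 + random.gauss(0, 0.05)) for _ in range(50)]
# Robot 3: tight cluster centred on the target (high precision, high accuracy)
robot3 = [(random.gauss(0, 0.05), random.gauss(0, 0.05)) for _ in range(50)]

bias2, spread2 = accuracy_and_precision(robot2)
bias3, spread3 = accuracy_and_precision(robot3)
```

Both robots show a small spread (high precision), but only Robot 3 shows a small bias (high accuracy).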

Tolerance, whilst it is not, strictly speaking, a static characteristic of measuring instruments, is mentioned here because the accuracy of some instruments is sometimes quoted as a tolerance figure. When used correctly, tolerance describes the maximum deviation of a manufactured component from some specified value. It is normally desirable that the output reading of an instrument is linearly proportional to the quantity being measured. The Xs marked on Figure 2. show a plot of typical output readings against the corresponding input values. Normal procedure is to draw a good-fit straight line through the Xs, as shown in Figure 2. Whilst this can often be done with reasonable accuracy by eye, it is always preferable to apply a mathematical least-squares line-fitting technique, as described in Chapter. The non-linearity is then defined as the maximum deviation of any of the output readings marked X from this straight line.

Non-linearity is usually expressed as a percentage of full-scale reading. Sensitivity of measurement is a measure of the change in instrument output that occurs when the quantity being measured changes by a given amount. Thus, sensitivity is the ratio:

sensitivity = (scale deflection) / (value of measurand producing deflection)

The sensitivity of measurement is therefore the slope of the straight line drawn on Figure 2. Example 2. Solution: If these values are plotted on a graph, the straight-line relationship between resistance change and temperature change is obvious, and the sensitivity is the slope of this line. If the input to an instrument is gradually increased from zero, the input will have to reach a certain minimum level before the change in the instrument output reading is of a large enough magnitude to be detectable.
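As a sketch of the least-squares procedure, the Python fragment below fits a straight line to a hypothetical resistance/temperature calibration table (the values are invented, not those of the worked example in the text). The slope of the fitted line is the sensitivity, and the largest residual, expressed as a percentage of full scale, is the non-linearity:

```python
# Least-squares line fit to calibration data: slope = sensitivity,
# maximum deviation from the line (as % of full scale) = non-linearity.
# The resistance/temperature values below are illustrative only.
temps = [0, 25, 50, 75, 100]                  # degC (input)
ohms = [100.0, 110.1, 119.9, 130.2, 140.0]    # ohm (output)

n = len(temps)
mean_t = sum(temps) / n
mean_r = sum(ohms) / n
slope = (sum((t - mean_t) * (r - mean_r) for t, r in zip(temps, ohms))
         / sum((t - mean_t)**2 for t in temps))     # sensitivity, ~0.4 ohm/degC
intercept = mean_r - slope * mean_t

full_scale = max(ohms) - min(ohms)
nonlinearity_pct = max(abs(r - (intercept + slope * t))
                       for t, r in zip(temps, ohms)) / full_scale * 100
```

For these figures the fitted sensitivity is close to 0.4 ohm per degree, with a non-linearity well under 1% of full scale.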

This minimum level of input is known as the threshold of the instrument. Manufacturers vary in the way that they specify threshold for instruments: some quote absolute values, whereas others quote threshold as a percentage of full-scale reading. When the input is increased steadily, there is also a lower limit on the magnitude of the change in input that produces an observable change in the output; this is known as the resolution of the instrument. Like threshold, resolution is sometimes specified as an absolute value and sometimes as a percentage of full-scale reading. One of the major factors influencing the resolution of an instrument is how finely its output scale is divided into subdivisions.

These standard ambient conditions are usually defined in the instrument specification. As variations occur in the ambient temperature or other environmental conditions, certain static instrument characteristics change. Such environmental changes affect instruments in two main ways, known as zero drift and sensitivity drift. Zero drift is sometimes known by the alternative term, bias. Zero drift or bias describes the effect where the zero reading of an instrument is modified by a change in ambient conditions.

This causes a constant error that exists over the full range of measurement of the instrument. The mechanical form of bathroom scale is a common example of an instrument that is prone to bias. It is quite usual to find a reading of perhaps 1 kg with no one standing on the scale. If someone of known weight 70 kg were to get on the scale, the reading would be 71 kg, and a person of any other known weight would similarly register 1 kg above their true weight.

Zero drift is normally removable by calibration. In the case of the bathroom scale just described, a thumbwheel is usually provided that can be turned until the reading is zero with the scales unloaded, thus removing the bias. Zero drift is also commonly found in instruments like voltmeters that are affected by ambient temperature changes.
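The removal of bias by calibration amounts to nothing more than subtracting the reading obtained at a known zero input, as this small Python sketch of the bathroom-scale example shows (the 1 kg offset follows the text; the function names are invented):

```python
# Zero drift (bias) adds a constant offset over the whole range; it can be
# removed by recording the reading at a known zero input and subtracting it.
bias = 1.0  # kg, the bathroom-scale offset described in the text

def raw_reading(true_weight):
    return true_weight + bias

zero_offset = raw_reading(0.0)   # reading taken with the scale unloaded

def corrected_reading(true_weight):
    return raw_reading(true_weight) - zero_offset
```

A 70 kg person then reads 71 kg before correction and 70 kg after it, which is exactly what the thumbwheel adjustment achieves mechanically.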

Zero drift is typically quantified by a coefficient stating the change in the zero reading per unit change in the environmental parameter; this is often called the zero drift coefficient related to temperature changes. If the characteristic of an instrument is sensitive to several environmental parameters, then it will have several zero drift coefficients, one for each environmental parameter. A typical change in the output characteristic of a pressure gauge subject to zero drift is shown in Figure 2.

Sensitivity drift (also known as scale factor drift) defines the amount by which an instrument's sensitivity of measurement varies as ambient conditions change. It is quantified by sensitivity drift coefficients that define how much drift there is for a unit change in each environmental parameter that the instrument characteristics are sensitive to. Many components within an instrument are affected by environmental fluctuations, such as temperature changes: for instance, the modulus of elasticity of a spring is temperature dependent.

If an instrument suffers both zero drift and sensitivity drift at the same time, then the typical modification of the output characteristic is shown in Figure 2., which plots deflection (mm) against load (kg).
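A simple model of an instrument subject to both effects treats the zero reading and the sensitivity as linear functions of the ambient change. The Python sketch below uses invented coefficient values purely to illustrate the shape of the modified characteristic:

```python
# Output of an instrument subject to both zero drift and sensitivity drift
# for a change dT in ambient temperature. All coefficients are hypothetical.
K = 2.0      # nominal sensitivity (output units per input unit)
z0 = 0.0     # nominal zero reading
Cz = 0.05    # zero drift coefficient (output units per degC)
Cs = 0.01    # sensitivity drift coefficient (per degC, per input unit)

def output(x, dT):
    # Zero drift shifts the whole line up; sensitivity drift steepens it.
    return z0 + Cz * dT + (K + Cs * dT) * x
```

With no ambient change the characteristic is the nominal straight line; a +10 degC shift both raises the zero reading and increases the slope, giving the combined modification of the characteristic described above.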

If the input measured quantity to the instrument is steadily increased from a negative value, the output reading varies in the manner shown in curve a. If the input variable is then steadily decreased, the output varies in the manner shown in curve b. The non-coincidence between these loading and unloading curves is known as hysteresis.

Two quantities are defined, maximum input hysteresis and maximum output hysteresis, as shown in Figure 2. These are normally expressed as a percentage of the full-scale input or output reading respectively. Hysteresis is most commonly found in instruments that contain springs, such as the passive pressure gauge Figure 2.

It is also evident when friction forces in a system have different magnitudes depending on the direction of movement, such as in the pendulum-scale mass-measuring device. Devices like the mechanical flyball a device for measuring rotational velocity suffer hysteresis from both of the above sources because they have friction in moving parts and also contain a spring.

Hysteresis can also occur in instruments that contain electrical windings formed round an iron core, due to magnetic hysteresis in the iron. This occurs in devices like the variable inductance displacement transducer, the LVDT and the rotary differential transformer.

Dead space is defined as the range of different input values over which there is no change in output value. Any instrument that exhibits hysteresis also displays dead space, as marked on Figure 2. Some instruments that do not suffer from any significant hysteresis can still exhibit a dead space in their output characteristics, however. Backlash in gears is a typical cause of dead space, and results in the sort of instrument output characteristic shown in Figure 2.
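The backlash mechanism can be sketched in a few lines of Python: the output is only dragged along once the input has crossed the dead band in its current direction of travel, so rising and falling sweeps trace different curves separated by the dead-space width (the class and its parameter values are illustrative):

```python
# Dead space from gear backlash: the output follows the input only after the
# input has traversed the backlash width in the current direction of travel.
class BacklashGear:
    def __init__(self, dead_space):
        self.half = dead_space / 2.0
        self.output = 0.0

    def drive(self, x):
        # The output moves only when the input leaves the dead band.
        if x > self.output + self.half:
            self.output = x - self.half
        elif x < self.output - self.half:
            self.output = x + self.half
        return self.output

g = BacklashGear(dead_space=0.2)
up = [g.drive(x / 10) for x in range(0, 11)]        # input rising 0.0 -> 1.0
down = [g.drive(x / 10) for x in range(10, -1, -1)]  # input falling 1.0 -> 0.0
```

At any intermediate input, the output on the falling sweep exceeds the output on the rising sweep by the full dead-space width, which is the characteristic shown in Figure 2.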

Backlash is commonly experienced in gearsets used to convert between translational and rotational motion, a common arrangement in instruments that measure translational velocity. The static characteristics of measuring instruments are concerned only with the steady-state reading that the instrument settles down to, such as the accuracy of the reading.

The dynamic characteristics of a measuring instrument describe its behaviour between the time a measured quantity changes value and the time when the instrument output attains a steady value in response. As with static characteristics, any values for dynamic characteristics quoted in instrument data sheets only apply when the instrument is used under specified environmental conditions.

Outside these calibration conditions, some variation in the dynamic parameters can be expected. The reader whose mathematical background is such that the above equation appears daunting should not worry unduly, as only certain special, simplified cases of it are applicable in normal measurement situations. The major point of importance is to have a practical appreciation of the manner in which various different types of instrument respond when the measurand applied to them varies.

If we limit consideration to that of step changes in the measured quantity only, then equation 2. Further simplification can be made by taking certain special cases of equation 2. Any instrument that behaves according to equation 2. Following a step change in the measured quantity at time t, the instrument output moves immediately to a new value at the same time instant t, as shown in Figure 2.

A potentiometer, which measures motion, is a good example of such an instrument, where the output voltage changes instantaneously as the slider is displaced along the potentiometer track. If only the first-order terms of equation 2. are retained, the instrument is known as a first order instrument. The liquid-in-glass thermometer (see Chapter 14) is a good example of a first order instrument.

It is well known that, if a thermometer at room temperature is plunged into boiling water, the output does not rise instantaneously to indicate 100°C, but instead approaches that reading in the gradual manner characteristic of a first order response. A large number of other instruments also belong to this first order class: this is of particular importance in control systems where it is necessary to take account of the time lag that occurs between a measured quantity changing in value and the measuring instrument indicating the change. Fortunately, the time constant of many first order instruments is small relative to the dynamics of the process being measured, and so no serious problems are created.
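A first order step response can be sketched directly from its analytic solution. In the Python fragment below, the 4-second time constant and the temperatures are assumed values for the thermometer example; after one time constant the reading has covered about 63.2% of the step:

```python
import math

# First-order instrument: tau * dy/dt + y = x. For a step from y0 to xf,
# the reading is y(t) = xf + (y0 - xf) * exp(-t / tau).
# A thermometer (tau assumed to be 4 s) plunged from 20 degC into
# boiling water at 100 degC:
tau, y0, xf = 4.0, 20.0, 100.0

def reading(t):
    return xf + (y0 - xf) * math.exp(-t / tau)

# Fraction of the step covered after exactly one time constant (~0.632):
frac = (reading(tau) - y0) / (xf - y0)
```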

The balloon is initially anchored to the ground with the instrument output readings in steady state. The altitude-measuring instrument is approximately zero order and the temperature transducer first order with a time constant of 15 seconds.

Show also in the table the error in each temperature reading. Solution: In order to answer this question, it is assumed that the solution of a first order differential equation has been presented to the reader in a mathematics course.

If the reader is not so equipped, the following solution will be difficult to follow. Let the true temperature at time t be Tx and the temperature reported by the balloon be Tr; the two are then related by the first order equation τ(dTr/dt) + Tr = Tx. For large values of t, the transducer reading lags the true temperature value by a period of time equal to the time constant of the transducer. In this time, the balloon travels a distance of 75 metres, and the temperature falls by the corresponding amount.
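The lag can also be checked from the analytic solution of the first order equation for a ramp input. In this Python sketch, the 15 s time constant and the 75 m travelled per time constant come from the text (implying a 5 m/s ascent); the lapse rate of 0.01°C per metre is an assumed figure for illustration:

```python
import math

# First-order transducer (tau = 15 s) carried on a balloon rising at 5 m/s
# through air whose temperature falls linearly with altitude.
# The lapse rate of 0.01 degC per metre is an assumed illustrative figure.
tau, climb, lapse = 15.0, 5.0, 0.01
rate = climb * lapse          # degC per second seen by the transducer

def true_temp(t, T0=10.0):
    return T0 - rate * t

def reading(t, T0=10.0):
    # Solution of tau * dTr/dt + Tr = Tx for a ramp input with Tr(0) = T0:
    return T0 - rate * (t - tau * (1.0 - math.exp(-t / tau)))

# For large t the reading exceeds the true temperature by rate * tau,
# i.e. the temperature change over 75 m of ascent.
err = reading(600.0) - true_temp(600.0)
```

With these assumed numbers the steady-state error is 0.75°C, the reading always corresponding to the (warmer) air 75 metres below the balloon.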

Thus, for large values of t, the output reading always differs from the true temperature by the amount that the temperature changes during one time constant of ascent. It is convenient to re-express the variables a0, a1, a2 and b0 in equation 2. This is the standard equation for a second order system, and any instrument whose response can be described by it is known as a second order instrument. Clearly, the extreme response curves A and E are grossly unsuitable for any measuring instrument.

If an instrument were only ever subjected to step inputs, then the design strategy would be to aim towards a damping ratio of 0.7. Unfortunately, most of the physical quantities that instruments are required to measure do not change in the mathematically convenient form of steps, but rather in the form of ramps of varying slopes. The foregoing discussion has described the static and dynamic characteristics of measuring instruments in some detail. However, an important qualification that has been omitted from this discussion is that an instrument only conforms to stated static and dynamic patterns of behaviour after it has been calibrated.
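The effect of damping ratio on a second order instrument's step response can be explored numerically. The sketch below integrates the standard second order equation with a simple semi-implicit Euler scheme (the natural frequency of 1 rad/s and the three damping ratios are chosen for illustration): light damping overshoots badly, heavy damping is sluggish, and a ratio near 0.7 settles quickly with little overshoot.

```python
# Second-order instrument y'' + 2*zeta*wn*y' + wn^2*y = wn^2*x for a unit
# step input x = 1, integrated numerically (wn assumed to be 1 rad/s).
def step_response(zeta, wn=1.0, t_end=40.0, dt=0.001):
    y, v = 0.0, 0.0
    overshoot = 0.0
    for _ in range(int(t_end / dt)):
        a = wn * wn * (1.0 - y) - 2.0 * zeta * wn * v   # acceleration
        v += a * dt                                      # semi-implicit Euler
        y += v * dt
        overshoot = max(overshoot, y - 1.0)
    return y, overshoot

final_light, os_light = step_response(0.2)   # light damping: large overshoot
final_opt, os_opt = step_response(0.7)       # ~0.7: small overshoot, fast settle
final_heavy, os_heavy = step_response(2.0)   # heavy damping: slow, no overshoot
```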

It can normally be assumed that a new instrument will have been calibrated when it is obtained from an instrument manufacturer, and will therefore initially behave according to the characteristics stated in the specifications. During use, however, its behaviour will gradually diverge from the stated specification for a variety of reasons. Such reasons include mechanical wear, and the effects of dirt, dust, fumes and chemicals in the operating environment.

The rate of divergence from standard specifications varies according to the type of instrument, the frequency of usage and the severity of the operating conditions. However, there will come a time, determined by practical knowledge, when the characteristics of the instrument will have drifted from the standard specification by an unacceptable amount.

When this situation is reached, it is necessary to recalibrate the instrument to the standard specifications. Such recalibration is performed by adjusting the instrument until its output, for a given input, agrees with that of a second instrument. This second instrument is one kept solely for calibration purposes whose specifications are accurately known.

Calibration procedures are discussed more fully in Chapter 4. Give examples of each and discuss the relative merits of these two classes of instruments. What are null types of instrument mainly used for and why? What factors can cause sensitivity drift and zero drift in instrument characteristics? Determine the new measurement sensitivity. The submarine is initially floating on the surface of the sea with the instrument output readings in steady state. The depthmeasuring instrument is approximately zero order and the temperature transducer first order with a time constant of 50 seconds.

Sketch the instrument response for the cases of heavy damping, critical damping and light damping, and state which of these is the usual target when a second order instrument is being designed. Errors in measurement systems can be divided into those that arise during the measurement process and those that arise due to later corruption of the measurement signal by induced noise during transfer of the signal from the point of measurement to some other point.

This chapter considers only the first of these, with discussion on induced noise being deferred to Chapter 5. It is extremely important in any measurement system to reduce errors to the minimum possible level and then to quantify the maximum remaining error that may exist in any instrument output reading.

However, in many cases, there is a further complication that the final output from a measurement system is calculated by combining together two or more measurements of separate physical variables. In this case, special consideration must also be given to determining how the calculated error levels in each separate measurement should be combined together to give the best estimate of the most likely error magnitude in the calculated output quantity. This subject is considered in section 3.
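One common way of combining separate error bounds is the root-sum-of-squares rule, sketched below in Python: because independent errors are unlikely to all reach their maxima simultaneously, the quadrature sum gives a more realistic estimate of the likely overall error than simple addition. This is standard practice rather than a rule stated here; the text treats the subject properly in section 3.

```python
import math

# Root-sum-of-squares combination of independent maximum error bounds.
def combined_error(*errors):
    return math.sqrt(sum(e * e for e in errors))

# e.g. two measurements with maximum errors of 1% and 2%:
e = combined_error(0.01, 0.02)   # ~0.0224, versus 0.03 for simple addition
```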

The starting point in the quest to reduce the incidence of errors arising during the measurement process is to carry out a detailed analysis of all error sources in the system. Each of these error sources can then be considered in turn, looking for ways of eliminating or at least reducing the magnitude of errors. Errors arising during the measurement process can be divided into two groups, known as systematic errors and random errors. Systematic errors describe errors in the output readings of a measurement system that are consistently on one side of the correct reading, i.e. either all the errors are positive or they are all negative.

Two major sources of systematic errors are system disturbance during measurement and the effect of environmental changes (modifying inputs), as discussed in sections 3. Other sources of systematic error include bent meter needles, the use of uncalibrated instruments, drift in instrument characteristics and poor cabling practices.

Even when systematic errors due to the above factors have been reduced or eliminated, some errors remain that are inherent in the manufacture of an instrument. These are quantified by the accuracy figure quoted in the published specifications contained in the instrument data sheet.

Random errors are perturbations of the measurement either side of the true value caused by random and unpredictable effects, such that positive errors and negative errors occur in approximately equal numbers for a series of measurements made of the same quantity. Such perturbations are mainly small, but large perturbations occur from time to time, again unpredictably.

Random errors often arise when measurements are taken by human observation of an analogue meter, especially where this involves interpolation between scale points. Electrical noise can also be a source of random errors. To a large extent, random errors can be overcome by taking the same measurement a number of times and extracting a value by averaging or other statistical techniques, as discussed in section 3. However, any quantification of the measurement value and statement of error bounds remains a statistical quantity.
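The benefit of averaging repeated readings can be seen in a short simulation. In the Python sketch below, the true value, the noise level and the seed are all invented; the mean of 100 noisy readings lands much closer to the true value than a typical single reading, since the standard error of the mean falls as 1/sqrt(N):

```python
import random

random.seed(42)

true_value = 50.0
noise_sd = 0.5   # random error, e.g. observer interpolation scatter

def measure():
    # One reading: the true value perturbed by zero-mean random noise.
    return true_value + random.gauss(0.0, noise_sd)

single = measure()
mean_of_100 = sum(measure() for _ in range(100)) / 100
```

Note that averaging helps only with random errors; a systematic bias would simply be averaged into the result unchanged.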

Finally, a word must be said about the distinction between systematic and random errors. Error sources in the measurement system must be examined carefully to determine what type of error is present, systematic or random, and to apply the appropriate treatment. In the case of manual data measurements, a human observer may make a different observation at each attempt, but it is often reasonable to assume that the errors are random and that the mean of these readings is likely to be close to the correct value.

However, this is only true as long as the human observer is not introducing a parallax-induced systematic error as well by persistently reading the position of a needle against the scale of an analogue meter from one side rather than from directly above.

In that case, correction would have to be made for this systematic error bias in the measurements before statistical techniques were applied to reduce the effect of random errors. Systematic errors in the output of many instruments are due to factors inherent in the manufacture of the instrument arising out of tolerances in the components of the instrument. They can also arise due to wear in instrument components over a period of time.

In other cases, systematic errors are introduced either by the effect of environmental disturbances or through the disturbance of the measured system by the act of measurement. These various sources of systematic error, and ways in which the magnitude of the errors can be reduced, are discussed below. If we were to start with a beaker of hot water and wished to measure its temperature with a mercury-in-glass thermometer, then we would take the.

In so doing, we would be introducing a relatively cold mass (the thermometer) into the hot water, and a heat transfer would take place between the water and the thermometer. This heat transfer would lower the temperature of the water.

Whilst the reduction in temperature in this case would be so small as to be undetectable by the limited measurement resolution of such a thermometer, the effect is finite and clearly establishes the principle that, in nearly all measurement situations, the process of measurement disturbs the system and alters the values of the physical quantities being measured. A particularly important example of this occurs with the orifice plate. This is placed into a fluid-carrying pipe to measure the flow rate, which is a function of the pressure that is measured either side of the orifice plate.

This measurement procedure causes a permanent pressure loss in the flowing fluid. The disturbance of the measured system can often be very significant. Thus, as a general rule, the process of measurement always disturbs the system being measured. The magnitude of the disturbance varies from one measurement system to the next and is affected particularly by the type of instrument used for measurement. Minimizing the disturbance of measured systems is an important consideration in instrument design.

However, an accurate understanding of the mechanisms of system disturbance is a prerequisite for this. For instance, consider the circuit shown in Figure 3. Here, Rm acts as a shunt resistance across R5, decreasing the resistance between points A and B and so disturbing the circuit. Therefore, the voltage Em measured by the meter is not the value of the voltage E0 that existed prior to measurement.

The extent of the disturbance can be assessed by calculating the opencircuit voltage E0 and comparing it with Em. Analysis proceeds by calculating the equivalent resistances of sections of the circuit and building these up until the required equivalent resistance of the whole of the circuit is obtained.

INVESTMENT EUROPE EVENTS SEPTEMBER

Moscow, Russian Federation Biotechnology. Russian Federation freelancer Animation. Russian Federation Pharmaceuticals. Russian Federation Senior Scientist at St. Moscow, Russian Federation Consumer Electronics.

Russian Federation Media Production. Russian Federation Law Practice. Russian Federation Design cooperative Furniture. Russian Federation Events Services. Moscow, Russian Federation Consumer Goods. Russian Federation Photography Professional Photography.

Russian Federation Farming Professional Farming. Russian Federation Building Materials. The static characteristics of measuring instruments are concerned only with the steadystate reading that the instrument settles down to, such as the accuracy of the reading etc.

The dynamic characteristics of a measuring instrument describe its behaviour between the time a measured quantity changes value and the time when the instrument output attains a steady value in response. As with static characteristics, any values for dynamic characteristics quoted in instrument data sheets only apply when the instrument is used under specified environmental conditions. Outside these calibration conditions, some variation in the dynamic parameters can be expected. The reader whose mathematical background is such that the above equation appears daunting should not worry unduly, as only certain special, simplified cases of it are applicable in normal measurement situations.

The major point of importance is to have a practical appreciation of the manner in which various different types of instrument respond when the measurand applied to them varies. If we limit consideration to that of step changes in the measured quantity only, then equation 2. Further simplification can be made by taking certain special cases of equation 2. Any instrument that behaves according to equation 2. Following a step change in the measured quantity at time t, the instrument output moves immediately to a new value at the same time instant t, as shown in Figure 2.

A potentiometer, which measures motion, is a good example of such an instrument, where the output voltage changes instantaneously as the slider is displaced along the potentiometer track. If equation 2. The liquid-in-glass thermometer see Chapter 14 is a good example of a first order instrument. It is well known that, if a thermometer at room temperature is plunged into boiling water, the output e. A large number of other instruments also belong to this first order class: this is of particular importance in control systems where it is necessary to take account of the time lag that occurs between a measured quantity changing in value and the measuring instrument indicating the change.

Fortunately, the time constant of many first order instruments is small relative to the dynamics of the process being measured, and so no serious problems are created. The balloon is initially anchored to the ground with the instrument output readings in steady state. The altitude-measuring instrument is approximately zero order and the temperature transducer first order with a time constant of 15 seconds. The Magnitude Measured quantity. Show also in the table the error in each temperature reading.

Solution In order to answer this question, it is assumed that the solution of a first order differential equation has been presented to the reader in a mathematics course. If the reader is not so equipped, the following solution will be difficult to follow. Let the temperature reported by the balloon at some general time t be Tr.

Then Tx is related to Tr by the relation: Tr D. For large values of t, the transducer reading lags the true temperature value by a period of time equal to the time constant of. In this time, the balloon travels a distance of 75 metres and the temperature falls by 0.

Thus for large values of t, the output reading is always 0. It is convenient to re-express the variables a0 , a1 , a2 and b0 in equation 2. This is the standard equation for a second order system and any instrument whose response can be described by it is known as a second order instrument. Clearly, the extreme response curves A and E are grossly unsuitable for any measuring instrument. If an instrument were to be only ever subjected to step inputs, then the design strategy would be to aim towards a damping ratio of 0.

Unfortunately, most of the physical quantities that instruments are required to measure do not change in the mathematically convenient form of steps, but rather in the form of ramps of varying slopes. The foregoing discussion has described the static and dynamic characteristics of measuring instruments in some detail.

However, an important qualification that has been omitted from this discussion is that an instrument only conforms to stated static and dynamic patterns of behaviour after it has been calibrated. It can normally be assumed that a new instrument will have been calibrated when it is obtained from an instrument manufacturer, and will therefore initially behave according to the characteristics stated in the specifications.

During use, however, its behaviour will gradually diverge from the stated specification for a variety of reasons. Such reasons include mechanical wear, and the effects of dirt, dust, fumes and chemicals in the operating environment. The rate of divergence from standard specifications varies according to the type of instrument, the frequency of usage and the severity of the operating conditions.

However, there will come a time, determined by practical knowledge, when the characteristics of the instrument will have drifted from the standard specification by an unacceptable amount. When this situation is reached, it is necessary to recalibrate the instrument to the standard specifications.

Such recalibration is performed by adjusting the instrument. This second instrument is one kept solely for calibration purposes whose specifications are accurately known. Calibration procedures are discussed more fully in Chapter 4. Give examples of each and discuss the relative merits of these two classes of instruments.

What are null types of instrument mainly used for and why? What factors can cause sensitivity drift and zero drift in instrument characteristics? Determine the new measurement sensitivity. The submarine is initially floating on the surface of the sea with the instrument output readings in steady state.

The depthmeasuring instrument is approximately zero order and the temperature transducer first order with a time constant of 50 seconds. Sketch the instrument response for the cases of heavy damping, critical damping and light damping, and state which of these is the usual target when a second order instrument is being designed.

Errors in measurement systems can be divided into those that arise during the measurement process and those that arise due to later corruption of the measurement signal by induced noise during transfer of the signal from the point of measurement to some other point. This chapter considers only the first of these, with discussion on induced noise being deferred to Chapter 5.

It is extremely important in any measurement system to reduce errors to the minimum possible level and then to quantify the maximum remaining error that may exist in any instrument output reading. However, in many cases, there is a further complication that the final output from a measurement system is calculated by combining together two or more measurements of separate physical variables.

In this case, special consideration must also be given to determining how the calculated error levels in each separate measurement should be combined together to give the best estimate of the most likely error magnitude in the calculated output quantity.

This subject is considered in section 3. The starting point in the quest to reduce the incidence of errors arising during the measurement process is to carry out a detailed analysis of all error sources in the system. Each of these error sources can then be considered in turn, looking for ways of eliminating or at least reducing the magnitude of errors.

Errors arising during the measurement process can be divided into two groups, known as systematic errors and random errors. Systematic errors describe errors in the output readings of a measurement system that are consistently on one side of the correct reading, i. Two major sources of systematic errors are system disturbance during measurement and the effect of environmental changes modifying inputs , as discussed in sections 3. Other sources of systematic error include bent meter needles, the use of uncalibrated instruments, drift in instrument characteristics and poor cabling practices.

Even when systematic errors due to the above factors have been reduced or eliminated, some errors remain that are inherent in the manufacture of an instrument. These are quantified by the accuracy figure quoted in the published specifications contained in the instrument data sheet. Random errors are perturbations of the measurement either side of the true value caused by random and unpredictable effects, such that positive errors and negative errors occur in approximately equal numbers for a series of measurements made of the same quantity.

Such perturbations are mainly small, but large perturbations occur from time to time, again unpredictably. Random errors often arise when measurements are taken by human observation of an analogue meter, especially where this involves interpolation between scale points. Electrical noise can also be a source of random errors. To a large extent, random errors can be overcome by taking the same measurement a number of times and extracting a value by averaging or other statistical techniques, as discussed in section 3.

However, any quantification of the measurement value and statement of error bounds remains a statistical quantity. Finally, a word must be said about the distinction between systematic and random errors. Error sources in the measurement system must be examined carefully to determine what type of error is present, systematic or random, and to apply the appropriate treatment.

In the case of manual data measurements, a human observer may make a different observation at each attempt, but it is often reasonable to assume that the errors are random and that the mean of these readings is likely to be close to the correct value.

However, this is only true as long as the human observer is not introducing a parallax-induced systematic error as well by persistently reading the position of a needle against the scale of an analogue meter from one side rather than from directly above. In that case, correction would have to be made for this systematic error bias in the measurements before statistical techniques were applied to reduce the effect of random errors.

Systematic errors in the output of many instruments are due to factors inherent in the manufacture of the instrument arising out of tolerances in the components of the instrument. They can also arise due to wear in instrument components over a period of time. In other cases, systematic errors are introduced either by the effect of environmental disturbances or through the disturbance of the measured system by the act of measurement.

These various sources of systematic error, and ways in which the magnitude of the errors can be reduced, are discussed below. If we were to start with a beaker of hot water and wished to measure its temperature with a mercury-in-glass thermometer, then we would take the.

In so doing, we would be introducing a relatively cold mass the thermometer into the hot water and a heat transfer would take place between the water and the thermometer. This heat transfer would lower the temperature of the water. Whilst the reduction in temperature in this case would be so small as to be undetectable by the limited measurement resolution of such a thermometer, the effect is finite and clearly establishes the principle that, in nearly all measurement situations, the process of measurement disturbs the system and alters the values of the physical quantities being measured.

A particularly important example of this occurs with the orifice plate. This is placed into a fluid-carrying pipe to measure the flow rate, which is a function of the pressure that is measured either side of the orifice plate. This measurement procedure causes a permanent pressure loss in the flowing fluid.

The disturbance of the measured system can often be very significant. Thus, as a general rule, the process of measurement always disturbs the system being measured. The magnitude of the disturbance varies from one measurement system to the next and is affected particularly by the type of instrument used for measurement. Minimizing the disturbance of measured systems is an important consideration in instrument design.

However, an accurate understanding of the mechanisms of system disturbance is a prerequisite for this. For instance, consider the circuit shown in Figure 3. in which the voltage across R5 is measured by a voltmeter of internal resistance Rm. Here, Rm acts as a shunt resistance across R5, decreasing the resistance between points A and B and so disturbing the circuit.

Therefore, the voltage Em measured by the meter is not the value of the voltage E0 that existed prior to measurement. The extent of the disturbance can be assessed by calculating the open-circuit voltage E0 and comparing it with Em. Analysis proceeds by calculating the equivalent resistances of sections of the circuit and building these up until the required equivalent resistance of the whole of the circuit is obtained.

The equivalent circuit resistance RAB can thus be calculated. In the absence of the measuring instrument and its resistance Rm, the voltage across AB would be that of the equivalent circuit voltage source, E0. Note that we did not calculate the value of E0, since this is not required in quantifying the effect of Rm. Example 3. What is the measurement error caused by the resistance of the measuring instrument?
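A sketch of the loading calculation, using assumed illustrative values for E0, RAB and Rm rather than the component values of the original example:

```python
def measured_voltage(e0, r_ab, rm):
    """Voltage actually indicated once the meter loads the circuit."""
    return e0 * rm / (r_ab + rm)

e0 = 10.0        # assumed open-circuit (Thevenin) voltage, volts
r_ab = 2000.0    # assumed equivalent circuit resistance, ohms
rm = 10000.0     # assumed meter internal resistance, ohms

em = measured_voltage(e0, r_ab, rm)
error_pct = 100.0 * (e0 - em) / e0
print(f"Em = {em:.3f} V, loading error = {error_pct:.2f}%")  # about 8.33 V, 16.7%
```

The error shrinks as Rm is made large relative to RAB, which is why a high meter resistance is desirable.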

The measurement error can then be calculated from the loading expression derived above. At this point, it is interesting to note the constraints that exist when practical attempts are made to achieve a high internal resistance in the design of a moving-coil voltmeter. Such an instrument consists of a coil carrying a pointer mounted in a fixed magnetic field. As current flows through the coil, the interaction between the field generated and the fixed field causes the pointer that the coil carries to turn in proportion to the applied current (see Chapter 6 for further details).

The simplest way of increasing the input impedance (the resistance) of the meter is either to increase the number of turns in the coil or to construct the same number of coil turns with a higher-resistance material. However, either of these solutions decreases the current flowing in the coil, giving less magnetic torque and thus decreasing the measurement sensitivity of the instrument (i.e. less pointer deflection for a given applied voltage).

This problem can be overcome by changing the spring constant of the restraining springs of the instrument, such that less torque is required to turn the pointer by a given amount. However, this reduces the ruggedness of the instrument and also demands better pivot design to reduce friction. This highlights a very important but tiresome principle in instrument design: any attempt to improve the performance of an instrument in one respect generally decreases the performance in some other aspect.

This is an inescapable fact of life with passive instruments such as the type of voltmeter mentioned, and is often the reason for the use of alternative active instruments such as digital voltmeters, where the inclusion of auxiliary power greatly improves performance. Bridge circuits for measuring resistance values are a further example of the need for careful design of the measurement system. The impedance of the instrument measuring the bridge output voltage must be very large in comparison with the component resistances in the bridge circuit.

Otherwise, the measuring instrument will load the circuit and draw current from it. This is discussed more fully in Chapter 7. An environmental input is defined as an apparently real input to a measurement system that is actually caused by a change in the environmental conditions surrounding the measurement system.

The static and dynamic characteristics specified for measuring instruments are only valid for particular environmental conditions (e.g. of temperature and pressure). These specified conditions must be reproduced as closely as possible during calibration exercises because, away from the specified calibration conditions, the characteristics of measuring instruments vary to some extent and cause measurement errors.

The magnitude of this environment-induced variation is quantified by the two constants known as sensitivity drift and zero drift, both of which are generally included in the published specifications for an instrument. Such variations of environmental conditions away from the calibration conditions are sometimes described as modifying inputs to the measurement system because they modify the output of the system.

When such modifying inputs are present, it is often difficult to determine how much of the output change in a measurement system is due to a change in the measured variable and how much is due to a change in environmental conditions. This is illustrated by the following example.

Suppose we are given a small closed box and told that it may contain either a mouse or a rat, and that the weight of the empty box is known. If we put the box onto bathroom scales and observe a reading, we cannot immediately deduce which animal is inside, because part of the indicated weight may be due to an environmental input such as a temperature-induced bias on the scales. Thus, the magnitude of any environmental input must be measured before the value of the measured quantity (the real input) can be determined from the output reading of an instrument.

In any general measurement situation, it is very difficult to avoid environmental inputs, because it is either impractical or impossible to control the environmental conditions surrounding the measurement system. System designers are therefore charged with the task of either reducing the susceptibility of measuring instruments to environmental inputs or, alternatively, quantifying the effect of environmental inputs and correcting for them in the instrument output reading.

The techniques used to deal with environmental inputs and minimize their effect on the final output measurement follow a number of routes as discussed below. Recalibration often provides a full solution to this problem. For instance, in typical applications of a resistance thermometer, it is common to find that the thermometer is separated from other parts of the measurement system by a considerable distance.

Therefore, careful consideration needs to be given to the choice of connecting leads. Not only should they be of adequate cross-section so that their resistance is minimized, but they should be adequately screened if they are thought likely to be subject to electrical or magnetic fields that could otherwise cause induced noise. Where screening is thought essential, the routing of cables also needs careful planning. In one application, for example, rerouting the cables between the transducers and the control room reduced the magnitude of the induced noise by a factor of about ten.

The prerequisite for the reduction of systematic errors is a complete analysis of the measurement system that identifies all sources of error. Simple faults within a system, such as bent meter needles and poor cabling practices, can usually be readily and cheaply rectified once they have been identified.

However, other error sources require more detailed analysis and treatment. Various approaches to error reduction are considered below. For instance, in the design of strain gauges, the element should be constructed from a material whose resistance has a very low temperature coefficient (i.e. one whose resistance changes very little with temperature). However, errors due to the way in which an instrument is designed are not always easy to correct, and a choice often has to be made between the high cost of redesign and the alternative of accepting the reduced measurement accuracy if redesign is not undertaken.

One example of how the technique of introducing an opposing environmental input is applied is in the type of millivoltmeter shown in Figure 3. This consists of a coil suspended in a fixed magnetic field produced by a permanent magnet. When an unknown voltage is applied to the coil, the magnetic field due to the current interacts with the fixed field and causes the coil, and a pointer attached to it, to turn. If the coil resistance Rcoil is sensitive to temperature, then any environmental input to the system in the form of a temperature change will alter the value of the coil current for a given applied voltage and so alter the pointer output reading.

Compensation for this is made by introducing a compensating resistance Rcomp into the circuit, where Rcomp has a temperature coefficient that is equal in magnitude but opposite in sign to that of the coil. Thus, in response to an increase in temperature, Rcoil increases but Rcomp decreases, and so the total resistance remains approximately the same. Consider now a different system, in which the unknown voltage Ei is applied to a motor of torque constant Km, and the induced torque turns a pointer against the restraining action of a spring with spring constant Ks.
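The compensating-resistance arrangement described above can be checked numerically. The resistance values and temperature coefficients below are assumed illustrative figures:

```python
def resistance(r0, alpha, d_temp):
    # linear model: R = R0 * (1 + alpha * dT)
    return r0 * (1 + alpha * d_temp)

r0 = 100.0      # assumed nominal resistance of each element, ohms
alpha = 0.004   # assumed temperature coefficient of the coil, per deg C

for d_temp in (0.0, 10.0, 25.0):
    r_coil = resistance(r0, alpha, d_temp)
    r_comp = resistance(r0, -alpha, d_temp)   # equal magnitude, opposite sign
    total = r_coil + r_comp
    print(d_temp, total)   # the total stays approximately constant
```

As the temperature rises, the increase in Rcoil is cancelled by the decrease in Rcomp, so the current for a given applied voltage is essentially unchanged.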

The effect of environmental inputs on the motor constant and the spring constant is represented by Dm and Ds. In the presence of these environmental inputs, both Km and Ks change, and the relationship between X0 and Ei can be affected greatly. Therefore, it becomes difficult or impossible to calculate Ei from the measured value of X0. Consider now what happens if the system is converted into a high-gain, closed-loop one, as shown in Figure 3.

Assume also that the effect of environmental inputs on the values of Ka and Kf is represented by Da and Df. The feedback device feeds back a voltage E0 proportional to the pointer displacement X0. This is compared with the unknown voltage Ei by a comparator and the error is amplified. Writing down the equations of the system, we have E0 = Kf X0 and X0 = Ka Km (Ei − E0)/Ks, which combine to give X0 = Ka Km Ei/(Ks + Ka Km Kf).

Because Ka is very large (it is a high-gain amplifier), Kf Ka Km is much greater than Ks, and the expression reduces to X0 = Ei/Kf. The sensitivity of the gain constants Ka, Km and Ks to the environmental inputs Da, Dm and Ds has thereby been rendered irrelevant, and we only have to be concerned with one environmental input, Df. Conveniently, it is usually easy to design a feedback device that is insensitive to environmental inputs: this is much easier than trying to make a motor or spring insensitive.
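A numerical sketch of this insensitivity, assuming the closed-loop relation X0 = Ka Km Ei/(Ks + Ka Km Kf) and illustrative values for all constants:

```python
def pointer_displacement(ei, ka, km, ks, kf):
    # closed-loop relation: X0 = Ka*Km*Ei / (Ks + Ka*Km*Kf)
    return ka * km * ei / (ks + ka * km * kf)

ei, ka, kf = 1.0, 1e6, 0.5   # assumed values; Ka is deliberately very large

x_nominal = pointer_displacement(ei, ka, km=2.0, ks=10.0, kf=kf)
x_drifted = pointer_displacement(ei, ka, km=2.4, ks=13.0, kf=kf)  # Km, Ks drift 20-30%

# both stay very close to Ei/Kf = 2.0 despite the drift in Km and Ks
print(x_nominal, x_drifted)
```

Large drifts in the motor and spring constants barely move the output, exactly as the high-gain argument predicts.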

However, one potential problem that must be mentioned is that there is a possibility that high-gain feedback will cause instability in the system. Therefore, any application of this method must include careful stability analysis of the system. Instrument calibration is a very important consideration in measurement systems and calibration procedures are considered in detail in Chapter 4.

All instruments suffer drift in their characteristics, and the rate at which this happens depends on many factors, such as the environmental conditions in which instruments are used and the frequency of their use. Thus, errors due to instruments being out of calibration can usually be rectified by increasing the frequency of recalibration.

This is not necessarily an easy task, and requires all disturbances in the measurement system to be quantified. This procedure is carried out automatically by intelligent instruments. They have the ability to deal very effectively with systematic errors in measurement systems, and errors can be attenuated to very low levels in many cases. A more detailed analysis of intelligent instruments can be found in Chapter 9.

Once all practical steps have been taken to eliminate or reduce the magnitude of systematic errors, the final action required is to estimate the maximum remaining error that may exist in a measurement due to systematic errors. Unfortunately, it is not always possible to quantify exact values of a systematic error, particularly if measurements are subject to unpredictable environmental conditions.

Data sheets supplied by instrument manufacturers usually quantify systematic errors in this way, and such figures take account of all systematic errors that may be present in output readings from the instrument. Random errors in measurements are caused by unpredictable variations in the measurement system. They are usually observed as small perturbations of the measurement either side of the correct value, i.e. positive and negative errors occur in approximately equal numbers for a series of measurements made of the same quantity.

Therefore, random errors can largely be eliminated by calculating the average of a number of repeated measurements, provided that the measured quantity remains constant while the repeated measurements are being taken. This averaging process can be done automatically by intelligent instruments, as discussed in Chapter 9. The average can be expressed as either the mean or the median of the measurements; both terms are explained more fully in section 3. As the number of measurements increases, the difference between the mean and median values becomes very small.

This is valid for all data sets where the measurement errors are distributed equally about the zero error value, i.e. where positive errors are balanced by negative errors of similar magnitude. The median is an approximation to the mean that can be written down without having to sum the measurements. The median is the middle value when the measurements in the data set are written down in ascending order of magnitude. For an even number of measurements, the median value is midway between the two centre values, i.e. it is the average of those two values.
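A minimal sketch of the mean and median calculations; the readings below are illustrative values, not the measurement sets used in the text:

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    # odd count: the middle value; even count: midway between the two centre values
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

readings = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408]  # illustrative, mm
print(mean(readings), median(readings))
```

With 11 readings the median is simply the sixth value in sorted order, requiring no summation at all.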

Suppose that the length of a steel bar is measured by a number of different observers and a set of 11 measurements is recorded (units mm). We will call this measurement set A, and its mean and median can be calculated using the expressions above. Suppose now that the measurements are taken again using a better measuring rule, and with the observers taking more care, to produce measurement set B, for which the mean and median can again be calculated. Which of the two measurement sets, A and B, and the corresponding mean and median values, should we have most confidence in?

Intuitively, we can regard measurement set B as being more reliable since the measurements are much closer together. In set A, the spread between the smallest and largest value is 34, whilst in set B, the spread is only 6. Let us now see what happens if we increase the number of measurements by extending measurement set B to 23 measurements.

We will call this measurement set C, and its mean and median can be calculated as before.

Standard deviation and variance

Expressing the spread of measurements simply as the range between the largest and smallest values is not in fact a very good way of examining how the measurement values are distributed about the mean value.

A much better way of expressing the distribution is to calculate the variance or standard deviation of the measurements. For a finite set of n measurements, the variance is obtained by summing the squares of the deviations of the measurements from the mean and dividing by (n − 1); this differs from the mathematical definition of variance, which divides by n, because the mathematical definition applies to an infinite data set, whereas in the case of measurements we are concerned only with finite data sets. The measurements and deviations for sets B and C, using the means as calculated earlier, can be tabulated in this way, and the variance and standard deviation then follow.
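The (n − 1) divisor can be sketched as follows; again the readings are illustrative values rather than the book's data sets:

```python
import math

def variance(xs):
    # sample variance: divide by (n - 1) because the mean is itself
    # estimated from the same finite data set
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def std_dev(xs):
    return math.sqrt(variance(xs))

readings = [409, 406, 402, 407, 405, 404, 407, 404, 407, 407, 408]  # illustrative, mm
print(variance(readings), std_dev(readings))
```

The standard deviation, being in the same units as the measurements themselves, is usually the more convenient figure to quote.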

We have observed so far that random errors can be reduced by taking the average mean or median of a number of measurements. However, although the mean or median value is close to the true value, it would only become exactly equal to the true value if we could average an infinite number of measurements. As we can only make a finite number of measurements in a practical situation, the average value will still have some error.

This error can be quantified as the standard error of the mean, which will be discussed in detail a little later. However, before that, the subject of graphical analysis of random measurement errors needs to be covered. The simplest way of doing this is to draw a histogram, in which bands of equal width across the range of measurement values are defined and the number of measurements within each band is counted.

Figure 3. shows such a histogram. For instance, 11 measurements lie in one band and 5 measurements in the next, and the rest of the histogram is completed in a similar fashion. The scaling of the bands was deliberately chosen so that no measurements fell on the boundary between different bands and caused ambiguity about which band to put them in.
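The band-counting procedure can be sketched as below, with assumed readings and band edges chosen so that no reading falls on a boundary:

```python
def histogram(xs, band_edges):
    # count how many measurements fall into each half-open band
    counts = [0] * (len(band_edges) - 1)
    for x in xs:
        for i in range(len(counts)):
            if band_edges[i] <= x < band_edges[i + 1]:
                counts[i] += 1
                break
    return counts

readings = [401.5, 403.2, 404.1, 404.8, 405.5, 406.1, 406.4, 407.2, 407.9, 408.8]
edges = [401.0, 403.0, 405.0, 407.0, 409.0]   # no reading lies on an edge
print(histogram(readings, edges))
```

The resulting counts, plotted as bars of equal width, form the histogram described in the text.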

Such a histogram has the characteristic shape shown by truly random data, with symmetry about the mean value of the measurements. As it is the actual value of measurement error that is usually of most concern, it is often more useful to draw a histogram of the deviations of the measurements from the mean. The starting point for this is to calculate the deviation of each measurement away from the calculated mean value.

Then a histogram of deviations can be drawn by defining deviation bands of equal width and counting the number of deviation values in each band. This histogram has exactly the same shape as the histogram of the raw measurements except that the scaling of the horizontal axis has to be redefined in terms of the deviation values (these units are shown in brackets on Figure 3. ).

Let us now explore what happens to the histogram of deviations as the number of measurements increases. As the number of measurements increases, smaller bands can be defined for the histogram, which retains its basic shape but then consists of a larger number of smaller steps on each side of the peak.

In the limit, as the number of measurements approaches infinity, the histogram becomes a smooth curve known as a frequency distribution curve, as shown in Figure 3. The symmetry of these figures about the zero-deviation value reflects the random nature of the errors. Although the figures cannot easily be used to quantify the magnitude and distribution of the errors, very similar graphical techniques do achieve this. If the height of the frequency distribution curve is normalized such that the area under it is unity, the curve can be treated as a probability distribution; the condition for this is that the total area under the curve equals one.

The probability that the error in any one particular measurement lies between two levels D1 and D2 can be calculated by measuring the area under the curve contained between two vertical lines drawn through D1 and D2, as shown by the right-hand hatched area in Figure 3. Of particular importance for assessing the maximum error likely in any one measurement is the cumulative distribution function (c.d.f.). The c.d.f. is the area under the curve to the left of a vertical line drawn through a chosen deviation level, and it expresses the probability that the error is less than that level.

The deviation magnitude Dp corresponding with the peak of the frequency distribution curve (Figure 3. ) is the value of deviation that has the greatest probability. If the errors are entirely random in nature, then the value of Dp will equal zero. Any non-zero value of Dp indicates systematic errors in the data, in the form of a bias that is often removable by recalibration.

The shape of a Gaussian curve is such that the frequency of small deviations from the mean value is much greater than the frequency of large deviations. This coincides with the usual expectation in measurements subject to random errors that the number of measurements with a small error is much larger than the number of measurements with a large error. Alternative names for the Gaussian distribution are the normal distribution or bell-shaped distribution. If the standard deviation is used as a unit of error, the Gaussian curve can be used to determine the probability that the deviation in any particular measurement in a Gaussian data set is greater than a certain value.

This new form, shown in Figure 3. , expresses the Gaussian curve in terms of a standard variable z, the deviation measured in units of the standard deviation. Unfortunately, the Gaussian equation cannot be integrated analytically, and so the areas under the curve that represent probabilities have to be evaluated numerically. However, in practice, the tedium of numerical integration can be avoided when analysing data, because the integral of the standard form has been tabulated.

Standard Gaussian tables

A standard Gaussian table, such as that shown in Table 3. , tabulates the area F(z) under the curve to the left of each value of z, i.e. the proportion of data points with deviations below that value. Study of Table 3. shows that F(0) = 0.5; this must be so if the data only has random errors, since positive and negative deviations are then equally likely.
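The tabulated function F(z) can equally be computed from the error function, which is how the following sketch reproduces typical standard Gaussian table values:

```python
import math

def gaussian_cdf(z):
    # F(z) of the standard Gaussian, as tabulated in standard Gaussian tables
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# probability of a deviation lying outside +/- k standard deviations
for k in (1, 2, 3):
    p_outside = 2 * (1 - gaussian_cdf(k))
    print(k, round(p_outside, 4))
```

This reproduces the familiar result that roughly 32% of readings lie outside one standard deviation, 4.6% outside two, and 0.3% outside three.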

It will also be observed that Table 3. only gives F(z) for positive values of z; for negative values, the relationship F(−z) = 1 − F(z) can be used, because the curve is symmetrical. In the associated worked example, the required probability is represented by the sum of the two shaded areas in Figure 3. and is found using Table 3. This last step is valid because the frequency distribution curve is normalized such that the total area under it is unity. Similar analysis gives the probability of any data point lying outside particular deviation boundaries, which can be expressed in the form of a table.

Standard error of the mean

The foregoing analysis has examined the way in which measurements with random errors are distributed about the mean value.

However, we have already observed that some error remains between the mean value of a set of measurements and the true value, i.e. the mean of the infinite population from which the measurements are drawn. If several subsets are taken from an infinite data population, then, by the central limit theorem, the means of the subsets will be distributed about the mean of the infinite data set.

Estimation of random error in a single measurement

In many situations where measurements are subject to random errors, it is not practical to take repeated measurements and find the average value. Also, the averaging process becomes invalid if the measured quantity does not remain at a constant value, as is usually the case when process variables are being measured. Thus, if only one measurement can be made, some means of estimating the likely magnitude of error in it is required.

However, this only expresses the maximum likely deviation of the measurement from the calculated mean of the reference measurement set, which is not the true value as observed earlier. Thus the calculated value for the standard error of the mean has to be added to the likely maximum deviation value. If the instrument is then used to measure an unknown mass, the likely error in the single reading can be estimated in this way, and the mass value expressed together with its error bounds. Before leaving this matter, it must be emphasized that the maximum error specified for a measurement is only specified for the confidence limits defined.
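A sketch of the standard error calculation, using an assumed standard deviation and number of readings:

```python
import math

def standard_error_of_mean(sigma, n):
    # alpha = sigma / sqrt(n): how far the mean of n readings is
    # likely to lie from the true (infinite-population) mean
    return sigma / math.sqrt(n)

sigma, n = 2.0, 25    # assumed standard deviation and reading count
alpha = standard_error_of_mean(sigma, n)
print(alpha)          # 0.4
```

Quadrupling the number of readings only halves the standard error, so there are diminishing returns in taking ever more measurements.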

Distribution of manufacturing tolerances

Many aspects of manufacturing processes are subject to random variations caused by factors that are similar to those that cause random errors in measurements. In most cases, these random variations in manufacturing, which are known as tolerances, fit a Gaussian distribution, and the previous analysis of random measurement errors can be applied to analyse the distribution of these variations in manufacturing parameters.
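As a sketch of such an analysis, the following uses the mean gain of 20 and standard deviation of 2 quoted in the transistor example that follows; the gain limits themselves are assumed illustrative values:

```python
import math

def gaussian_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mean_gain, sigma = 20.0, 2.0   # figures quoted in the transistor example
lo, hi = 19.8, 20.2            # assumed illustrative gain limits

fraction = gaussian_cdf((hi - mean_gain) / sigma) - gaussian_cdf((lo - mean_gain) / sigma)
print(f"fraction of transistors with gain in [{lo}, {hi}]: {fraction:.4f}")
```

Multiplying this fraction by the batch size gives the expected number of transistors within the stated gain limits.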

The transistors have a mean current gain of 20 and a standard deviation of 2, and the task is to calculate the number of transistors with a current gain lying between two given limits. The proportion of such transistors is found by converting the gain limits into values of the standard variable z and reading the corresponding areas from a standard Gaussian table.

Goodness of fit to a Gaussian distribution

All of the analysis of random deviations presented so far only applies when the data being analysed belongs to a Gaussian distribution.

Hence, the degree to which a set of data fits a Gaussian distribution should always be tested before any analysis is carried out. The simplest test is to plot a histogram of the data; deciding whether or not the histogram confirms a Gaussian distribution is then a matter of judgement.

For a Gaussian distribution, there must always be approximate symmetry about the line through the centre of the histogram, the highest point of the histogram must always coincide with this line of symmetry, and the histogram must get progressively smaller either side of this point. However, because the histogram can only be drawn with a finite set of measurements, some deviation from the perfect shape of the histogram as described above is to be expected even if the data really is Gaussian.

A more formal approach is the normal probability plot, in which the cumulative probability of measurements lying below each value is plotted against the measurement value; if the data belongs to a Gaussian distribution, the plotted points lie approximately on a straight line. Considerable experience is needed to judge whether the line is straight enough to indicate a Gaussian distribution. This will be easier to understand if the data in measurement set C is used as an example. Using the same five ranges as used to draw the histogram, the following table is first drawn:

The normal probability plot drawn from the above table is shown in Figure 3. This is sufficiently straight to indicate that the data in measurement set C is Gaussian. More rigorous tests of goodness of fit are beyond the scope of this book, but full details can be found in Caulcott.

Rogue data points

In a set of measurements subject to random error, measurements with a very large error sometimes occur at random and unpredictable times, where the magnitude of the error is much larger than could reasonably be attributed to the expected random variations in measurement value.
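Such rogue points are typically detected by comparing each reading's deviation with the spread of the data. A minimal sketch, in which the rejection threshold k and the readings are assumed choices (with a single gross outlier in a small sample, a fairly tight threshold is needed because the outlier itself inflates the standard deviation):

```python
def discard_rogue_points(xs, k=2.0):
    # discard readings whose deviation from the mean exceeds k standard deviations;
    # the threshold k is an assumed design choice, not a universal rule
    m = sum(xs) / len(xs)
    sd = (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5
    return [x for x in xs if abs(x - m) <= k * sd]

readings = [4.98, 5.01, 5.02, 4.99, 5.00, 9.87, 5.01]  # 9.87: e.g. a transient surge
print(discard_rogue_points(readings))
```

After the rogue value is removed, the mean and standard deviation of the remaining readings give a far better description of the measured quantity.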

Sources of such abnormal error include sudden transient voltage surges on the mains power supply and incorrect recording of data (e.g. transposing digits when writing down a reading). It is accepted practice in such cases to discard these rogue measurements.

Special case when the number of measurements is small

When the number of measurements of a quantity is particularly small and statistical analysis of the distribution of error values is required, problems can arise when using standard Gaussian tables in terms of z as defined in equation 3.

In response to this, an alternative distribution function called the Student-t distribution can be used, which gives a more accurate prediction of the error distribution when the number of samples is small. This is discussed more fully in Miller.

Errors in measurement systems often arise from two or more different sources, and these must be aggregated in the correct way in order to obtain a prediction of the total likely error in output readings from the measurement system.

Two different forms of aggregation are required. Firstly, a single measurement component may have both systematic and random errors and, secondly, a measurement system may consist of several measurement components that each have separate errors. One way of expressing the combined error in a single component would be to sum the two separate components of error, i.e. to quote the total possible error as the sum of the maximum systematic error and the maximum random error.

A measurement system often consists of several separate components, each of which is subject to errors. Therefore, what remains to be investigated is how the errors associated with each component aggregate into a total error for the whole system. Appropriate techniques for the various situations that arise are covered below.

Error in a sum

If the two outputs y and z of separate measurement system components are to be added together, we can write the sum as S = y + z.

It should be noted that the error expressions above are only valid provided that the measurements are uncorrelated.

Error in a product

If the outputs y and z of two measurement system components are multiplied together, the product can be written as P = yz. Whilst a worst-case calculation expresses the maximum possible error in P, it tends to overestimate the likely maximum error, since it is very unlikely that the errors in y and z will both be at the maximum or minimum value at the same time.

A statistically better estimate of the likely maximum error e in the product P, provided that the measurements are uncorrelated, is given by Topping as e = √(ay² + bz²), where ay and bz are the fractional errors in y and z. Note that in the case of multiplicative errors, e is calculated in terms of the fractional errors in y and z, as opposed to the absolute error values used in calculating additive errors. For a quotient Q = y/z, the maximum possible value is Qmax = y(1 + ay)/[z(1 − bz)]. However, using the same argument as made above for the product of measurements, a statistically better estimate (see Topping) of the likely maximum error in the quotient Q, provided that the measurements are uncorrelated, is the same quadrature expression as for the product.
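The quadrature combination rules can be sketched as below; the error values are assumed illustrative figures:

```python
import math

# error in a sum or difference: combine the absolute maximum errors in quadrature
def error_in_sum(ay, bz):
    return math.sqrt(ay ** 2 + bz ** 2)

# error in a product or quotient: combine the fractional errors in quadrature
def fractional_error_in_product(fy, fz):
    return math.sqrt(fy ** 2 + fz ** 2)

print(error_in_sum(0.3, 0.4))                    # absolute errors, e.g. volts
print(fractional_error_in_product(0.02, 0.015))  # fractional errors, e.g. 2% and 1.5%
```

The quadrature result is always smaller than the straight sum of the two errors, reflecting the unlikelihood of both errors peaking simultaneously.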

For example, the density of a rectangular-sided solid block of material can be calculated from measurements of its mass divided by the product of measurements of its length, height and width. The errors involved in each stage of arithmetic are cumulative, and so the total measurement error can be calculated by adding together the two error values associated with the two multiplication stages involved in calculating the volume and then calculating the error in the final arithmetic operation when the mass is divided by the volume.
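Because the fractional errors of uncorrelated measurements combine in quadrature at each multiplication and division stage, the overall fractional error in the density follows in one step; the 0.5% figures below are assumed illustrative values:

```python
import math

# density = m / (a * b * c): for uncorrelated errors, the fractional errors
# combine in quadrature across the multiplication and division stages
def density_fractional_error(fm, fa, fb, fc):
    return math.sqrt(fm ** 2 + fa ** 2 + fb ** 2 + fc ** 2)

e = density_fractional_error(0.005, 0.005, 0.005, 0.005)  # assumed 0.5% each
print(f"fractional error in density: {e:.3f}")
```

Four equal 0.5% contributions combine to a total of about 1%, rather than the 2% a straight worst-case sum would suggest.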

If the values and possible errors in quantities a, b, c and m are as shown below, calculate the value of density and the possible error in this value. In the solution, the value of the volume, the value of the density and the possible error in the density are calculated in turn.

Self-test questions

What are the typical sources of these two types of error? If the instrument measuring the output voltage across AB has a given resistance, what is the measurement error caused by the loading effect of this instrument? Instruments are normally calibrated and their characteristics defined for particular standard ambient conditions.

What procedures are normally taken to avoid measurement errors when using instruments that are subjected to changing ambient conditions? The voltage across a resistance R5 in the circuit of Figure 3. is to be measured, and the loading error of the measuring instrument assessed; a similar calculation is required for the circuit shown in Figure 3. What steps can be taken to reduce the effect of environmental inputs in measurement systems? The output of a potentiometer is measured by a voltmeter having a resistance Rm, as shown in Figure 3.

Rt is the resistance of the total length Xt of the potentiometer and Ri is the resistance between the wiper and common point C for a general wiper position Xi. (Hint: differentiate the error expression with respect to Ri and set the result to zero.) Hence estimate the accuracy to which the mean value is determined from these ten measurements. Calculate the mean value, the deviations from the mean and the standard deviation. The measurements in a data set are subject to random errors, but it is known that the data set fits a Gaussian distribution.

The thickness of a set of gaskets varies because of random manufacturing disturbances, but the thickness values measured belong to a Gaussian distribution. If the mean thickness is 3 mm and the standard deviation is given, the proportion of gaskets lying within stated tolerance limits can be calculated. A further question concerns a 3 volt d.c. supply. In order to calculate the heat loss through the wall of a building, it is necessary to know the temperature difference between the inside and outside walls.

The power dissipated in a car headlight is calculated by measuring the d.c. voltage across it and the current flowing through it. The resistance of a carbon resistor is measured by applying a d.c. voltage across it and measuring the current that flows; the possible error in the calculated resistance then depends on the errors in the voltage and current values.

References
Bennington, P.
Caulcott, E.
Miller, I.
Topping, J.

Calibration consists of comparing the output of the instrument or sensor under test against the output of an instrument of known accuracy when the same input (the measured quantity) is applied to both instruments.

This procedure is carried out for a range of inputs covering the whole measurement range of the instrument or sensor. Calibration ensures that the measuring accuracy of all instruments and sensors used in a measurement system is known over the whole measurement range, provided that the calibrated instruments and sensors are used in environmental conditions that are the same as those under which they were calibrated.

For use of instruments and sensors under different environmental conditions, appropriate correction has to be made for the ensuing modifying inputs, as described in Chapter 3. Whether applied to instruments or sensors, calibration procedures are identical, and hence only the term instrument will be used for the rest of this chapter, with the understanding that whatever is said for instruments applies equally well to single measurement sensors.

Instruments used as a standard in calibration procedures are usually chosen to be of greater inherent accuracy than the process instruments that they are used to calibrate. Because such instruments are only used for calibration purposes, greater accuracy can often be achieved by specifying a type of instrument that would be unsuitable for normal process measurements.

For instance, ruggedness is not a requirement, and freedom from this constraint opens up a much wider range of possible instruments. In practice, high-accuracy, null-type instruments are very commonly used for calibration duties, because the need for a human operator is not a problem in these circumstances. Instrument calibration has to be repeated at prescribed intervals because the characteristics of any instrument change over a period.

Changes in instrument characteristics are brought about by such factors as mechanical wear, and the effects of dirt, dust, fumes, chemicals and temperature changes in the operating environment. To a great extent, the magnitude of the drift in characteristics depends on the amount of use an instrument receives and hence on the amount of wear and the length of time that it is subjected to the operating environment.

However, some drift also occurs even in storage, as a result of ageing effects in components within the instrument. Determination of the frequency at which instruments should be calibrated is dependent upon several factors that require specialist knowledge.

What is important is that the pattern of performance degradation be quantified, such that the instrument can be recalibrated before its accuracy has reduced to the limit defined by the application. Susceptibility to the various factors that can cause changes in instrument characteristics varies according to the type of instrument involved. Possession of an in-depth knowledge of the mechanical construction and other features involved in the instrument is necessary in order to be able to quantify the effect of these quantities on the accuracy and other characteristics of an instrument.

The type of instrument, its frequency of use and the prevailing environmental conditions all strongly influence the calibration frequency necessary, and because so many factors are involved, it is difficult or even impossible to determine the required frequency of instrument recalibration from theoretical considerations.

Instead, practical experimentation has to be applied to determine the rate of such changes. Once the maximum permissible measurement error has been defined, knowledge of the rate at which the characteristics of an instrument change allows a time interval to be calculated that represents the moment in time when an instrument will have reached the bounds of its acceptable performance level. The instrument must be recalibrated either at this time or earlier.
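Assuming approximately linear drift (a simplifying assumption; as noted above, the actual rate of change must be established by practical experimentation), the interval calculation just described can be sketched as follows. The function name and the numerical values are illustrative, not taken from any standard:

```python
def recalibration_interval(max_error, error_at_calibration, drift_rate):
    """Time until an instrument's error grows from its post-calibration
    value to the maximum permissible level, assuming linear drift
    (drift_rate is in error units per unit time)."""
    if drift_rate <= 0:
        raise ValueError("drift rate must be positive")
    return (max_error - error_at_calibration) / drift_rate

# Example: the application tolerates 0.5% of full-scale error, the
# instrument reads within 0.1% just after calibration, and experience
# shows it drifts at about 0.02% per week.
weeks = recalibration_interval(0.5, 0.1, 0.02)
# about 20 weeks, so the instrument must be recalibrated at or before
# that point.
```

In practice the calculated interval would be rounded down to a convenient scheduling period, and reviewed as the calibration history accumulates.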

The measurement error level that an instrument reaches just before recalibration is the error bound that must be quoted in the documented specifications for the instrument. A proper course of action must be defined that describes the procedures to be followed when an instrument is found to be out of calibration, i.e. when its measurement error has exceeded these documented bounds.

The required action depends very much upon the nature of the discrepancy and the type of instrument involved. In many cases, deviations in the form of a simple output bias can be corrected by a small adjustment to the instrument, following which the adjustment screws must be sealed to prevent tampering.

In other cases, the output scale of the instrument may have to be redrawn, or scaling factors altered where the instrument output is part of some automatic control or inspection system. In extreme cases, where the calibration procedure shows up signs of instrument damage, it may be necessary to send the instrument for repair or even scrap it. Whatever system and frequency of calibration is established, it is important to review this from time to time to ensure that the system remains effective and efficient.
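The simple output-bias and scaling-factor corrections mentioned above both amount to a linear rescaling of instrument readings. A minimal sketch of deriving such a correction from two calibration reference points (the two-point form and the example values are illustrative assumptions, not a prescribed procedure):

```python
def make_correction(ref_low, read_low, ref_high, read_high):
    """Build a linear correction from two calibration points, each a
    pair of (true reference value, instrument reading) taken at the
    low and high ends of the measurement range."""
    scale = (ref_high - ref_low) / (read_high - read_low)
    bias = ref_low - scale * read_low
    return lambda reading: scale * reading + bias

# Instrument reads 2.0 against a true 0.0 reference and 102.0 against
# a true 100.0 reference: a pure +2.0 output bias, unit scale factor.
correct = make_correction(0.0, 2.0, 100.0, 102.0)
corrected = correct(52.0)  # removes the 2.0 offset
```

Where the instrument output feeds an automatic control or inspection system, the computed scale and bias would be applied in the downstream software rather than by redrawing the instrument's output scale.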

It may happen that a cheaper but equally effective method of calibration becomes available with the passage of time, and such an alternative system must clearly be adopted in the interests of cost efficiency. However, the main item under scrutiny in this review is normally whether the calibration interval is still appropriate.

Records of the calibration history of the instrument will be the primary basis on which this review is made. It may happen that an instrument starts to go out of calibration more quickly after a period of time, either because of ageing factors within the instrument or because of changes in the operating environment.
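One way such calibration history records can inform the review is by estimating the current drift rate from the as-found errors logged at successive calibrations. A sketch assuming a simple least-squares fit over (time, as-found error) pairs (the data layout and values are hypothetical):

```python
def drift_rate(history):
    """Estimate error growth per unit time from a calibration history
    given as (time, as-found error) pairs, using the least-squares
    slope of error against time."""
    n = len(history)
    mean_t = sum(t for t, _ in history) / n
    mean_e = sum(e for _, e in history) / n
    num = sum((t - mean_t) * (e - mean_e) for t, e in history)
    den = sum((t - mean_t) ** 2 for t, _ in history)
    return num / den

# As-found errors recorded at 0, 6 and 12 months:
rate = drift_rate([(0, 0.05), (6, 0.17), (12, 0.29)])
# rate is about 0.02 error units per month; an increasing slope over
# successive reviews signals that the calibration interval should shrink.
```

A rising estimated rate would justify shortening the recommended calibration interval; a falling one, as the text notes, may justify lengthening it.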

The conditions or mode of usage of the instrument may also be subject to change. As the environmental and usage conditions of an instrument may change beneficially as well as adversely, there is the possibility that the recommended calibration interval may decrease as well as increase. Any instrument that is used as a standard in calibration procedures must be kept solely for calibration duties and must never be used for other purposes.

Most particularly, it must not be regarded as a spare instrument that can be used for process measurements if the instrument normally used for that purpose breaks down. Proper provision for process instrument failures must be made by keeping a spare set of process instruments. Standard calibration instruments must be totally separate. To ensure that these conditions are met, the calibration function must be managed and executed in a professional manner.

This will normally mean setting aside a particular place within the instrumentation department of a company where all calibration operations take place and where all instruments used for calibration are kept. As far as possible this should take the form of a separate room, rather than a sectioned-off area in a room used for other purposes as well. This will enable better environmental control to be applied in the calibration area and will also offer better protection against unauthorized handling or use of the calibration instruments.

The level of environmental control required during calibration should be considered carefully with due regard to what level of accuracy is required in the calibration procedure, but should not be overspecified as this will lead to unnecessary expense.

Full air conditioning is not normally required for calibration at this level, as it is very expensive, but sensible precautions should be taken to guard the area against extremes of heat or cold, and good standards of cleanliness should also be maintained. Useful guidance on the operation of standards facilities can be found elsewhere (British Standards Society). Whilst it is desirable that all calibration functions are performed in this carefully controlled environment, it is not always practical to achieve this.
