DrosoBOTs: Code Implementation





Zulkifli Zainal Abidin
Postgraduate Student (PhD)
B.Eng Computer and Information Engineering (International Islamic University Malaysia)
Project title: DrosoBOTs - Swarms of Mini Autonomous Surface Vehicles with an Animal-Inspired Metaheuristic Algorithm Approach


In contrast to other existing algorithms, the fly algorithm first computes the best surrounding direction before moving in that direction. As in other algorithms such as the Bee Algorithm, shrinking is critically important here; current work on this new algorithm focuses on finding the best shrinking method. Although the algorithm is mainly based on gradient information, randomization is imposed in order to escape local optima. Its capability to solve problems with multiple peaks will also be harnessed.

As the purpose of this algorithm is mainly swarm robotics, this study focuses on the possibility of real swarming with the implementation of sensors. Previous algorithms concentrate purely on fast-varying motion that can be performed by the organism or applied to the nature of particles. Although FOA is still based on the motion of the fly, it also introduces features that can actually be performed by slow-varying agents such as an autonomous surface vehicle (ASV). A real agent collects data along its path; in our case this is a crucial point and must be taken into consideration. Thus, this algorithm is based on a scenario in which a fly collects data along its path and changes direction according to stochastic conditions.

Thus, under real circumstances, each agent would be able to investigate each peak in a particular confined area. The searching process can be described by the pseudocode below:

Proposed Pseudocode

1. Initialize using Lévy Flight motion
2. Choose the best location
3. While (terminating condition is not met)
4. Examine surrounding points and identify the best heading direction (find location attractiveness; simply "smelling")
5. Examine points along that direction at different distances (go to the location found); in this algorithm this is known as the shooting process
6. Select the best point as the next reference point and loop again
7. Terminate when the location change is within a range of 0.001
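The steps above can be sketched in code. The following Python sketch is illustrative only: the objective function, circle radius, fly count, and probing distances are all assumed values, and the Lévy-flight initialization is reduced to a fixed starting point for brevity.

```python
import math

def objective(p):
    """Hypothetical 'food concentration' to maximise; stands in for sensor data."""
    x, y = p
    return -((x - 3.0) ** 2 + (y - 1.0) ** 2)   # single assumed peak at (3, 1)

def smell(best, n_flies=12, radius=0.1):
    """Step 4: flies sit at equal angles on a small circle around the best-known
    point; return the heading of the most attractive sample."""
    angles = [2.0 * math.pi * k / n_flies for k in range(n_flies)]
    samples = [(a, (best[0] + radius * math.cos(a), best[1] + radius * math.sin(a)))
               for a in angles]
    return max(samples, key=lambda s: objective(s[1]))[0]

def shoot(best, heading, distances=(0.05, 0.1, 0.2, 0.5, 1.0, 2.0)):
    """Step 5: probe along the chosen heading at varying distances."""
    points = [(best[0] + d * math.cos(heading), best[1] + d * math.sin(heading))
              for d in distances]
    return max(points, key=objective)

def foa(start=(0.0, 0.0), tol=1e-3, max_iter=500):
    """Steps 3-7: alternate smelling and shooting until the move shrinks below
    tol or no candidate improves on the best-known point."""
    best = start
    for _ in range(max_iter):
        candidate = shoot(best, smell(best))
        if objective(candidate) <= objective(best):
            return best                         # no improvement: stop
        if math.dist(candidate, best) < tol:    # within the 0.001 range
            return candidate
        best = candidate
    return best
```

Running `foa()` homes in on the assumed peak at (3, 1); in the real algorithm the Lévy-flight initialization and stochastic direction changes replace the fixed start and deterministic probing used here.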

The FOA process begins with initialization using Lévy Flight motion. Each fly is dispatched and finds its own current best location. In the meantime, each fly also "smells" whether there is any source of food better than the points it has visited, and eventually determines the best heading direction for the next iteration. Like real fruit flies, they share information among themselves and stop at the location considered the most profitable among them. The detailed implementation of this pseudocode is described in the next section.

Lévy Flight motion is based on the Lévy distribution, which is a skewed distribution (Fig. 1). Hence, the destinations produced by Lévy Flight motion lie at greater distances than destinations drawn from a normal distribution when the scale parameter c is greater than 1.
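Drawing the initial fly positions then only needs a Lévy step sampler. The sketch below is a minimal way to generate such motion, assuming a two-dimensional flight with uniformly random headings; it uses the fact that c / Z² follows a Lévy(0, c) distribution when Z is a standard normal variable.

```python
import math
import random

def levy_step(c=1.0):
    """Sample a step length from a Lévy(0, c) distribution: if Z ~ N(0, 1),
    then c / Z**2 is Lévy-distributed (heavy right tail, always positive)."""
    z = random.gauss(0.0, 1.0)
    while z == 0.0:                      # guard against a zero draw
        z = random.gauss(0.0, 1.0)
    return c / (z * z)

def levy_flight(start, n_steps, c=1.0):
    """2-D Lévy flight: heavy-tailed step lengths with uniform random headings."""
    x, y = start
    path = [(x, y)]
    for _ in range(n_steps):
        theta = random.uniform(0.0, 2.0 * math.pi)
        r = levy_step(c)
        x, y = x + r * math.cos(theta), y + r * math.sin(theta)
        path.append((x, y))
    return path
```

The occasional very long step is exactly the heavy-tail behaviour that lets the initialization cover a wide area with few flies.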



Figure 1: The Lévy Distribution


The surroundings of the best-known point are then explored. To yield sensible results, the smelling process should involve 10 flies or more. A circle with a small radius is set around the best-known point (Fig. 2), and the flies are distributed on the perimeter of the circle at equal angles. The best direction is then chosen; this is the basic idea of the smelling algorithm. After the best direction is selected, the tracking (shooting) process is conducted along it. Instead of varying the angle as in the smelling process, the tracking process varies the radius of the flies involved from the best-known point. The point that is best, and better than the current best-known point, becomes the new best-known point.



Figure 2: The Shooting Process

Proportional-Integral-Derivative (PID) Control





Maziyah Mat Noh
Postgraduate Student (PhD)
BEng(hons) Electrical and Electronics from Univ. of Newcastle Upon Tyne, UK
MSc. in Automation and Control from Univ. of Newcastle Upon Tyne, UK

Project title: Modeling and controller design for USM underwater glider


The combination of proportional, integral, and derivative terms increases the speed of the response, eliminates the steady-state error, and reduces the overshoot. The PID controller block is shown in Figure 1.


Figure 1: PID Controller block diagram


The PID control law is given by:

u(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt

The gains Kp, Ki, and Kd are the tuning knobs, adjusted to obtain the desired output. The following speed-control example [4] is used to demonstrate the effect of increasing or decreasing each gain. The DC motor dynamics are represented by the second-order transfer function:

P(s) = Ω(s)/V(s) = K / ((Js + b)(Ls + R) + K²)

K = electromotive force constant = 0.01 Nm/A
b = damping ratio of the mechanical system = 0.1Nms
J = moment of inertia of the rotor = 0.01 kg·m²
R = electric resistance = 1 Ω
L = electric inductance = 0.5H
After we include the PID controller C(s) = Kp + Ki/s + Kd s, the closed-loop transfer function becomes:

T(s) = C(s)P(s) / (1 + C(s)P(s)) = K(Kd s² + Kp s + Ki) / ( s[(Js + b)(Ls + R) + K²] + K(Kd s² + Kp s + Ki) )

Figure 2 shows the comparison between proportional (P), PI, and PID controllers. The result clearly shows that with the PID controller we are able to eliminate the steady-state error and the overshoot of the response. With appropriate selection of the gains Kp, Ki, and Kd, we obtain the desired response: the PID controller gives zero percent overshoot, the fastest settling time (0.3 seconds), and zero steady-state error, whereas the P controller shows the worst performance, with a large steady-state error of about 10% and an overshoot of about 25%.
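The closed-loop behaviour described above can be reproduced with a short numerical simulation. The sketch below uses the motor constants quoted in the text; the gains Kp = 100, Ki = 200, and Kd = 10 are illustrative assumptions, not values from the text, and the integration is a plain forward-Euler loop rather than a proper ODE solver.

```python
# Forward-Euler simulation of the PID speed loop for the DC motor above.
# Motor constants are those quoted in the text; the PID gains are
# illustrative assumptions, not values from the text.

K, b, J, R, L = 0.01, 0.1, 0.01, 1.0, 0.5   # motor constants
Kp, Ki, Kd = 100.0, 200.0, 10.0             # assumed PID gains

def simulate(t_end=5.0, dt=1e-4, ref=1.0):
    """Simulate the closed loop and return the final speed (rad/s)."""
    omega = current = integral = 0.0
    prev_error = ref - omega
    for _ in range(int(t_end / dt)):
        error = ref - omega
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        v = Kp * error + Ki * integral + Kd * derivative   # control voltage
        # Motor dynamics:  L di/dt = v - R i - K omega
        #                  J domega/dt = K i - b omega
        current += dt * (v - R * current - K * omega) / L
        omega += dt * (K * current - b * omega) / J
    return omega

print(simulate())  # settles at the 1 rad/s reference: no steady-state error
```

With the integral term present, the simulated speed settles at the reference with no steady-state error, matching the qualitative PID behaviour described above.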

Figure 2: Comparison of P, PI, and PID controllers


Gene F. Franklin, J. David Powell, and Abbas Emami-Naeini, Feedback Control of Dynamic Systems, 3rd ed. Addison-Wesley, 1994.


Moving Beyond Single Processor on Chip




Muataz H. Salih Al-Doori
Research Engineer, PhD
B.Sc. Computer Engineering (Univ. of Technology, Baghdad/Iraq), M.Sc. Computer Engineering (Univ. of Technology, Baghdad/Iraq).

Project Title: Design and Implementation of Embedded Multiprocessor SoC for Tracking and Navigation Systems Using FPGA Technology


Moving Beyond Single Processor on Chip

In the 1990s RISC microprocessors became very popular, to the point that they replaced some CISC architectures. The success of ASICs and SoCs eased the advent of post-RISC processors, which are usually generic RISC architectures augmented with additional components. Several different approaches have been proposed to accelerate a microprocessor, but in general the main idea is to start from a general-purpose RISC core and add extra components such as dedicated hardware accelerators. Retaining a certain degree of software programmability is valuable because applications and protocols change fast, so having a programmable core in the system is recommended to guarantee general validity and flexibility for the platform. One possible way of accelerating a programmable core is to exploit instruction and/or data parallelism in applications by providing the processor with Very Long Instruction Word (VLIW) or Single-Instruction Multiple-Data (SIMD) extensions. Another is to add special functional units (for example, MAC circuits, barrel shifters, or other components designed to speed up the execution of algorithms) to the datapath of the programmable core; this way the instruction set of the core is extended with specialized instructions aimed at speeding up operations that are both heavy and frequent. This approach, however, is not always possible or convenient, especially if the component to plug into the pipeline is very large.

Moreover, large ASIC blocks are no longer so convenient: they usually cost a lot and lack flexibility, becoming useless whenever the application or the standard they implement changes. For this reason many microprocessors come with a special interface meant to ease the attachment of external accelerators. There are basically two possibilities: using large, general-purpose accelerators that serve as many applications as possible, or using very large, powerful, run-time reconfigurable accelerators. The design and verification issues related to coprocessors can be faced independently from those related to the main processor: this makes it possible to parallelize the design activities, saving time, or (in case the core already exists before the coprocessors are designed) the coprocessors can simply be plugged into the system as black boxes, with no need to modify the architecture of the processor.

Today a single chip can host an entire system that is cheap and at the same time powerful enough to run several applications, including demanding ones like image, video, graphics, and audio processing, which are becoming extremely popular at the consumer level even in portable devices. These applications must then be carefully analyzed to determine an optimal way to map them onto the available hardware: since applications are normally made up of a control part and a computation part, the first stage usually consists in locating the computational kernels of the algorithms.

These kernels are usually mapped onto dedicated parts of the system (namely, dedicated processing engines) optimized to exploit the regularity of the operations performed on large amounts of data, while the remaining code (the control part) is implemented in software running on a regular microprocessor. Sometimes special versions of known algorithms are devised in order to meet the demand for an optimal implementation on hardware circuits. Different application domains call for different kinds of accelerators: for example, applications like robotics, automation, Dolby digital audio, and 3D graphics require floating-point computation, making the insertion of floating-point units (FPUs) very useful and sometimes even necessary. To cover the broad range of modern, computationally demanding applications like imaging, video compression, and multimedia, we also need other kinds of accelerators: those applications usually benefit from regular, vector architectures able to exploit the regularity of data while satisfying high bandwidth requirements. One possibility is to produce so-called multimedia SoCs, usually a particular version of multiprocessor systems (MPSoCs) containing different types of processors, which meet the demands far better than homogeneous MPSoCs. Such machines are usually quite large, so a widely accepted and very effective way of solving this problem is to make those architectures run-time reconfigurable. This means the hardware is designed so that the datapath of the architecture can be changed by modifying the values of special bits, named configuration bits. A first example of reconfigurable devices that became very popular is the FPGA, which can be used to implement virtually any circuit by sending the right configuration bits to the device. The idea of reconfigurability was then developed further, leading to custom devices used to implement powerful computation engines; this way it is possible to implement several different functionalities on the same component, saving area while tailoring the hardware at run-time to implement an optimal circuit for a given application. Reconfigurability is an excellent means of combining the performance of hardware circuits with the flexibility of programmable architectures.

Accelerators come in different forms and can differ greatly from each other: differences can relate to the purpose for which they are designed (accelerators can be specifically designed to implement a single algorithm, or can instead support a broad series of different applications), their implementation technology (ASIC custom design, ASIC standard cells, FPGAs), the way they interface to the rest of the system, and their architecture. Thus, multiple processors now exist on a single chip.




The Low Dropout (LDO) Regulator





Alireza Nazem
Postgraduate Student (PhD)
B.Sc. U. of Guilan, Iran
M.Sc. Electronic Engineering (USM)

Project title: Underwater autonomous navigation system using FLTZ Logic concept


Linear and Switching Power Supply Fundamentals - Part 3

The Low Dropout (LDO) Regulator

The low-dropout (LDO) regulator differs from the standard regulator in that the pass device of the LDO is made up of only a single PNP transistor (Figure 3).

Figure 3: The LDO regulator


The minimum voltage drop required across the LDO regulator to maintain regulation is just the voltage across the PNP transistor: VD(MIN) = VCE (LDO Regulator)

The maximum specified dropout voltage of an LDO regulator is usually about 0.7V to 0.8V at full current, with typical values around 0.6V. The dropout voltage is directly related to load current, which means that at very low values of load current the dropout voltage may be as little as 50 mV. The LDO regulator has the lowest (best) dropout voltage specification of the three regulator types.

The lower dropout voltage is the reason LDO regulators dominate battery-powered applications, since they maximize the utilization of the available input voltage and can operate with higher efficiency. The explosive growth of battery-powered consumer products in recent years has driven development in the LDO regulator product line.

The ground pin current in an LDO regulator is approximately equal to the load current divided by the gain of the single PNP transistor. Consequently, the ground pin current of an LDO is the highest of the three types. For example, an LP2953 LDO regulator delivering its full rated current of 250 mA is specified to have a ground pin current of 28 mA (or less), which translates to a PNP gain of 9 or higher. The LM2940 (which is a 1A LDO regulator) has a ground pin current specification of 45 mA (max) at full current. This requires a current gain of not less than 22 for the PNP pass transistor at rated current.
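The gain figures quoted above follow directly from the ratio of load current to ground pin current; a one-line helper (hypothetical, for illustration only) makes the arithmetic explicit:

```python
# Ground pin current of an LDO is roughly load current / PNP current gain
# (beta), so a ground pin current spec implies a minimum gain for the pass
# transistor. Hypothetical helper for the figures quoted in the text.

def min_pnp_gain(load_current_a, ground_pin_current_a):
    """Minimum current gain implied by a ground pin current specification."""
    return load_current_a / ground_pin_current_a

print(min_pnp_gain(0.250, 0.028))  # LP2953: about 8.9, i.e. a gain of 9 or higher
print(min_pnp_gain(1.0, 0.045))    # LM2940: about 22.2, i.e. not less than 22
```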

Responses, Directivity and Intensity





Mohd Ikhwan Hadi Bin Yaacob
Postgraduate Student (PhD)
M.Sc. Physics, Instrumentations (UTM Skudai)
B.Sc. Physics, Industrial Physics (UTM Skudai)

Project title: Micromachined Parametric Array Acoustic Transducers for Sonar Application



Principle of Underwater Acoustic Transducer Design (Part 2)

Responses, Directivity and Intensity

Responses of an acoustic transducer measure its ability to radiate and detect sound. In most cases, sound radiated into the medium needs to be detected by the same transducer module or array. In more technical terms, a response can be defined as the transducer output per unit input as a function of a specific parameter; it can be frequency, a fixed drive condition, or a dimensional parameter.

The directivity of the sound radiated from the transducer changes with the frequency of the signal and the distance from the transducer. However, in the far field at a fixed frequency, the directional characteristics of the transducer become independent of distance. At this point, the sound pressure is inversely proportional to distance. The direction in which the maximum acoustic intensity occurs is known as the acoustic axis or maximum response axis (MRA).

Another important term from this interpretation is the directivity factor. It can be defined as the ratio of the maximum acoustic intensity, Io, to the acoustic intensity averaged over all directions, Ia, at the same distance in the far field. For a deeper understanding, the average intensity Ia should be interpreted as the total radiated acoustic power, W, divided by the area of a sphere at the distance r. Mathematically, the directivity factor can be written as:

Df = Io / Ia = 4π r² Io / W

As a standard practice, the directivity index is the directivity factor stated in decibels (dB):

DI = 10 log10(Df)

For a conventional macro-sized transducer, when the area A of the transducer's vibrating surface is larger than the acoustic wavelength λ, and with the assumption of uniform normal velocity of the surface, the directivity factor can be approximated as:

Df ≈ 4πA / λ²
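As a quick numerical illustration of this approximation, the helper below evaluates the directivity factor and index for an assumed circular piston (radius 2 cm) radiating at 100 kHz in water (wavelength 15 mm); both the geometry and the frequency are hypothetical example values, not taken from the text.

```python
import math

def directivity_factor(area_m2, wavelength_m):
    """Df = 4*pi*A / lambda**2, valid when A is larger than the wavelength."""
    return 4.0 * math.pi * area_m2 / wavelength_m ** 2

def directivity_index(df):
    """Directivity index: DI = 10*log10(Df), in dB."""
    return 10.0 * math.log10(df)

# Assumed example: 2 cm radius piston at 100 kHz in water (c = 1500 m/s).
area = math.pi * 0.02 ** 2          # vibrating surface area, m^2
wavelength = 1500.0 / 100e3         # 15 mm
print(directivity_index(directivity_factor(area, wavelength)))  # roughly 18.5 dB
```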

Another vital characteristic of an acoustic transducer is the source level. It can be interpreted as a measure of the far-field pressure a transducer can produce on its maximum response axis. In a lossless medium, the total radiated power is independent of distance. However, since the pressure varies inversely with distance, a reference distance for the source level is a must; the standard is 1 m from the acoustic center of the transducer. The source level is defined as the ratio, in decibels (dB), of the rms pressure amplitude at 1 m from the transducer, prms(1 m), to the reference pressure of 1 micropascal (1 µPa):

SL = 20 log10( prms(1 m) / 1 µPa )

The source level can also be written in terms of the total radiated acoustic power and the directivity index by using the relationships:

Io = Df W / (4π r²)


I = prms² / (ρc)

where ρ is the density of the medium and c is the speed of sound in the medium (both constants for a given medium). As an example, for water, where ρc = 1.5 × 10⁶ kg/(m²·s):

SL = 170.8 + 10 log10(W) + DI   (dB re 1 µPa at 1 m)

Here W, the acoustic output power in watts, is the input electrical power reduced by the electroacoustic efficiency. The source level corresponding to the maximum acoustic power is a very important measure of an acoustic projector. Furthermore, the transmitting voltage and current responses are defined as the source level for an input of 1 V or 1 A rms. The free-field voltage receiving response, however, is different: it is defined as the open-circuit voltage output for a free-field pressure input of 1 µPa in a plane wave arriving on the MRA.
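Putting the pieces together, the water-specific relation SL = 170.8 + 10 log10 W + DI can be evaluated directly. The sketch below is a trivial helper; the 100 W power and 20 dB directivity index are assumed example values for illustration.

```python
import math

def source_level_db(acoustic_power_w, directivity_index_db):
    """Source level in dB re 1 uPa at 1 m, for water (rho*c = 1.5e6 kg/(m^2*s))."""
    return 170.8 + 10.0 * math.log10(acoustic_power_w) + directivity_index_db

print(source_level_db(100.0, 20.0))  # about 210.8 dB re 1 uPa at 1 m
```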

