
Volume 2, Issue 1



URL: http://www.ijaet.org
E-mail: editor@ijaet.org



International Journal of Advances in Engineering & Technology, Nov 2011. 

©IJAET ISSN: 2231-1963 

Table of Contents



S. No. Article Title & Authors (Vol. 2, Issue 1, Jan-2012) Page Nos.

1. STATE OF ART: HAND BIOMETRIC 1-9

Sarah BENZIANE and Abdelkader BENYETTOU 

2. USING DYNAMIC DUAL KEYS ENCRYPTION ALGORITHM AS 10-18 
PARTIAL ENCRYPTION FOR A REAL-TIME DIGITAL VIDEO 

Abdul Monem S. Rahma and Basima Z.Yacob 

3. DESIGN AND PROTOTYPING OF A MINIATURIZED SENSOR FOR 19-26 
NON-INVASIVE MONITORING OF OXYGEN SATURATION IN 
BLOOD 

Roberto Marani, Gennaro Gelao and Anna Gina Perri 

4. EFFECTS OF PGPR ON GROWTH AND NUTRIENTS UPTAKE OF 27-31 
TOMATO 

Shahram Sharafzadeh 

5. THE APPLICATION OF PSO TO HYBRID ACTIVE POWER FILTER 32-42 
DESIGN FOR 3 PHASE 4-WIRE SYSTEM WITH BALANCED & 
UNBALANCED LOADS 

B. Suresh Kumar, K. Ramesh Reddy & S. Archana 

6. A SURVEY OF COUPLING MEASUREMENT IN OBJECT 43-50 
ORIENTED SYSTEMS 

V. S. Bidve and Akhil Khare 

7. THE COMPUTER ASSISTED EDUCATION AND ITS EFFECTS ON 51-61 
THE ACADEMIC SUCCESS OF STUDENTS IN THE LIGHTING 
TECHNIQUE AND INDOOR INSTALLATION PROJECT COURSE 

Ismail Kayri, Muhsin Tunay Gencoglu and Murat Kayri 

8. FRACTAL CHARACTERIZATION OF EVOLVING TRAJECTORIES 62-72 
OF DUFFING OSCILLATOR 

Salau, T. A.O. and Ajide, O.O. 

9. SANKEERNA: A LINEAR TIME, SYNTHESIS AND ROUTING 73-89 
AWARE, CONSTRUCTIVE VLSI PLACER TO ACHIEVE 
SYNERGISTIC DESIGN FLOW 

Santeppa Kambhaml and Siva Rama Krishna Prasad Kolli 

10. A NEW VARIANT OF SUBSET-SUM CRYPTOSYSTEM OVER RSA 90-97 
Sonal Sharma, Saroj Hiranwal, Prashant Sharma 




11. A COMPACT DUAL BAND PLANAR RMSA FOR WLAN/WIMAX 98-104
APPLICATIONS 

C. R. Byrareddy, N. C. Easwar Reddy, C. S. Sridhar 

12. VLSI ARCHITECTURE FOR LOW POWER VARIABLE LENGTH 105-120 
ENCODING AND DECODING FOR IMAGE PROCESSING 
APPLICATIONS 

Vijaya Prakash. A.M & K.S. Gurumurthy 

13. VERIFICATION ANALYSIS OF AHB-LITE PROTOCOL WITH 121-128 
COVERAGE 

Richa Sinha, Akhilesh Kumar and Archana Kumari Sinha 

14. IMPACT OF VOLTAGE REGULATORS IN UNBALANCED RADIAL 129-138 
DISTRIBUTION SYSTEMS USING PARTICLE SWARM 
OPTIMIZATION 

Puthireddy Umapathi Reddy, Sirigiri Sivanagaraju 

15. STUDY ON PERFORMANCE OF CHEMICALLY STABILIZED 139-148 
EXPANSIVE SOIL 

P. VenkaraMuthyalu, K. Ramu and G.V.R. Prasada Raju 

16. DESIGNING AN AUTOMATED SYSTEM FOR PLANT LEAF 149-158 
RECOGNITION 

Jyotismita Chaki and Ranjan Parekh

17. FUZZY CONTROL OF SQUIRREL CAGE INDUCTION MACHINE 159-167 
WIND GENERATION SYSTEM 

B. Ravichandra Rao and R. Amala Lolly 

18. AN ADVANCED WIRELESS SENSOR NETWORK FOR LANDSLIDE 168-178 
DETECTION 

Romen Kumar.M & Hemalatha 

19. EVALUATION OF PHONETIC MATCHING APPROACHES FOR 179-189 
HINDI AND MARATHI: INFORMATION RETRIEVAL 

Sandeep Chaware and Srikantha Rao 

20. DESIGN OF ENERGY-EFFICIENT FULL ADDER USING HYBRID- 190-202 
CMOS LOGIC STYLE 

Mohammad Shamim Imtiaz, Md Abdul Aziz Suzon, Mahmudur Rahman 

21. EXAM ONLINE: E-ENABLING EXTENDED LEARNING, ANSWER 203-209 
AND ESSAY EXAMINATIONS 

Abdulghader. A. Ahmed, Dalbir S., Ibrahim M. 






22. NOISE MODELING OF SIGE HBT BASED ON THE 210-219
CHARACTERIZATION OF EXTRACTED Y- AND Z- PARAMETERS
FOR HF APPLICATIONS

Pradeep Kumar and R.K. Chauhan

23. DIELECTRIC PROPERTIES OF NORTH INDIAN OCEAN 220-226
SEAWATER AT 5 GHZ

A.S. Joshi, S.S. Deshpande, M.L. Kurtadikar

24. AN EFFICIENT DECISION SUPPORT SYSTEM FOR DETECTION 227-240 
OF GLAUCOMA IN FUNDUS IMAGES USING ANFIS 

S.Kavitha, K.Duraiswamy 

25. STEP-HEIGHT MEASUREMENT OF SURFACE FUNCTIONALIZED 241-248 
MICROMACHINED MICROCANTILEVER USING SCANNING 
WHITE LIGHT INTERFEROMETRY 

Anil Sudhakar Kurhekar and P. R. Apte 

26. EXPERIMENTAL INVESTIGATION ON FOUR STROKE CERAMIC 249-257 
HEATER SURFACE IGNITION C.I. ENGINE USING DIFFERENT 
BLENDS OF ETHYL ALCOHOL 

R.Rama Udaya Marthandan, N.Sivakumar, B. Durga Prasad 

27. PERFORMANCE VERIFICATION OF DC-DC BUCK CONVERTER 258-268 
USING SLIDING MODE CONTROLLER FOR COMPARISON WITH 

THE EXISTING CONTROLLERS - A THEORETICAL APPROACH 

Shelgaonkar (Bindu) Arti Kamalakar, N. R. Kulkarni 

28. PERFORMANCE EVALUATION OF DS-CDMA SYSTEM USING 269-281 
MATLAB 

Athar Ravish Khan 

29. RECENT PHILOSOPHIES OF AGC OF A HYDRO-THERMAL 282-288 
SYSTEM IN DEREGULATED ENVIRONMENT 

L. Shanmukha Rao, N. Venkata Ramana

30. DYNAMIC ROUTING SCHEME IN ALL-OPTICAL NETWORK 289-298
USING RESOURCE ADAPTIVE ROUTING SCHEME

S. Suryanarayana, K. Ravindra, K. Chennakesava Reddy

31. ENHANCED BANDWIDTH UTILIZATION IN WLAN FOR 299-308
MULTIMEDIA DATA

Z. A. Jaffery, Moinuddin, Munish Kumar

32. ANALYSIS AND INTERPRETATION OF LAND RESOURCES 309-314
USING REMOTE SENSING AND GIS: A CASE STUDY

S.S. Asadi, B.V.T. Vasantha Rao, M.V. Raju and M. Anji Reddy

33. IPV6 DEPLOYMENT STATUS, THE SITUATION IN AFRICA AND 315-322 
WAY OUT 

Agbaraji E.C., Opara F.K., and Aririguzo M.I. 

34. STUDY AND REALIZATION OF DEFECTED GROUND 323-330 
STRUCTURES IN THE PERSPECTIVE OF MICROSTRIP FILTERS 

AND OPTIMIZATION THROUGH ANN 

Bhabani Sankar Nayak, Subhendu Sekhar Behera, Atul Shah 

35. ANALYSIS OF DISCRETE & SPACE VECTOR PWM CONTROLLED 331-341 
HYBRID ACTIVE FILTERS FOR POWER QUALITY 
ENHANCEMENT 

Jarupula Somlal, Venu Gopala Rao Mannam

36. COMPARISONS AND LIMITATIONS OF BIOHYDROGEN 342-356 
PRODUCTION PROCESSES: A REVIEW 

Karthic Pandu and Shiny Joseph 

37. MORPHOMETRIC AND HYDROLOGICAL ANALYSIS AND 357-368 
MAPPING FOR WATUT WATERSHED USING REMOTE SENSING 

AND GIS TECHNIQUES 

Babita Pal, Sailesh Samanta and D. K. Pal 

38. ADAPTIVE HYSTERESIS BAND CURRENT CONTROL FOR 369-376 
TRANSFORMERLESS SINGLE-PHASE PV INVERTERS 

B. Nagaraju , K. Prakash 

39. PARAMETRIC STUDY OF A NOVEL STACKED PATCH ANTENNA 377-384 
V. Rajya Lakshmi, M. Sravani, G.S.N.Raju 

40. BIOMETRICS STANDARDS AND FACE IMAGE FORMAT FOR 385-392 
DATA INTERCHANGE - A REVIEW 

Nita M. Thakare and V. M. Thakare 

41. COMPARATIVE ANALYSIS OF LOW-LATENCY ON DIFFERENT 393-400 
BANDWIDTH AND GEOGRAPHICAL LOCATIONS WHILE USING 
CLOUD BASED APPLICATIONS 

N. Ajith Singh and M. Hemalatha 

42. EVALUATION OF TEXTURAL FEATURE EXTRACTION FROM 401-409 
GRLM FOR PROSTATE CANCER TRUS MEDICAL IMAGES 

R. Manavalan and K. Thangavel

43. ANALYSIS AND MULTINOMIAL LOGISTIC REGRESSION 410-418 
MODELLING OF WORK STRESS IN MANUFACTURING 


INDUSTRIES IN KERALA, INDIA 
K. Satheesh Kumar and G. Madhu 

44. AUDIO DENOISING USING WAVELET TRANSFORM 419-425 
B. JaiShankar and K. Duraiswamy 

45. HYBRID ACTIVE POWER FILTER USING FUZZY DIVIDING 426-432 
FREQUENCY CONTROL METHOD 

SaiRam.I, Bindu.V and K.K. Vasishta Kumar 

46. MINIMUM LINING COST OF TRAPEZOIDAL ROUND CORNERED 433-436 
SECTION OF CANAL 

Syed Zafar Syed Muzaffar, S. L. Atmapoojya, D.K. Agarwal 

47. VOLTAGE CONTROL AND DYNAMIC PERFORMANCE OF 437-442 
POWER TRANSMISSION SYSTEM USING STATCOM AND ITS 
COMPARISON WITH SVC 

Amit Garg and Sanjai Kumar Agarwal 

48. ASSOCIATION RULE MINING ALGORITHMS FOR HIGH 443-454 
DIMENSIONAL DATA - A REVIEW 

K.Prasanna and M.Seetha 

49. ACHIEVING EFFICIENT LOAD BALANCING IN PEER TO PEER 455-462 
NETWORK 

Ritesh Dayama, Ranjeet Kagade, Kedar Ghogale 

50. ANALYSIS AND SIMULATION OF SERIES FACTS DEVICES TO 463-473 
MINIMIZE TRANSMISSION LOSS AND GENERATION COST 

M. Balasubba Reddy, Y. P. Obulesh and S. Sivanaga Raju 

51. MODELING AND SIMULATION OF THE PATCH ANTENNA BY 474-484 
USING A BOND GRAPH APPROACH 

Riadh Mehouachi, Hichem Taghouti, Sameh Khmailia and Abdelkader 
Mami 

52. DESIGN AND VERIFICATION ANALYSIS OF AVALON 485-492 
INTERRUPT INTERFACE WITH COVERAGE REPORT 

Mahesh Kumar Jha, Richa Sinha and Akhilesh Kumar 

53. CONCATENATION OF BCH CODE WITH SPACE TIME CODE A 493-500 
LOW SNR APPROACH FOR COMMUNICATION OVER POWER 

LINES FOR SUBSTATION AUTOMATION 

Rajeshwari Itagi, Vittal K. P., U. Sripati 

54. UTILIZATION OF EXTRUSION AS AN ADVANCED 501-507
MANUFACTURING TECHNIQUE IN THE MANUFACTURE OF
ELECTRIC CONTACTS

Virajit A. Gundale, Vidyadhar M. Dandge

55. CONTROL AND PERFORMANCE OF A CASCADED H-BRIDGE 508-519
MLI AS STATCOM

M. Vishnu Prasad and K. Surya Suresh

56. IMAGE RETRIEVAL USING TEXTURE FEATURES EXTRACTED 520-531
USING LBG, KPE, KFCG, KMCG, KEVR WITH ASSORTED COLOR
SPACES

H.B. Kekre, Sudeep D. Thepade, Tanuja K. Sarode, Shrikant P. Sanas

57. DEVELOPMENT OF A SIMPLE & LOW-COST 532-542
INSTRUMENTATION SYSTEM FOR REAL TIME VOLCANO
MONITORING

Didik R. Santoso, Sukir Maryanto and A.Y. Ponco Wardoyo
58. INTENSIFIED ELGAMAL CRYPTOSYSTEM (IEC) 543-551 
Prashant Sharma, Amit Kumar Gupta, Sonal Sharma 

59. A ZIGZAG-DELTA PHASE-SHIFTING TRANSFORMER AND 552-563 
THREE-LEG VSC BASED DSTATCOM FOR POWER QUALITY 
IMPROVEMENT 

R.Revathi and I.Ramprabu 

60. MODELING AND SIMULATION OF NANOSENSOR ARRAYS FOR 564-577 
AUTOMATED DISEASE DETECTION AND DRUG DELIVERY 

UNIT 

S.M. Ushaa, Vivek Eswaran 

61. MISSING BOUNDARY DATA RECONSTRUCTION BY AN 578-586 
ALTERNATING ITERATIVE METHOD 

Chakir Tajani and Jaafar Abouchabaka 

62. A COMPREHENSIVE PERFORMANCE ANALYSIS OF ROUTING 587-593 
PROTOCOL FOR ADHOC NETWORK 

Sachin Dahiya, Manoj Duhan, Vikram Singh 

63. COMPARISON OF GA AND LQR TUNING OF STATIC VAR 594-601 
COMPENSATOR FOR DAMPING OSCILLATIONS 

Nuraddeen Magaji, Mukhtar F. Hamza, Ado Dan-Isa 

64. INVERTED SINE PULSE WIDTH MODULATED THREE-PHASE 602-610
CASCADED MULTILEVEL INVERTER

R. Seyezhai

65. ON THE SUNSPOT TIME SERIES PREDICTION USING JORDON 611-621
ELMAN ARTIFICIAL NEURAL NETWORK (ANN)

Rohit R. Deshpande and Athar Ravish Khan

66. SESSION DATA PROTECTION USING TREE-BASED 622-631
DEPENDENCY

G. Shruthi, Jayadev Gyani, R. Lakshman Naik, G. Sireesh Reddy

67. ESTIMATION AND OPTIMIZATION OF PROSODIC TO IMPROVE 632-639 
THE QUALITY OF THE ARABIC SYNTHETIC SPEECH 

Abdelkader CHABCHOUB & Adnen CHERIF 

68. INTEGRATION OF CONTROL CHARTS AND DATA MINING FOR 640-648 
PROCESS CONTROL AND QUALITY IMPROVEMENT 

E. V. Ramana and P. Ravinder Reddy 

69. BINS APPROACH TO IMAGE RETRIEVAL USING STATISTICAL 649-659 
PARAMETERS BASED ON HISTOGRAM PARTITIONING OF R, 

G, B PLANES 

H. B. Kekre and Kavita Sonawane 

70. ADAPTIVE ALGORITHM FOR CALIBRATION OF ARRAY 660-667 
COEFFICIENTS 

K. Ch. Sri Kavya, B. V. Raj Gopala Rao, Gopala Krishna.N, J. 
Supriyanka, J.V.Suresh, Kota Kumar, Habibulla Khan, Fazal Noor Basha 



71. SYNTHESIS AND CHARACTERIZATION OF CATALYSTS 668-676 
CONTAINING MOLYBDENUM AND TUNGSTEN AND THEIR 
APPLICATION IN PARAFFIN ISOMERIZATION 

Aoudjit Farid 



Members of IJAET Fraternity 






State of Art: Hand Biometric 

Sarah BENZIANE 1 and Abdelkader BENYETTOU 2



1 Institute of Maintenance and Industrial Safety, University of Oran Es-Senia, Algeria
2 Department of Computer Science, Faculty of Science, University of Science & Technology Mohamed Boudiaf of Oran, Algeria

Abstract 

This paper presents a state of the art of hand biometrics and of the different techniques used. Biometrics is essentially used to avoid the risks of passwords that are easy to guess or steal, with the goal of saving time and simplifying attendance control. Biometrics is a true alternative to passwords and other identifiers for securing access control: it makes it possible to check that the user really is the person he or she claims to be.

KEYWORDS: Hand, palmprint, geometry, biometric system, identification, authentication, verification.

I. Introduction 

Biometrics is growing rapidly and tends to join other security technologies such as the smart card. Among the biometric systems used today, hand biometrics is one of the most widely accepted by users, because they do not feel that it intrudes on their private life. A survey of 129 users of the hand geometry biometric system at Purdue University's Recreation Centre illustrated its many advantages: 93% of the participants liked using the technology, 98% liked its ease of use, and almost no one found the technology intrusive [KUK06].

This is why hand biometric recognition has nowadays been developed with great success for biometric authentication and identification. The biometric recognition process identifies a person based on physical and behavioral features. Each person has characteristics that are specific to him or her: voice, fingerprints, facial features, signature, DNA and, of course, hand physiognomy and physiology; an overview of such systems can be found in [ROS06]. The hand is the most appropriate modality for some situations and scenarios.

For the hand biometric modality, the main features used include the length and width of the fingers, the shape of the phalanges and joints, and the lines of the hand.

Systems based on hand biometrics are very easy to use. However, the hardware occasionally produces errors due to hand injuries or hand aging. That aside, these systems give very high accuracy where a medium security level is required. In the long term, though, their stability is only average and needs to be improved. Most previous works have built contact-based hand biometric systems [SAN00].
The remainder of this paper is organized as follows. In Section 2, we present why we use hand biometrics. In Section 3, we describe how a hand biometric system works. In Section 4, we present hand identification techniques. In Section 5, we present bottom-up feature-based methods. In Section 6, we present data capture. In Section 7, we present hand biometric identification/authentication. In Section 8, we present a tabular overview of existing methods. In the last section, we offer our conclusion.

II. Why Hand Biometric? 

The suitability of a specific biometric for a particular application depends on many issues [50]; among them, user acceptability appears to be the most important [JAI97]. For various access control applications, such as immigration, border control and dormitory meal plan access, very distinctive biometrics, e.g. fingerprint and iris, may not be suitable because of concerns about protecting the person's privacy. In such circumstances, it is preferable that the biometric key be only unique enough for verification but not for identification. The evaluation of a biometric method depends on reliability, security, performance, cost, user acceptance, liveness detection, the user population and the size of the sensor. One of the advantages of the hand is that it raises few aging issues, for both young and old users.

III. HOW DOES HAND BIOMETRIC SYSTEM WORK? 

A hand biometric system works like systems based on other modalities such as fingerprint, voice or iris. It may differ only on a few points, such as the way the information is secured. Generally, the scenario below (Fig. 1) is used to design a hand biometric system, or any other biometric system:







Figure 1: Hand biometric system scenario. User enrolment: PRESENT hand to sensor → CAPTURE raw hand data → PROCESS hand features → CRYPT hand features to a key → STORE biometric key. User identification / user verification: PRESENT hand to sensor → CAPTURE raw hand data → PROCESS hand features → UNCRYPT the stored key → match or do not match.



It is based on three basic processes: enrolment, verification and identification. The enrolment phase adds a biometric identifier to the database. Verification, better known as one-to-one matching, makes sure that the person is who he or she claims to be by matching against a single record. Identification, better known as one-against-all matching, finds who the individual is by matching against all the records in the database.
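As an illustration of these three processes, the sketch below shows a minimal enrolment/verification/identification loop. It is not the system described in the paper: the feature extractor, the Euclidean distance and the threshold value are assumptions made purely for the example.

```python
import numpy as np

# Minimal sketch of the three basic processes of a biometric system.
# The feature extractor, the distance metric and the threshold are
# illustrative assumptions, not the method described in the paper.

THRESHOLD = 0.5          # assumed acceptance threshold
database = {}            # user_id -> stored template (biometric key)

def extract_features(raw_hand_image: np.ndarray) -> np.ndarray:
    """Placeholder feature extraction: any fixed-length hand feature vector."""
    return raw_hand_image.astype(float).ravel()[:16]

def enroll(user_id: str, raw_hand_image: np.ndarray) -> None:
    """Enrolment: add a biometric identifier (template) to the database."""
    database[user_id] = extract_features(raw_hand_image)

def verify(user_id: str, raw_hand_image: np.ndarray) -> bool:
    """Verification (one-to-one): match against a single stored record."""
    probe = extract_features(raw_hand_image)
    return np.linalg.norm(probe - database[user_id]) < THRESHOLD

def identify(raw_hand_image: np.ndarray):
    """Identification (one-against-all): match against every record."""
    probe = extract_features(raw_hand_image)
    best_id, best_dist = None, float("inf")
    for user_id, template in database.items():
        dist = np.linalg.norm(probe - template)
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    return best_id if best_dist < THRESHOLD else None
```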

IV. Hand Identification 

There are three clusters of characteristics used in hand identification, also called bottom-up features:

Geometric features, such as the width, length and area of the palm. Geometric features are a rough measurement and are not sufficiently distinctive;

Line features, i.e. principal lines and wrinkles. Line features identify the size, position, depth and length of the various lines and wrinkles on a palm. Although wrinkles are very distinctive and are not easily copied, principal lines may not be sufficiently distinct to be a reliable identifier;

Point features or minutiae. Point features or minutiae are similar to fingerprint minutiae and classify, among other features, ridges, ridge endings, bifurcations and dots.







Figure 2: Hand's Lines

V. Bottom-Up Feature-Based Methods 

The human hand is the source of a number of unique physiological characteristics. The main technologies for hand recognition fall into three categories: palmprint technologies, which measure the unique pattern of prints from the palm of the hand, similar to a fingerprint; hand geometry measurements, which measure the shape and size of all or part of the human hand or fingers; and hand vein patterns, which measure the distinct vascular patterns of the human hand, including the hand dorsum veins and the palm veins.

5.1. Palmprint features 

Palmprint features are made up of principal lines, delta points, minutiae, wrinkles, singular points, texture, etc. [32]. Several approaches are used to extract them. Among the most popular methods are those that consider the palmprint image as a textured image that is unique to each person. [9] applied a Gabor filter to palmprint images acquired with a digital camera, while [11] used wavelets, [16] the Fourier transform, [44] local texture energy and [41] directional line energy features. [DUT03] used a set of feature points along the major palm lines. However, in palmprints the creases and ridges often overlap and cross each other, so [3] put forward the extraction of local palmprint features by eliminating the creases, though this work is limited to the extraction of ridges, while [45] tried to approximate palmprint crease points by generating a local gray-level directional map.

Generally the steps used for palmprint-based biometrics are: first, align and localize the palm image by detecting and aligning to inter-finger anchor points (the index-middle and ring-pinky junctions); then extract a pixel region at a given resolution and downsample each of the five direct multispectral images per hand placement; then process it with an orthogonal line ordinal filter to generate level-I palmprint features; next, perform round-robin, single-sample matching of palm features, for example by the maximum Hamming match over multiple translations [21]; finally, if the palmprint is used in a multibiometric system, fuse the palmprint by normalizing the match scores to the same range and taking the product of the individual match scores.
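The matching step of this chain can be illustrated as follows. The sketch assumes that the ordinal filtering has already produced a binary feature map (one bit per location) and only shows Hamming comparison over a few small translations; the translation range and the use of a similarity score are illustrative choices, not details taken from [21].

```python
import numpy as np

# Sketch of ordinal-code palmprint matching by Hamming comparison over
# several small translations. The ordinal filter output is assumed to be
# a binary feature map; the translation range is an illustrative choice.

def hamming_similarity(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of identical bits between two binary feature maps."""
    return 1.0 - np.mean(code_a ^ code_b)

def match_over_translations(code_a, code_b, max_shift=2):
    """Best similarity of code_b against code_a over small x/y shifts."""
    best = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(code_b, dy, axis=0), dx, axis=1)
            best = max(best, hamming_similarity(code_a, shifted))
    return best

# Example: two random "level-I palmprint" bit maps of the same size.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)
b = rng.integers(0, 2, size=(32, 32), dtype=np.uint8)
print(match_over_translations(a, b))
```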

5.2. Hand geometry features 

Hand geometry features are based on the area/size of the palm and the length and width of the fingers. Most works on hand biometrics are based on geometric features [36] [39]; [21] used geometric features and implicit finger polynomial invariants, while [SAN00] uses pegs to constrain the rotation and translation of the hand.



5.2.1. Contact hand biometric 

Most of the systems proposed and/or used are based on research restricted to fairly old patents and commercial products [26]. In these systems the user must press his or her hand on the sensor surface, placing the fingers correctly against the guide pegs. From that process it is possible to extract features such as the length, width and height of the fingers, the thickness of the hand and the aspect ratio of fingers and palm, which makes it possible to build a small template. Some works based on the systems described above focused on accuracy: [SAN00] [29] proposed an original and richer geometric feature set and examined the use of multiple templates per person, using Gaussian Mixture Models for the modelling. [8] suggested using the whole contour silhouette of the hand directly for matching.

However, several studies have shown that peg-based alignment is not very efficient and can in some cases be a source of failure [SAN00] [26]. More recent studies have therefore concentrated on a more suitable design of a peg-free system [23] [39] [12] [2] [18] [42]. Extraction of the hand from the background is the first processing step; the hand is then segmented into fingers and palm to finally obtain the geometric features [39] [12] and the contours related to each of them [18] [42].
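As an illustration of this peg-free processing chain (background extraction, contour segmentation, geometric measurements), the sketch below uses OpenCV 4.x contour utilities. The Otsu binarization, the choice of the largest contour as the hand and the particular measurements are assumptions made for the example, not the methods of the cited works.

```python
import cv2
import numpy as np

# Sketch of a peg-free geometric feature extraction chain:
# 1) separate the hand from the background, 2) take its contour,
# 3) derive a few simple geometric measurements.
# Thresholding choice and the selected measurements are illustrative only.

def hand_geometry_features(gray_image: np.ndarray) -> np.ndarray:
    # gray_image is assumed to be an 8-bit grayscale image.
    # 1) Background extraction by automatic (Otsu) thresholding.
    _, mask = cv2.threshold(gray_image, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2) Keep the largest contour, assumed to be the hand silhouette.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)

    # 3) Simple geometric features: area, perimeter, bounding-box
    #    width/height and aspect ratio of the hand region.
    area = cv2.contourArea(hand)
    perimeter = cv2.arcLength(hand, True)
    x, y, w, h = cv2.boundingRect(hand)
    return np.array([area, perimeter, w, h, w / float(h)])
```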

5.2.2. Contactless hand biometric 

A new approach, contactless hand biometrics, has recently been used in many works. [20] focused on peg-free hand recognition, based on an EIH-inspired method for robustness against noise.

5.3. Hand vein 

To provide fast, accurate and robust personal identification, some authors [34] proposed to use the hand veins as the identification feature. [17] gives an overview of hand-vein applications. Current products based on vein identification allow single-person authentication in less than a second.

5.4. Palmprint & hand geometry features 

To mitigate the problems of each of the previous techniques, some authors proposed to combine palmprint and hand geometry features. Some propose to use two different sensors, one for each part. In [13], the palmprint and the hand shape are extracted from one sensor while the fingerprint is extracted from another sensor. However, the most interesting approach is to use, as in other bimodal biometric systems, a single sensor. The most appropriate solution in this situation is to use a digital camera to obtain a single high-resolution image to process. This is what [12] proposed and used; they combined both kinds of features with fingerprint information, after examining them and using a simple image acquisition setup.

VI. Data capture 

We can count three techniques for capturing the hand:

Off-line: palm prints are inked on paper and afterwards scanned by the palmprint system. In the past, researchers used offline palmprint images in their work and obtained interesting results [DUT01] [44] [SHI01].

On-line: palm prints are scanned directly, as in [44], which surveys the use of texture to represent low-resolution palmprint images for online personal identification.

Real-time: palm prints are captured and processed in real time.

6.1. Resolution quality 

Both low and high resolutions are tied to particular features, and the choice depends on the application in which the system is used.
6.1.1. Low resolution 

Palmprint features are composed of principal lines, wrinkles, minutiae, delta points, etc. Features like principal lines and wrinkles can be extracted from a low-resolution image of less than 100 dpi [32][47].

6.1.2. High resolution 

For features such as minutiae points, ridges and singular points, a high image resolution of at least 400 dpi (dots per inch) is required for good extraction [SHI01].

VII. Hand Biometric identification/authentification 

7.1. Detection 

First, for recognition, we must extract the hand shape from the background [28], as well as motion, to obtain hand features [SHI01]. Many of the works used are based on hand gesture extraction [31], because both use motion information from the extracted hand area, sometimes against a complex background image, for contactless hand biometrics.
Some techniques used are:

Background subtraction: used mainly for multiple-object tracking, such as humans (detection in a meeting room) and faces, and also for hand detection [22].

Skin color: human skin color [10] has been exploited and established to be an efficient feature in many applications, from face to hand detection, applied in different color spaces (RGB, HSV and CIE LUV). It integrates strong spoof detection and acquisition.

[21] uses the length and the width of the fingers. When extracting the hand and localizing the hand extremities, the fingertips and the valleys, the main problems met are artifacts and an unsmoothed contour [43].

In some frameworks it is possible to detect a hand and its corresponding shape efficiently and robustly without constraints on either the user or the environment. This has long been an area of interest due to its obvious uses in areas such as sign and gesture recognition, to name but two. Boosting is a general method that can be used for improving the accuracy of a given learning algorithm [30].
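A minimal sketch of the skin-color technique is given below: it thresholds an image in the HSV color space and cleans the mask morphologically. The numeric bounds are rule-of-thumb values assumed only for illustration; the cited works use their own skin models and color spaces.

```python
import cv2
import numpy as np

# Sketch of skin-color based hand detection in the HSV color space.
# The HSV bounds are rule-of-thumb values assumed for illustration;
# the cited works use their own models and color spaces.

def skin_mask(bgr_image: np.ndarray) -> np.ndarray:
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # assumed lower bound
    upper = np.array([25, 255, 255], dtype=np.uint8)  # assumed upper bound
    mask = cv2.inRange(hsv, lower, upper)
    # Light morphological clean-up to smooth the unsmoothed contour.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```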

7.2. Features extraction 

7.2.1. Hand geometry 

Hand shape provides integrated acquisition and reduced computational requirements. Several patented apparatus based on hand geometry have been issued [19] [7] [33].

7.2.2. Palmprint 

A palmprint pattern is made up of palm lines: principal lines and creases. Line feature matching is known to be robust and to give high accuracy in palmprint verification [32] [47].
Unfortunately, it is difficult to obtain a high identification rate by means of principal lines alone, because of their similarity among different people. The texture representation for coarse-level palmprint classification offers a successful technique; [44] surveys the use of texture to represent low-resolution palmprint images for online personal identification.

We found in [35] that, according to the features used for palmprint recognition, the various palmprint identification techniques can be divided into three classes: structural feature based, appearance based and texture based. For [27], the best palmprint matching approach in terms of authentication accuracy is that of [35]. This method is based on the comparison of two line-like image areas and the generation of a one-bit feature code at each image location. The success of this method is due to its stability even when the image intensities vary; it was implemented and successfully tested for palmprint matching in [27].

When we say palmprint, we speak mainly of the major features and the ridges. They reduce the need to manipulate the hand or pre-process the skin, and they offer integrated acquisition. Sometimes fingerprints are used because they offer robust acquisition under adverse conditions.




7.3. Motion 

To understand better how the articulation of the hand functions, studies were done to analyze and synthesize the 3D movement of the hand [4], and these were later extended to other biometrics.

7.4. Translation/rotation 

In use, many problems can be met, such as the position of the hand in the image, i.e. the way the hand is presented to the sensor, mainly in a contactless hand identification situation. [13] employed unconstrained peg-free imaging, relying on the efficiency of the algorithm to achieve illumination-, translation- and rotation-invariant features. The acquired images were binarized and used for feature extraction; the threshold was automatically calculated, by Otsu's approach, once for each acquisition setup.
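Otsu's approach selects the binarization threshold automatically by maximizing the between-class variance of the gray-level histogram. The sketch below is a plain NumPy reimplementation for illustration, not the code used in [13].

```python
import numpy as np

# Sketch of Otsu's automatic threshold selection: pick the gray level
# that maximizes the between-class variance of the image histogram.

def otsu_threshold(gray_image: np.ndarray) -> int:
    hist = np.bincount(gray_image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Example binarization: binary = (image >= otsu_threshold(image)).astype(np.uint8)
```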

7.5.Verification 

Some works are based on simple classifiers such as the Mahalanobis distance [21] or the mean average distance of contours [8]. [5] applied morphological and Sobel edge features to characterize palmprints and used a neural network classifier for their verification. This work also showed the utility of inkless palmprint images acquired from a digital scanner instead of the classical acquisition systems using CCD-based digital cameras [9].
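For reference, a minimal sketch of verification with the Mahalanobis distance is shown below; the per-user covariance estimate and the acceptance threshold are assumptions made for the example.

```python
import numpy as np

# Sketch of verification with the Mahalanobis distance: the claimed
# user's enrolment samples give a mean vector and covariance matrix,
# and a probe is accepted if its distance to the mean is small enough.

def mahalanobis(probe, mean, cov_inv):
    d = probe - mean
    return float(np.sqrt(d @ cov_inv @ d))

def verify(probe, enrolment_samples, threshold=3.0):  # assumed threshold
    mean = enrolment_samples.mean(axis=0)
    cov = np.cov(enrolment_samples, rowvar=False)
    cov_inv = np.linalg.pinv(cov)   # pseudo-inverse for numerical robustness
    return mahalanobis(probe, mean, cov_inv) < threshold
```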

7.6. Virtual interface 

Another main approach in the literature involves reconstructing the 3D surface of the hand. [40] exploited a range sensor to rebuild the dorsal part of the hand; they used local shape index values of the fingers. Sometimes a virtual interface is used to model the hand movement [38].

[14] built an exact hand shape using splines, and hand state recovery could be achieved by minimizing the difference between the silhouettes.

The fingerprint synthesis technique can also be applied to synthetic palmprint generation.
7.8. Reconstruction 

The estimation of hand pose from visual cues is a key problem in the development of intuitive, non-intrusive human-computer interfaces. One solution is to recover the 3D hand pose from a monocular color sequence, using concepts from stochastic visual segmentation, computer graphics and non-linear supervised learning. [24] contributed an automatic system that tracks the hand and estimates its 3D configuration in every frame [ATH01]; it does not impose any restrictions on the hand shape, does not require manual initialization, and can easily recover from estimation errors. It is possible to approach this problem using a combination of vision and statistical learning tools.

VIII. EXISTING Method 

Different hand biometric (measurement) techniques need different resources from the operating system to enable biometric authentication on the technical basis of measuring a biological characteristic. The next table gives a tabular overview of the different systems and of the features they use. For each system the following characteristics are considered: the number of people, the number of samples per person, whether pegs are used, the number of templates, the features, the similarity measure, the performance and the image resolution.



| System | No. of people | Samples per person | Pegs | No. of templates | Feature(s) | Similarity measure | Performance | Resolution |
|---|---|---|---|---|---|---|---|---|
| Zhang [46] | 500 | 6 | No | 2 | Joint palmprint and palm vein verification | Dynamic weight sum | EER 0.0212% and 0.0158% | 352 x 288 |
| Ladoux [15] | 24 | N/N | N/N | N/N | Palm vein | SIFT | EER 0% | 232 x 280 |
| Heenaye [6] | 200 | N/N | N/D | N/D | Dorsal hand vein pattern | Cholesky decomposition and Lanczos algorithm | FAR 0%, FRR 0% | 320 x 240 |
| Shahin [SAH07] | 50 | 10 | N/D | 2 | Dorsal hand vein pattern | Maximum correlation percentage | FAR 0.02% and FRR 3.00% | N/D |
| Uhl [37] | 25 | 25 | No | N/D | Eigenfingers and minutiae features | Parallel versus serial classifier combination | 97.6% RR at 0.1% FAR | 500 dpi |
| Zhang [ZHA09] | 120 | 48 | No | 4 | Finger-knuckle-print | Angular distance | FRR 0.01% and FAR 96.83% | N/D |
| Oden [ODE03] | 27 | 10 | No | 270 | Geometric features and implicit polynomial invariants of fingers | Mahalanobis distance | N/D | N/D |


IX. Conclusion 

In this paper, we considered the state of the art of hand biometrics. The hand can be fused with other biometrics such as the face, the fingerprint and many others [25]. The risk that a disgruntled employee, a customer or a person with criminal intentions could obtain the credentials of an active employee and thus gain unauthorized access is a security risk that hand biometric scanners effectively exclude. One of the most important indirect problems of hand biometrics is hand geometry imitation. In addition, if the person has arthritis, long fingernails, is wearing hand cream or has circulation problems, the sensor will not produce a good reading. The experimental results provide the basis for the further development of a fully automated hand-based security system with high performance in terms of effectiveness, accuracy, robustness and efficiency. Individual mobility does not have a price; hence, hand biometric technologies have to be implemented whenever and wherever possible.

References 

[1] Y. Bulatov, S. Jambawalikar, P. Kumar, and S. Sethia. Hand recognition using geometric classifiers. 
ICBA'04, Hong Kong, China, pages 753-759, July 2004. 

[2] J. Funada, N. Ohta, M. Mizoguchi, T. Temma, K. Nakanishi, A. Murai, T. Sugiuchi, T. Wakabayashi, and Y. Yamada, "Feature extraction method for palmprint considering elimination of creases," Proc. 14th Intl. Conf. Pattern Recognition, vol. 2, pp. 1849-1854, Aug. 1998.

[3] G. Gibert, G. Bailly, D. Beautemps, F. Elisei, and R. Brun, "Analysis and synthesis of the 3D movements of 
the head, face and hand of a speaker using cued speech," Journal of Acoustical Society of America, 
vol. 118, pp. 1144-1153, 2005. 

[4] C. -C. Han, H.-L. Cheng, C. -L.Lin and K.-C. Fan, "Personal authentication using palmprint features," 
Pattern Recognition, vol. 36, pp. 371-381, 2003. 

[5] "Feature Extraction of Dorsal Hand Vein Pattern using a fast modified PCA algorithm based on Cholesky 
decomposition and Lanczos technique", MaleikaHeenaye- Mamode Khan , NaushadMamode Khan and 
Raja K.Subramanian, World Academy of Science, Engineering and Technology 61 2010 

[6] I. H. Jacoby, A. J. Giordano, and W. H. Fioretti, "Personal identification apparatus," U. S. Patent No. 
3648240, 1972. 

[7] A. K. Jain and N. Duta,"Deformable matching of hand shapes for verification," presented at the Int. Conf. 
Image Processing, Oct. 1999. 

[8] W. K. Kong and D. Zhang, "Palmprint texture analysis based on low-resolution images for personal 
authentication," Proc. ICPR-2002, Quebec City (Canada). 

[9] Jure Kovac, Peter Peer, Franc Solina (2003), "Human Skin Colour Clustering for Face Detection.", 
International Conference on Computer as a Tool, pp.144- 148. 

[10] A. Kumar and H. C. Shen, "Recognition of palmprints using wavelet-based features," Proc. Intl. Conf. Sys., 
Cybern., SCI-2002, Orlando, Florida, Jul. 2002. 




[11] Y. A. Kumar, D. C.M.Wong, H. C. Shen, and A. K. Jain, "Personal verification using palmprint and hand 
geometry biometric", in Proc. 4th Int. Conf. Audio Video-Based Biometric Person Authentication, 
Guildford, U.K., Jun. 9-C11, 2003, pp. 668-C678 

[12] A. Kumar and D. Zhang, "Personal recognition using shape and texture," IEEE Trans. Image Process, vol. 
15,no. 8, pp 2454- 2461, Aug. 2006. 

[13] James J. Kuch and Thomas S. Huang. Vision-based hand modeling and tracking for virtual teleconferencing and telecollaboration. In Proc. of IEEE Int'l Conf. on Computer Vision, pages 666-671, Cambridge, MA, June 1995.

[14] Pierre-Olivier Ladoux, Christophe Rosenberger, Bernadette Dorizzi, "Palm Vein Verification System based 
on SIFT matching", Advances in Biometrics : Third International Conference, ICB 2009, Alghero, 
Italy, June 2-5, 2009 (2009) 1290-1298", DOI : 10.1007/978-3-642-01793-3_130 

[15] W. Li, D. Zhang, and Z. Xu, "Palmprint identification by Fourier transform," Int. J. Patt. Recognit. Art. 
Intell., vol. 16, no. 4, pp. 417-432, 2002. 

[16] S. Malki, S., Y. Fuqiang, and L. Spaanenburg, "Vein Feature Extraction using DTCNNs", Proceedings 10th 
IEEE Workshop on CNNA and their Applications (Istanbul, August 2006) pp. 307 - 312. 

[17] Y. L. Ma, F. Pollick, , and W. Hewitt. Using b-spline curves for hand recognition. Proc. of the 17th 
International Conference on Pattern Recognition (ICPR'04), Vol. 3:274-277, Aug. 2004. 

[18] R. P. Miller, "Finger dimension comparison identification system," U. S. Patent No. 3576538, 1971. 

[19] "Robust hand image processing for biometric application", JugurtaMontalvao, Lucas Molina, JanioCanuto, 
Pattern Anal Applic (2010) 13:397-407, DOI 10.1007/s 10044-0 10-0 185-7 

[20] Oden, A. Ercil, and B. Buke, "Combining implicit polynomials and geometric features for hand 
recognition," Pattern Recognition Letters, vol. 24, pp. 2145-2152, 2003. 

[21] S. Ribaric, D. Ribaric, and N. Pavesic. Multimodal biometric user-identification system for network-based 
applications. IEE Proceedings on Vision, Image and Signal Processing, Volume 150, Issue 6:409-416, 
15 Dec. 2003. 

[22] Romer Rosales, VassilisAthitsos, Leonid Sigal, and Stan Sclaroff, "3D Hand Pose Reconstruction Using 
Specialized Mappings", Boston University Computer Science Tech. Report No. 2000-22,Dec. 2000 
(revised Apr. 2001), To Appear in Proc. IEEE International Conf. on Computer Vision (ICCV). 
Canada. Jul. 2001. 

[23] "Information Fusion in Biometrics", Arun Ross and Anil Jain, in Pattern Recognition Letters, Vol. 24, 
Issue 13, pp. 2115-2125, September, 2003 

[24] A. Jain, A. Ross, and S. Pankanti. A prototype hand geometry-based verification system. Proc. 2nd Int. 
Conf. on Audio- and video-based personal authentication (AVBPA), Washington, USA, pages 166— 
171, March 1999. 

[25] "A MULTISPECTRAL WHOLE-HAND BIOMETRIC AUTHENTICATION SYSTEM", Robert K Rowe, 
UmutUludag, MeltemDemirkus, SulanParthasaradhi, Anil K Jain, IEEE 2007 Biometrics Symposium 

[26] H.Sagawa,M.Takeuchi, "A Method for Recognizing a Sequence of Sign language Words Represented in 
Japanese Sign Language Sentence", Face and Gesture, pp. 434-439,2000. 

[27] R. Sanchez-Reillo. Hand geometry pattern recognition through gaussian mixture modelling. 15th 
International Conference on Pattern Recognition (ICPR'00), Volume 2:937-940, 2000. 

[28] Jae-Ho Shin, Jong-Shill Lee, Se-KeeKil, Dong-Fan Shen, Je-Goon Ryu, Eung-Hyuk Lee, Hong-Ki Min, 
Seung-Hong Hong, "Hand Region Extraction and Gesture Recognition using entropy analysis", 
IJCSNS International Journal of Computer Science and Network Security, VOL.6 No.2A, PP. 216-223, 
February 2006 

[29] W. Shu and D. Zhang, "Automated personal identification by palmprint," Opt. Eng., vol. 37, no. 8, pp. 
2359-2362, Aug. 1998. 

[30] D. P. Sidlauskas, "3D hand profile identification apparatus," U. S. Patent No . 4736203, 1988 

[31] S. Malki, L. Spaanenburg: Hand Veins Feature Extraction using DT-CNNs Proceedings SPIE 3rd Int. 
Symposium on Microtechnologies for the New Millennium, Maspalomas, Vol. 6590, 2007-05. 




[32] Z. Sun, T. Tan, Y. Wang, and S.Z. Li, "Ordinal palmprint representation for personal identification", Proc. IEEE Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. 279-284, 2005.

[33] Travieso, C.M.; Alonso, J.B.; Ferrer, M.A.; "Automatic biometric identification system by hand geometry", IEEE 37th Annual 2003 International Carnahan Conference, pp. 281-284, 14-16 Oct. 2003.

[34] Andreas Uhl and Peter Wild, "Parallel Versus Serial Classifier Combination for Multibiometric Hand- 
based Identification", In M. Tistarelli, M.S. Nixon, editors, Proceedings of the 3rd International 
Conference on Biometrics 2009 (ICB'09), pp. 950-959, LNCS, 5558, Springer Verlag, 2009 

[35] Vassilis Athitsos and Stan Sclaroff, "3D Hand Pose Estimation by Finding Appearance-Based Matches in a Large Database of Training Views". Technical Report BU-CS-TR-2001-021. A shorter version of this paper is published in the proceedings of the IEEE Workshop on Cues in Communication, 2001.

[36] Alexandra L.N. Wong and Pengcheng Shi, "Peg-Free Hand Geometry Recognition Using Hierarchical 
Geometry and Shape Matching", IAPR Workshop on Machine Vision Applications (MVA02), 2002. 

[37] D. L.Woodard and P. J. Flynn. Personal identification utilizing finger surface features. In CVPR, San 
Diego, CA, USA, 2005. 

[38] X. Wu, K. Wang, and D. Zhang, "Fuzzy directional energy element based palmprint identification," Proc. 
ICPR-2002, Quebec City (Canada). 

[39] W. Xiong, C. Xu, and S. H. Ong.Peg-free human hand shape analysis and recognition. Proc. of IEEE 
International Conference on Acoustics, Speech, and Signal Processing (ICASSP '05), Volume 2:77-80, 
March 18-23 2005. 

[40] Yoruk, E. Konukoglu, E. Sankur, B. Darbon, J., "Shape-based hand recognition", IEEE transactions on 
image processing, Vol. 15, Issue 7, Page 1803-1815, 2006 

[41] J. You, W. Li, and D. Zhang, "Hierarchical palmprint identification via multiple feature extraction," Pattern 
Recognition., vol. 35, pp. 847-859, 2002. 

[42] J. Chen, C. Zhang, and G. Rong, "Palmprint recognition using crease," Proc. Intl. Conf. Image Process., pp. 
234-237, Oct. 2001. 

[43] Zhi Liu, Yilong Yin, Hongjun Wang, Shangling Song, Qingli Li, "Finger vein recognition with manifold 
learning", Journal of Network and Computer Applications 33 (2010) 275-282 

[44] D. Zhang and W. Shu, "Two Novel Characteristics in Palmprint Verification: Datum Point Invariance and Line Feature Matching," Pattern Recognition, vol. 32, no. 4, pp. 691-702, 1999.

Authors 

Sarah BENZIANE is an assistant professor in computer science; she obtained her magister in electronics on mobile robotics and holds a basic degree in computer science engineering. She is now working on biometric systems processing in the SIMPA laboratory at the University of Science and Technology of Oran Mohamed Boudiaf (Algeria). She teaches at the University of Oran at the Maintenance and Industrial Safety Institute. Her current research interests are in the areas of artificial intelligence and image processing, mobile robotics, neural networks, biometrics, neuro-computing, GIS and system engineering.

Abdelkader Benyettou received the engineering degree in 1982 from the Institute of Telecommunications of Oran and the MSc degree in 1986 from the University of Sciences and Technology of Oran-USTO, Algeria. In 1987, he joined the Computer Sciences Research Center of Nancy, France, where he worked until 1991 on Arabic speech recognition by expert systems (ARABEX), and he received the PhD in electrical engineering in 1993 from the USTO Oran University. From 1988 through 1990, he was an assistant professor in the Department of Computer Sciences at Metz University and Nancy-I University. He has been a professor at USTO Oran University since 2003, and he has been research director of the Signal-Speech-Image SIMPA Laboratory, Department of Computer Sciences, Faculty of Sciences, USTO Oran, since 2002. His current research interests are in the areas of speech and image processing, automatic speech recognition, neural networks, artificial immune systems, genetic algorithms, neuro-computing, machine learning, neuro-fuzzy logic, handwriting recognition, electronic/electrical engineering, and signal and system engineering.








Using Dynamic Dual Keys Encryption Algorithm as 
Partial Encryption for a Real-Time Digital Video 

Abdul Monem S. Rahma 1 and Basima Z. Yacob 2
1 Computer Science Department, University of Technology, Baghdad, Iraq
2 Computer Science Department, University of Duhok, Duhok, Kurdistan, Iraq



Abstract 

Advances in digital video transmission have increased in the past few years, and security and privacy issues of the transmitted data have become an important concern in multimedia technology. A digital video stream is quite different from traditional textual data because interframe dependencies exist in digital video. Special digital video encryption algorithms are required because of its special characteristics, such as its coding structure, the large amount of data and the real-time constraints. This paper presents a real-time partial encryption technique for digital video that depends on the Dynamic Dual Key Encryption Algorithm based on joint Galois fields, which is fast enough to meet the real-time requirements with a high level of security. In this technique the I-frame (intra-frame) of the digital video scene is extracted and the color picture is decomposed into its three color channels, the luma channel (Y) and the two chrominance channels Cb and Cr (noting that the frames of the digital video are in the YCbCr color system), and the Dynamic Dual Key Encryption Algorithm based on joint Galois fields is applied to the Y channel. The encryption technique achieves the best timing results, and it provides a high level of security through its great resistance against brute-force attacks.

KEYWORDS: Digital video encryption, partial encryption for digital video, digital video encryption in real time.

I. Introduction 

In today's digital world, the security of digital images and videos becomes more and more important, since communication of digital products over networks occurs more and more frequently. In addition, special and reliable security in the storage and transmission of digital images/videos is needed in many digital applications, such as pay-TV, broadcasting, confidential video conferencing and medical imaging systems. Normal data, such as program code or text, has much less redundancy in its structure than video. These factors make providing secure digital video a challenge. Various encryption algorithms have been proposed in recent years as possible solutions for the protection of video data. The large volume of video data makes encryption with traditional algorithms difficult, and the encryption often needs to be done in real time. The naive approach to video encryption is to treat the video data as text and encrypt it using standard encryption algorithms like AES (Advanced Encryption Standard) or DES (Data Encryption Standard). The basic problem with these algorithms is their high encryption time, which makes them unsuitable for real-time applications like pay-TV, pay-per-view and Video on Demand (VOD). A unique characteristic of video data is that, even though the information rate is very high, the information value is very low.

This paper presents an efficient partial encryption technique, based on the Dynamic Dual Key Encryption algorithm over joint Galois fields, for real-time video transmission.




In the Dynamic Dual Key Encryption algorithm based on joint Galois fields, the plaintext is treated as a stream of bits and two keys are used: the first key (the control key) determines the length of the bit block, and the second one is used for encryption according to an equation that uses addition and multiplication in the Galois field GF(2^n). Each block of (3, 4, 5 or 6) bits is interpreted as a finite field element, using a representation in which the bits b0b1b2, b0b1b2b3, b0b1b2b3b4 or b0b1b2b3b4b5 represent the corresponding polynomial. This algorithm was introduced in detail in [1].
We apply the encryption algorithm to a part of the I-frames of the video, exclusively on the Y channel of the YCbCr color vector. This technique is fast enough to meet real-time requirements; in addition, it provides a high level of security through its great resistance against brute-force attacks. To decrypt a 128-bit ciphertext, the attacker needs at least 1.86285884e+204 and at most 1.80032832e+399 key possibilities [1].
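The field arithmetic behind the encryption equation Y = X*A + B can be sketched as follows. The algorithm itself is defined in [1]; the irreducible polynomials chosen here for GF(2^3) to GF(2^6) are standard choices assumed only to make the illustration runnable.

```python
# Sketch of arithmetic in GF(2^n) for n = 3..6, as used by the dual-key
# encryption equation Y = X*A + B. The irreducible polynomials below are
# standard choices assumed for illustration; [1] defines the actual algorithm.

IRREDUCIBLE = {3: 0b1011, 4: 0b10011, 5: 0b100101, 6: 0b1000011}

def gf_add(x, y):
    """Addition in GF(2^n) is bitwise XOR (it is its own inverse)."""
    return x ^ y

def gf_mul(x, y, n):
    """Carry-less multiplication of x and y, reduced modulo the
    irreducible polynomial of GF(2^n)."""
    poly, result = IRREDUCIBLE[n], 0
    while y:
        if y & 1:
            result ^= x
        y >>= 1
        x <<= 1
        if x >> n:              # degree reached n: reduce
            x ^= poly
    return result

def gf_inv(a, n):
    """Multiplicative inverse by exhaustive search (the fields are tiny)."""
    for b in range(1, 1 << n):
        if gf_mul(a, b, n) == 1:
            return b
    raise ZeroDivisionError("0 has no inverse in GF(2^n)")

# Encryption of one block X with keys A, B: Y = X*A + B,
# decryption: X = (Y + B) * A^(-1), since addition is its own inverse.
n, X, A, B = 4, 0b1010, 0b0111, 0b0011
Y = gf_add(gf_mul(X, A, n), B)
assert gf_mul(gf_add(Y, B), gf_inv(A, n), n) == X
```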

The paper is organized as follows. Section 2 presents related work and Section 3 introduces digital video preliminaries. Sections 4 and 5 present the methodology of the partial encryption and decryption algorithms respectively. In Section 6 the suggested technique for partial video encryption is presented. Section 7 shows the experimental results for the proposed technique, and the proposed technique is discussed in Section 8. Finally, conclusions are provided in Section 9.

II. Related Work 

Many video encryption algorithms have been proposed which encrypt only selected parts of the data. Meyer and Gadegast [2] designed an encryption algorithm named SECMPEG which incorporates selective encryption and additional header information. In this scheme, selected parts of the video data, such as header information and I-blocks in P and B frames, are encrypted according to the security requirements. Qiao and Nahrstedt [3] proposed a special encryption algorithm, the Video Encryption Algorithm, in which one half of the bit stream is XORed with the other half; the other half is then encrypted by a standard encryption algorithm (DES). The speed of this algorithm is roughly twice that of the naive algorithm, but that is arguably still too much computation for high-quality real-time video applications with high bit rates [4]. Some of the other encryption algorithms are based on scrambling the DCT coefficients. Tang's [5] scrambling method is based on embedding the encryption into the MPEG compression process: the basic idea is to use a random permutation list, instead of the zig-zag order, to map the DCT coefficients of a block to a 1 x 64 vector. Zeng and Lei [6] extended Tang's permutation range from a block to a segment, with each segment consisting of several macroblocks; within each segment, DCT coefficients of the same frequency band are randomly shuffled within that band. Chen et al. [7] further modified this idea by extending the permutation range from a segment to a frame. Within a frame, DCT coefficients are divided into 64 groups according to their positions in 8 x 8 blocks, and then scrambled inside each group. Apart from shuffling the I frames, they also permuted the motion vectors of P and B frames. In order to meet real-time requirements, Shi et al. [8] proposed a light-weight encryption algorithm named the Video Encryption Algorithm (VEA). It uses a simple XOR of the sign bits of the DCT coefficients of an I frame with a secret m-bit binary key. The algorithm was extended as the Modified Video Encryption Algorithm (MVEA) [9], wherein the motion vectors of P and B frames are also encrypted along with the I frames.

III. Digital Video Preliminaries 

Digital video consists of a stream of images captured at regular time intervals, where a digital image is a discrete two-dimensional function f(x, y) which has been quantized over its domain and range [10]. Without loss of generality, it will be assumed that the image is rectangular, consisting of Y rows and X columns; the resolution of such an image is written as X x Y. Each distinct coordinate in an image is called a pixel, and each color pixel is a vector of color components in a color space.
Color spaces provide a standard method of defining and representing colors. Each color space is optimized for a well-defined application area [11]. The most popular color models are RGB (used in computer graphics) and YCbCr (used in video systems). Processing an image in the RGB color






space, with a set of RGB values for each pixel, is not the most efficient method. To speed up some processing steps, many broadcast, video and imaging standards use luminance and color-difference video signals, such as YCbCr; this color space is widely used for digital video. In this format, luminance information is stored as a single component (Y), and chrominance information is stored as two color-difference components (Cb and Cr).
In the RGB representation the channels are highly correlated, as all of them include a representation of brightness, which can be recognized when the R, G and B channels are shown separately. In the YCbCr representation, however, the luminance information of the (Y) component is greater than the chrominance information of the (Cb and Cr) components [12].
A color in the RGB color space is converted to the YCbCr color space using the following equation [13]:



$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} =
\begin{bmatrix} 0.257 & 0.504 & 0.098 \\ -0.148 & -0.291 & 0.439 \\ 0.439 & -0.368 & -0.071 \end{bmatrix}
\begin{bmatrix} R \\ G \\ B \end{bmatrix} +
\begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix} \qquad (1)$$

While the inverse conversion can be carried out using the following equation:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} =
\begin{bmatrix} 1.164 & 0.000 & 1.596 \\ 1.164 & -0.392 & -0.813 \\ 1.164 & 2.017 & 0.000 \end{bmatrix}
\begin{bmatrix} Y - 16 \\ C_b - 128 \\ C_r - 128 \end{bmatrix} \qquad (2)$$
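Equations (1) and (2) translate directly into code; the sketch below applies them per pixel with NumPy. The rounding and clipping to the 8-bit range are added for practicality and are not part of the equations themselves.

```python
import numpy as np

# Per-pixel implementation of equations (1) and (2).
FWD = np.array([[ 0.257,  0.504,  0.098],
                [-0.148, -0.291,  0.439],
                [ 0.439, -0.368, -0.071]])
INV = np.array([[1.164,  0.000,  1.596],
                [1.164, -0.392, -0.813],
                [1.164,  2.017,  0.000]])
OFFSET = np.array([16.0, 128.0, 128.0])

def rgb_to_ycbcr(rgb):
    """rgb: (..., 3) array of 8-bit R, G, B values -> Y, Cb, Cr (equation 1)."""
    ycbcr = rgb.astype(float) @ FWD.T + OFFSET
    return np.clip(np.round(ycbcr), 0, 255).astype(np.uint8)

def ycbcr_to_rgb(ycbcr):
    """Inverse conversion (equation 2)."""
    rgb = (ycbcr.astype(float) - OFFSET) @ INV.T
    return np.clip(np.round(rgb), 0, 255).astype(np.uint8)

# Example: pure white survives a round trip (up to rounding).
white = np.array([[255, 255, 255]], dtype=np.uint8)
print(ycbcr_to_rgb(rgb_to_ycbcr(white)))
```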



A digital video stream is organized as a hierarchy of layers called Sequence, Group of Pictures (GOP), Picture, Slice, Macroblock and Block. The sequence layer consists of a sequence of pictures organized into groups called GOPs. Each GOP is a series of I, P and B pictures [14]. I pictures are intraframe coded without any reference to other pictures; P pictures are predictively coded using a previous I or P picture; B pictures are bidirectionally interpolated from both the previous and following I and/or P pictures [7].

Each picture is segmented into slices, where a picture can contain one or more slices. Each slice contains a sequence of macroblocks, where a macroblock consists of four luminance blocks (Y) and two chrominance blocks (Cb and Cr). Each block is organized as a matrix of 8 x 8 pixel samples, with a macroblock covering a 16 x 16 pixel area. Figure 1 shows the structural hierarchy of digital video.




Figure 1: Structural hierarchy of digital video. A video sequence is divided into groups of pictures (GOPs), each GOP into pictures, each picture into slices, each slice into macroblocks and each macroblock into blocks, corresponding to the sequence layer, group of pictures layer, picture layer, slice layer, macroblock layer and block layer.

The properties of the I, P, and B frames can help further improve the encryption and decryption 
performance. Since B frames depend on I or P frames, and P frames depend on the closest preceding I 
frame, we need only encrypt the I frames while leaving the P and B frames untouched. Without I 
frames, one cannot decode P and B frames. 




IV. Partial Encryption Algorithm Of Video (Methodology) 

The encryption scheme can be described by the following steps: 



Input:  Y-channel as plaintext, KeyOne, KeyTwo
Output: Y-channel as ciphertext

No_K1   // number of bits from KeyOne used in one round
No_K2   // number of bits from KeyTwo used in one round

Step 0: Round = 0
        While Round < 2 do:
Step 1:   Read a portion of KeyOne (the control key).
Step 2:   Depending on the value of KeyOne's portion:
            - Select the block size (3, 4, 5 or 6 bits) from the plaintext.
            - Read the A and B keys from KeyTwo.
            - Perform the encryption equation:  Y = X * A + B
Step 3:   Compute the number of bits of KeyOne and KeyTwo used in one round:
            If Round = 0 then
              No_K1 = No_K1 + 2
              No_K2 = No_K2 + block size * 2
            End if
Step 4:   Repeat steps 1, 2 and 3 until the plaintext is finished.
          Round = Round + 1
        End while



The partial video encryption technique is based on the Dynamic Dual Key Encryption algorithm, which uses two keys: the first key, called the control key (KeyOne), determines the size of the bit block, and the second one (KeyTwo) is used for encryption. The size of the bit block is 3, 4, 5 or 6 bits [1].

V. Partial Decryption Algorithm Of Video (Methodology) 

The decryption technique can be described by the following steps:



Input  : Y-channel as ciphertext, KeyOne, KeyTwo
Output : Y-channel as plaintext

Step 1: Round = 0
        While Round < 2 do:
Step 2:   Apply a circular left shift of No_K1 bits to KeyOne and of No_K2 bits to KeyTwo.
Step 3:   Read a portion of KeyOne (the control key).
Step 4:   Depending on the value of KeyOne's portion, do the following:
            Select the block size (3, 4, 5 or 6 bits) from the ciphertext.
            Read the A and B keys from KeyTwo.
            Perform the following decryption equation:
                X = (Y + additive inverse(B)) * multiplicative inverse(A)
Step 5:   Repeat Steps 3 and 4 until the ciphertext is finished.
          Round = Round + 1
        End while




For decryption, the same steps as for encryption are applied, but with the inverse of the equation's operations [1].
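The inverse step itself can be sketched as follows, again using arithmetic modulo 2^size as a stand-in for the Galois field operations of [1] (the modular inverse via pow requires Python 3.8+):

```python
def decrypt_block(y: int, a: int, b: int, size: int) -> int:
    """Invert Y = X*A + B for one block: X = (Y - B) * A^{-1} (mod 2**size)."""
    modulus = 1 << size
    a_inv = pow(a, -1, modulus)          # multiplicative inverse of A (A must be odd here)
    return ((y - b) * a_inv) % modulus   # (Y + additive inverse of B) * multiplicative inverse of A

# Example: encrypt one 5-bit block and recover it
size, a, b, x = 5, 7, 19, 22
y = (x * a + b) % (1 << size)
assert decrypt_block(y, a, b, size) == x
```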

VI. Partial Video Encryption Technique 

The suggested technique consists of two parts. The first part starts by reading the video file (noting that the frames of the digital video are in the YCbCr color system) and converting it into frames; the output of this stage is frames in YCbCr color representation, and the last stage selects the I-frame. In the second part of the system, the Dynamic Dual Key Encryption Algorithm Based on joint Galois Fields is applied on the Y-channel of the I-frame, and then the video file is reconstructed before broadcasting. At the receiver side, the video file is converted into frames and the Dynamic Dual Key decryption algorithm is applied on the Y-channel of the I-frame. Figure 2 illustrates the steps of the proposed system.
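A structural sketch of this flow is shown below; is_i_frame, encrypt_y_channel and rebuild_video are hypothetical placeholders for the corresponding stages of the proposed system, not functions defined in the paper:

```python
def encrypt_scene(frames_ycbcr, key_one, key_two,
                  is_i_frame, encrypt_y_channel, rebuild_video):
    """Encrypt only the Y channel of the I-frame of each scene (sender side).

    frames_ycbcr      : list of HxWx3 NumPy-like arrays already in YCbCr order (Y, Cb, Cr)
    is_i_frame        : hypothetical predicate marking the I-frame of a scene
    encrypt_y_channel : hypothetical wrapper around the dual-key cipher
    rebuild_video     : hypothetical muxer that reassembles the stream
    """
    processed = []
    for frame in frames_ycbcr:
        if is_i_frame(frame):
            frame = frame.copy()
            frame[:, :, 0] = encrypt_y_channel(frame[:, :, 0], key_one, key_two)
        processed.append(frame)          # P and B frames pass through untouched
    return rebuild_video(processed)
```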



Figure (2): The steps of the partial encryption technique. Sender side: input the video file (in the YCbCr color system), convert the video file into frames, select the I-frame of the video scene, apply the encryption algorithm on the Y channel, and reconstruct the video file before broadcasting. Receiver side: convert the video file into frames, select the I-frame of the video scene, and apply the decryption algorithm on the Y channel.

VII. Experimental Results 

Advanced Encryption Standard (AES) is an algorithm of the first category that is used nowadays in communication and encrypted video broadcasting. It provides a much higher security level than DES and requires 3 to 10 times less computational power than 3-DES [15], and it has better performance than DES, 3DES, and RC2 [16]. Based on these facts, AES is compared with the proposed technique. The following tables present the experimental results for the speed of the partial video encryption based on the Dynamic Dual Key Encryption algorithm and on the AES algorithm.

Table 1: The encryption and decryption times for the AES algorithm (128-bit key) on I-frames.

Security Algorithm   I-Frame Name   Size of Frame (KB)   Encryption time (s)   Decryption time (s)
AES-Rijndael         Car                  60                     8                    12
                     Wedding            1180                   175                   260
                     xylophone           225                    28                    46



Table 2: The encryption and decryption times for the Dynamic Dual Key Encryption algorithm on I-frames.

Security Algorithm       I-Frame Name   Size of Frame (KB)   Encryption time (s)   Decryption time (s)
Dynamic Dual algorithm   Car                  60                    0.656                 1.282
                         Wedding            1180                   12.468                28.594
                         xylophone           225                    2.312                 5.438




From Tables 1 and 2, it can be observed that the Dynamic Dual Key encryption algorithm is approximately 13 times faster than AES encryption and 9 times faster than AES decryption.
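These factors follow directly from Tables 1 and 2, as the following quick check shows:

```python
# (encryption, decryption) times in seconds, taken from Tables 1 and 2
aes  = {"Car": (8, 12), "Wedding": (175, 260), "xylophone": (28, 46)}
dual = {"Car": (0.656, 1.282), "Wedding": (12.468, 28.594), "xylophone": (2.312, 5.438)}

for name in aes:
    enc_ratio = aes[name][0] / dual[name][0]
    dec_ratio = aes[name][1] / dual[name][1]
    print(f"{name}: encryption {enc_ratio:.1f}x, decryption {dec_ratio:.1f}x")
# Car ~12.2x / 9.4x, Wedding ~14.0x / 9.1x, xylophone ~12.1x / 8.5x
# i.e. roughly 13x for encryption and 9x for decryption on average
```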

The sample test video sequences include videos like Car, Wedding, and xylophone. Some of the test 
videos along with their frame numbers are shown in Figure 3, Figure 4 and Figure 5. 




Figure (3): The encryption results after applying partial encryption based on the Dynamic Dual Keys algorithm to the first frame of the "Car" and "xylophone" videos: a) original I-frame of the car video, b) car I-frame after encryption, c) original I-frame of the xylophone video, d) xylophone I-frame after encryption.




Figure (4): The effect of the partial encryption based on the Dynamic Dual Keys algorithm on the Car video frames used as the test object: (a) original car film after 4 seconds, (b) encrypted car film after 4 seconds, (c) original car film after 8 seconds, (d) encrypted car film after 8 seconds.











Figure (5): The effect of the partial encryption based on the Dynamic Dual Keys algorithm on the xylophone video frames used as the test object: (a) original xylophone film after 2 seconds, (b) encrypted xylophone film after 2 seconds, (c) original xylophone film after 5 seconds, (d) encrypted xylophone film after 5 seconds, (e) original xylophone film after 8 seconds, (f) encrypted xylophone film after 8 seconds.

Both the designed technique and the AES algorithm were implemented successfully using the Visual Basic 6 programming language on a Pentium IV processor (3.40 GHz) with 3 GB of RAM running Windows XP.

VIII. Discussion 

Comparing the speed of the partial digital video encryption technique that uses the AES algorithm with the technique that uses the Dynamic Dual Keys algorithm, it can be seen from Tables 1 and 2 that the technique using the Dynamic Dual Key Encryption algorithm is faster and gives the best results; it is approximately 13 times faster than AES encryption and 9 times faster than AES decryption. Because of the high security obtained by the Dynamic Dual Key Encryption algorithm, the proposed technique also offers high security.




The proposed technique makes no change to the overall digital video structure, so the present broadcasting infrastructure can still be used; the change is made only within a part of the complete structure.

IX. Conclusion 

In this paper, we have proposed a new partial digital video encryption technique. The proposed technique, which encrypts only the Y-channel of the I-frame of each digital video scene, reduces the encryption and decryption time while providing high security based on the Dynamic Dual Key Encryption algorithm, which uses a dynamic block cipher and dual keys. All these properties make the proposed technique suitable for real-time applications (RTA).

References 

[1] Abdul Monem S. Rahma and Basima Z. Yacob, "The Dynamic Dual Key Encryption Algorithm Based on joint Galois Fields", International Journal of Computer Science and Network Security, Vol. 11, No. 8, August 2011.

[2] J. Meyer and F. Gadegast, "Security Mechanisms for Multimedia Data with the Example MPEG-1 Video", 

Project Description of SECMPEG, Technical University of Berlin, Germany, May 1995. 

[3] L. Qiao and Klara Nahrstedt, "A New Algorithm for MPEG Video Encryption", In Proc. of First 

International Conference on Imaging Science System and Technology, pp 21-29, 1997. 

[4] Borko Furht and Darko Kirovski, "Multimedia Encryption Techniques, Multimedia Security Handbook," 

CRC Press LLC ,Dec. 2004 . 

[5]L.Tang, "Methods for Encrypting and Decrypting MPEG Video Data Efficiently", In Proc. of ACM 

Multimedia, Boston, pp 219-229, 1996. 

[6] W. Zeng and Sh. Lei, "Efficient Frequency Domain Selective Scrambling of Digital Video", In Proc. of the 

IEEE Transactions on Multimedia, pp 118-129, 2002. 

[7] Z. Chen, Z. Xiong, and L. Tang. "A novel scrambling scheme for digital video encryption". In Proc. of 

Pacific-Rim Symposium on Image and Video Technology (PSIVT), pp 997-1006, 2006. 

[8] C. Shi, S. Wang, and B. Bhargava,"MPEG Video Encryption in Real time Using Secret Key Cryptography", 

In Proc. of International Conference on Parallel and Distributed Processing Techniques and Applications, Las 

Vegas, NV ,1999. 

[9] C. Shi and B. Bhargava, "A Fast MPEG Video Encryption Algorithm", In Proc. of ACM Multimedia, 

Bristol,UK, pp 81-88, 1998. 

[10] Robert M. Gray and David L. Neuhoff, "Quantization", IEEE Transactions on Information Theory, Vol. 44, No. 6, October 1998.

[11] R. C. Gonzalez and R. E. Woods, "Digital Image Processing", Second Edition, Prentice Hall Inc., 2002.
[12] Iain E. G. Richardson. "H.264 and MPEG-4 Video Compression" The Robert Gordon University, 
Aberdeen, John Wiley & Sons Ltd, UK, 2003. 

[13] Li & Drew," Fundamentals of Multimedia ", Chapter 5, Prentice Hall 2003. 

[14] P. N. Tudor. "MPEG-2 video compression". In Electronics and Communication Engineering Journal, 

December - 1995. 

[15] J. Dray, "Report on the NIST Java AES Candidate Algorithm Analysis", NIST ,1999. 

[16] D. S. Abd Elminaam, H. M. Abdual Kader, and M. M. Hadhoud, "Evaluating The Performance of Symmetric Encryption Algorithms", International Journal of Network Security, Vol. 10, No. 3, pp. 216-222, May 2010.

Authors 

Abdul Monem Saleh Rahma was awarded his M.Sc. from Brunel University and his Ph.D. from Loughborough University of Technology, United Kingdom, in 1982 and 1985 respectively. He taught at the University of Baghdad, Department of Computer Science, and at the Military College of Engineering, Computer Engineering Department, from 1986 till 2003. He holds the position of Assistant Dean for Scientific Affairs and works as a professor at the University of Technology, Computer Science Department. He has published 82 papers in the field of computer science and supervised 24 Ph.D. and 57 M.Sc. students. His research interests include cryptography, computer security, biometrics, image processing, and computer




"nT 



Vol. 2, Issue 1, pp.10-18 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

graphics. He has attended and contributed to many international scientific conferences in Iraq and many other countries.



Basima Zrkqo Yacob received the B.Sc. degree in Computer Science from Mosul University, Iraq, in 1991, and the M.Sc. degree in Computer Science from the University of Duhok, Iraq, in 2005. Currently she is a Ph.D. student at the Faculty of Computer Science, Duhok University.







Design and Prototyping of a Miniaturized Sensor 
for Non-Invasive Monitoring of Oxygen Saturation 

in Blood 

Roberto Marani, Gennaro Gelao and Anna Gina Perri 
Electrical and Electronic Department, Polytechnic of Bari, via E. Orabona 4, Bari - Italy 



Abstract 

In this paper a new sensor for the non-invasive monitoring of the oxygen saturation in blood has been designed, realized and tested to obtain an ultra-small device with very high noise immunity. This goal has been reached by using a particular integrated circuit, the PSoC (Programmable System on Chip), which integrates two programmable arrays, one analogue and one digital, to obtain a device with very large capabilities. We have configured the PSoC and developed the electronic interfaces. The proposed design allows the acquisition of the continuous component of the signal, and the data elaboration is carried out in place using a local CPU, without needing to pass data to an external computer.

KEYWORDS: Oxyhaemoglobin Saturation, Spectrophotometric Method, Pulse Oximeters, Electronic Interfaces and Data Processing, Sensor Prototyping and Testing.

I. Introduction 

Non-invasive health monitoring is the main goal of modern electronic applications to medicine. In particular, among the most critical vital parameters is the oxygen saturation of oxyhaemoglobin (HbO2). Currently the standard procedure for monitoring gases in blood is to take arterial blood samples. This is time consuming for the nurses and stressful particularly for those patients with cardiac, respiratory or renal insufficiency, i.e. those requiring continuous monitoring.
Several invasive methods for continuous monitoring have been proposed, based on the use of catheter or optical fibre sensors, but they have many problems, such as inevitable pain for the patient, possible infections, long-term drift caused by the deposition of blood substances on the catheter, the need for hospitalization and, last but not least, the high cost.

In order to overcome these problems, there is an effort to develop other devices with better characteristics, which mainly allow non-invasive, continuous monitoring with good accuracy. Among these devices, the pulse oximeter, which senses the oxygen saturation in blood using non-invasive optical sensors, seems to be the best [1]. Although this device is typically used in hospitals, it still has some drawbacks that should be solved in order to make it available even for home purposes without the assistance of registered nurses. Among the required enhancements, it has to be cheap, small, user-friendly, accurate and noise immune.

In this paper we present a new pulse oximeter, which has been realized and tested at the Electronic Devices Laboratory (Electrical and Electronic Department) of the Polytechnic of Bari. The proposed sensor, designed to obtain a cheap device with reduced size and very high noise immunity, uses a single chip, the PSoC (Programmable System on Chip) [2], produced by Cypress MicroSystems, whose programmability, i.e. its capability to change both its hardware and software configuration, allows the signal acquisition and conditioning of the whole system to be done on a single chip.



TsT 



Vol. 2, Issue 1, pp. 19-26 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

In Section 2 we have described the front end of the proposed pulse oximeter, while in Section 3 the 
obtained results are analyzed and discussed. The conclusions and future scope are illustrated in 
Section 4. 

II. Front End of the Proposed Pulse Oximeter 

Pulse oximetry is a spectrophotometric method for the non-invasive measurement of the arterial oxygen saturation, SpO2, and pulse [3]. It is based on the different light-absorbing characteristics of oxyhaemoglobin (HbO2) and deoxyhemoglobin (Hb) at two different wavelengths, typically 660 nm (RED) and 940 nm (IR), and on the pulsatile nature of arterial blood flow [4]. Of course, the optical sensor measurements of HbO2 and Hb depend on the concentration of these molecules in blood. With pulse oximeters, a finger or earlobe probe is used. A red light-emitting diode (LED) and an infrared LED are located, as sources, on one side of the probe, and a couple of photodiodes, as receivers, are located on the other side.

The method relies on the difference in the absorption spectra of oxygenated and de-oxygenated hemoglobin. The ratio between these, as shown in [3], has a peak at approximately 660 nm, while at higher wavelengths the ratio is lower than one.

Conventionally the two wavelengths above are used since the absorption ratio is large and small, respectively, at those wavelengths. This minimizes the uncertainty of the SpO2 measurement. The measured absorption is then displayed as an estimate of the arterial oxygen saturation, SpO2. The sensor, applied to a finger or to an earlobe, can work on the transmitted light, realizing in this way a transmission pulse oximeter, or on the reflected light, as a reflectance pulse oximeter [5-9]. The equipment is electrically the same in both cases.
Fig. 1 shows the block scheme of the proposed pulse oximeter. 



Figure 1. Block scheme of the proposed pulse oximeter. GEN: pulse generator; CONV: I-V converter; AMD: differential amplifier; S&H: sample and hold; LPF: low-pass filter; BPF: band-pass filter; AMP: non-inverting amplifier; MUX: multiplexer; ADC: analog-to-digital converter; CPU: microprocessor; DRD: display driver; DPL: display. The figure distinguishes PSoC internal blocks from external blocks.

We have used the PSoC CY8C27443 produced by Cypress MicroSystems [2], a family of devices which allows the implementation of systems on a single chip containing both analogue and digital programmable blocks, thus allowing the synergic management of analogue and digital signals in a single programmable device and reducing in this way the number of integrated circuits on board. The LEDs are powered by a sequence of square pulses, as shown in Fig. 2, 0.2 ms long, at a frequency of 500 Hz and with a phase difference of 1 ms, obtained from an internal PSoC oscillator.
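Purely as a numerical illustration of this timing (0.2 ms pulses at 500 Hz with a 1 ms offset between the two LEDs), the two drive waveforms can be modelled as follows; this is a model of the signals in Fig. 2, not PSoC configuration code, and the 1 MHz time base is an arbitrary choice:

```python
import numpy as np

fs = 1_000_000                      # 1 MHz time base for the model (arbitrary choice)
t = np.arange(0, 0.004, 1 / fs)     # 4 ms window = two 2 ms periods of the 500 Hz drive

period, width, offset = 2e-3, 0.2e-3, 1e-3   # 500 Hz period, 0.2 ms pulse, 1 ms phase shift
red_drive = ((t % period) < width).astype(int)
ir_drive  = (((t - offset) % period) < width).astype(int)

# Each LED is on for 0.2 ms out of every 2 ms and the two pulses never overlap,
# which is what lets the sample-and-hold stages, synchronised with this generator,
# separate the red and infrared signals.
assert not np.any(red_drive & ir_drive)
```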









Figure 2. Control signal of the power supplied to the LEDs.

For each LED we have placed a photodiode on the other side of the finger to collect the transmitted 
light (Fig. 3). This layout allows us to have a larger collection efficiency. 






Figure 3. Sensor's scheme. 

The light signal measured by the photodiode can be divided into two components, A and B [3]. Component A is the light signal during a systole and is a function of the pulsations of oxygenated arterial blood, while component B is the light signal during a diastole, which has a constant intensity and is a function of the various tissues (i.e. skin pigment, muscle, fat, bone, and venous blood). The pulse oximeter divides the pulsatile absorption of component A by the background light absorption of component B, at the two different wavelengths (RED and IR), to obtain an absorption ratio R:



\[
R = \frac{A_R / B_R}{A_{IR} / B_{IR}} \qquad (1)
\]



The photodiode transforms the light signal into an electrical signal that is amplified and converted into digital information.

The current generated by the photodiode is the input of a current-voltage converter working in differential mode, followed by an INA105 amplifier (Fig. 4), used to obtain the signal in single-ended mode.

The resulting amplifier topology is then that of an instrumentation amplifier, but with inputs placed at different nodes, allowing in this way a high noise immunity at the input, since most of the noise is common-mode noise.

After the amplifier, the acquisition system splits into two identical channels, each of them obtained using a Sample & Hold synchronized with the pulse generator that feeds the LEDs. In this way, it is possible to distinguish between the signal corresponding to the red wavelength and the one corresponding to the infrared wavelength.







Figure 4. Converter and amplifier circuit configuration (A1, A2: OPA2111).

The oxygen partial pressure in blood has a power spectrum concentrated around 1 Hz, while our signal also contains high-frequency noise, coming from our sampling, and low-frequency noise, down to the continuous component, coming from the light background, the photodiode and the amplifier (1/f noise). For this reason we have applied a band-pass (BP) filter realized with two second-order stages, both in Sallen-Key configuration [10]. The first stage is a low-pass filter, the second a high-pass filter.
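The filter here is an analogue two-stage Sallen-Key network. Purely as an illustration of the same pass band (roughly 0.66-3.2 Hz, see Fig. 7), an equivalent digital band-pass filter could be sketched as follows; the sampling rate is an assumption, not a value from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                    # assumed sampling rate of the digitised photodiode signal (Hz)
low, high = 0.66, 3.2         # pass band reported for the analogue filter (Hz)

# Band-pass built from second-order low/high sections, analogous to the two Sallen-Key stages
b, a = butter(2, [low, high], btype="bandpass", fs=fs)

def bandpass(x):
    """Remove the DC/background component and the high-frequency sampling noise."""
    return filtfilt(b, a, x)

# Example: a 1 Hz 'pulse' component survives, while a slow 0.05 Hz drift is attenuated
t = np.arange(0, 20, 1 / fs)
signal = np.sin(2 * np.pi * 1.0 * t) + 2.0 * np.sin(2 * np.pi * 0.05 * t)
filtered = bandpass(signal)
```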

Subsequently the signal goes to a non-inverting amplifier, which isolates the filter from the Analog-to-Digital Converter (ADC) load and drives the ADC input.

Based on the red/infrared absorption ratio defined by Eqn. (1), an algorithm within the pulse oximeter allows SpO2 to be measured as a function of the magnitudes measured at the systolic and diastolic states on the two photoplethysmograms:



\[
SpO_2 = \frac{S_R / B_R}{S_{IR} / B_{IR}} \qquad (2)
\]



where S_R is the peak-to-peak red photodiode signal and B_R is the red photodiode continuous component, measured at the systolic and diastolic states respectively, and likewise for S_IR and B_IR.

Since we also need the continuous components, we have used the PSoC's internal components to create a low-pass filter with a cut-off frequency of 200 mHz.

To digitize the signal we used a 12-bit ADC available inside the PSoC and, since we had only one ADC, we had to multiplex the signals to the ADC input under software control.

The digitized signal is then passed to the PSoC's CPU, where both signals are normalized and, from these data, SpO2 is computed and shown on a 3-digit display.
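A minimal sketch of the ratio computation of Eqns. (1)-(2) on the two digitised channels is shown below. The final linear mapping from the ratio to a saturation percentage is only a placeholder for the empirical calibration discussed later; its coefficients are illustrative and are not the device's calibration:

```python
import numpy as np

def absorption_ratio(red, infrared):
    """R = (A_R / B_R) / (A_IR / B_IR): pulsatile (peak-to-peak) over continuous component."""
    a_r,  b_r  = np.ptp(red),      np.mean(red)
    a_ir, b_ir = np.ptp(infrared), np.mean(infrared)
    return (a_r / b_r) / (a_ir / b_ir)

def spo2_estimate(red, infrared, cal=(110.0, 25.0)):
    """Map R to a saturation percentage with a placeholder linear calibration SpO2 = c0 - c1*R."""
    r = absorption_ratio(red, infrared)
    c0, c1 = cal
    return c0 - c1 * r
```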

With regard to the LEDs, they have narrow-band emission spectra with peaks at 660 nm (R) and 940 nm (IR), as shown in Fig. 5.






Figure 5. Emission spectra of the two LEDs normalized at their maximum value (red LED: 550-700 nm; infrared LED: 900-1000 nm).

Furthermore, their emission is concentrated in a narrow cone in the forward direction. The LEDs are driven with a 50 mA current for the red light and 60 mA for the infrared light. The photodiode has a wide acceptance angle to maximize the light collection, so that it can efficiently collect all the light diffused inside the finger. Its spectral sensitivity has a peak at 850 nm, as shown in Fig. 6, and it is at least 60% (relative to the peak) in the band between 600 nm and 1000 nm, giving good coverage of the emission bands of the LEDs.



Figure 6. Relative spectral sensitivity of the photodiode (wavelength range 400-1100 nm).



III. Analysis of Results and Discussion 

With the proposed design, it has been possible to obtain a gain that is approximately unity between 0.66 Hz and 3.2 Hz, as shown in Fig. 7.






Figure 7. Output voltage of the band-pass filter versus frequency.

Fig. 8 shows the amplified signals coming from the sensors for the red and infrared light respectively, 
after the continuous component removal. 



Figure 8. Amplified photoplethysmograms of the red and infrared signals (V(red) and V(infrared)) after removal of the continuous component, plotted versus time.

Measuring SpO2 as a function of the magnitudes measured at the systolic and diastolic states on the two photoplethysmograms allows us to remove the dependence on the LED emissivity and on the photodiode sensitivity. However, relation (2) has to be empirically calibrated for the specific device [11].




Moreover, since our sensor is shielded on three sides, there is a low probability that ambient light reaches the photodiode and, therefore, influences the measurement. Finally, Fig. 9 shows the prototype realized and tested at the Electronic Devices Laboratory (Electrical and Electronic Department) of the Polytechnic of Bari.




Figure 9. The realized prototype: a double- sided printed circuit board. 

From this image it is clear that, even though the prototype is already small, it could be shrunk considerably further using SMD (Surface Mount Device) technology. The image also shows the display with the measured values; no action is required to obtain a measurement, since the device works continuously.

IV. Conclusions and Future Scope 

We have presented the design and realization of an electronic device for the non-invasive monitoring of the oxygen saturation in blood (a transmission pulse oximeter). The main goals of our design have been miniaturization, cheapness and good noise rejection. The key element to achieve these goals has been the PSoC, a system-on-chip family for mixed analogue and digital applications, programmable in both its analogue and digital parts, allowing the implementation of a whole acquisition chain, from the signal generator to the display driver, passing through the sensor's amplifier, ADC and CPU. Having a single programmable device for both the analogue and digital parts, it has been easy to reach our goals. Furthermore, this implementation of the pulse oximeter using the PSoC has required some innovation in the circuit compared to previous schemes. The whole acquisition chain has a new plan that allows the collection of the continuous component of the signal. Moreover, the whole data elaboration has been done in place using a local CPU, without needing to pass data to an external computer.
For further development of this system, we are planning to include a miniaturized electrocardiograph.

References 

[1] Mengelkoc L. J., Martin D. & Lawler J., (1994) "A review of the Principles of Pulse Oximetry and Accuracy of Pulse Oximeter Estimates during Exercise", Physical Therapy, Vol. 74, No 1, pp. 40-49.

[2] Datasheet PSoC CY8C27443, http://www.cypress.com/?mpn=CY8C27443-24PXI




[3] Duun S., Haahr R. G., Birkelund K., Raahauge P., Petersen P., Dam H., Noergaard L. & Thomsen E. 

V., (2007) " A Novel Ring Shaped Photodiode for Reflectance Pulse Oximetry in Wireless 
Applications", Proceedings of IEEE SENSORS Conference, Atlanta, Georgia, USA, pp. 596-599. 

[4] Mannheimer P. D.,. Casciani J. R, Fein M. E. & Nierlich S. L., (1997) "Wavelength selection for low- 

saturation pulse oximetry", IEEE Transactions on Biomedical Engineering, Vol. 44, No 3, pp. 148-158. 

[5] Konig V., Huch R. & Huch A., (1998) "Reflectance Pulse Oximetry - Principles and Obstetric 

Application in the Zurich System", Journal of Clinical Monitoring and Computing, Vol. 14, No 6, pp. 
403-412. 

[6] Goldman J. M., Petterson M. T., Kopotic R. J. & Barker S. J., (2000) "Masimo Signal Extraction Pulse 

Oximetry", Journal of Clinical Monitoring and Computing, Vol. 16, No 7, pp. 475-483. 

[7] Gisiger P. A., Palma J. P. & Eberhard P., (2001) "Oxicarbo, a single sensor for the non-invasive 

measurement of arterial oxygen saturation and C0 2 partial pressure at the ear lobe", Sensors and 
Actuators B: Chemical, Vol. 76, No 1, pp.527 -530. 

[8] Mendelson Y. & Kent J. C, (1989) "Variations in optical absorption spectra of adult and fetal 

haemoglobins and its effect on pulse oximetry", IEEE Transactions on Biomedical Engineering, Vol. 
36, No 8, pp. 844-849. 

[9] Fine I. & Weinreb A., (1995) "Multiple scattering effect in transmission pulse oximetry", Medical & 

Biological Engineering & Computing, Vol. 33, No 5, pp.709-712. 

[10] Sallen R. P. & Key E. L., (1955) "A Practical Method of Designing RC Active Filters", IRE 
Transactions on Circuit Theory, Vol. 2, No 1, pp. 74-85. 

[11] Webster J. G., (1997) Design of Pulse Oximeters, IOP Publishing, Bristol, UK. 



Authors 



Roberto Marani received the Master of Science degree (cum laude) in Electronic 
Engineering in 2008 from Polytechnic of Bari where he is currently pursuing the Ph.D. 
degree. He worked in the Electronic Device Laboratory of Bari Polytechnic for the design, 
realization and testing of nanometrical electronic systems, quantum devices and FET on 
carbon nanotube. Moreover he worked in the field of design, modeling and experimental 
characterization of devices and systems for biomedical applications. 

Currently he is involved in the development of novel numerical models to study the physical effects involved in the interaction of electromagnetic waves with periodic metallic nanostructures. Dr. Marani has published over 50 scientific papers.

Gennaro Gelao received the Laurea degree in Physics from University of Bari, Italy, in 
1993 and his Ph.D. degree in Physics in 1996 at CERN. He cooperates with the Electronic 
Device Laboratory of Polytechnic of Bari for the design, realization and testing of 
nanometrical electronic systems, quantum devices and CNTFETs. 
Dr. Gelao has published over 80 papers. 




Anna Gina Perri received the Laurea degree cum laude in Electrical Engineering from the 
University of Bari in 1977. In the same year she joined the Electrical and Electronic 
Department, Polytechnic of Bari, where she is Professor of Electronics from 2002. Her 
current research activities are in the area of numerical modelling and performance 
simulation techniques of electronic devices for the design of GaAs Integrated Circuits and 
in the characterization and design of optoelectronic devices on PBG. Moreover she works in 
the design, realization and testing of nanometrical electronic systems, quantum devices and 
FET on carbon nanotube. Prof. Perri is the Director of Electron Devices Laboratory of the 
Electronic Engineering Faculty of Bari Polytechnic. She is author of over 250 book 
chapters, journal articles and conference papers and serves as referee for many international journals. 







Effects of PGPR on Growth and Nutrients Uptake 

of Tomato 

Shahram Sharafzadeh 
Department of Agriculture, Firoozabad Branch, Islamic Azad University, Firoozabad, Iran 



Abstract 

Tomato is one of the most popular garden vegetables in the world. Tomatoes have high values of vitamins A and C and are naturally low in calories. The beneficial effect of inoculation with plant-growth-promoting rhizobacteria (PGPR) has been attributed to the production of plant growth regulators at the root interface, which stimulate root development and result in better absorption of water and nutrients from the soil. A greenhouse experiment was conducted to evaluate the effects of some PGPR on the growth and nutrient uptake of tomato (Lycopersicon esculentum Red Cherry) plants. Seven bacterial treatments were used (Pseudomonas, Azotobacter, Azosprillum, Pseudomonas + Azotobacter, Pseudomonas + Azosprillum, Azotobacter + Azosprillum and Pseudomonas + Azotobacter + Azosprillum), which were compared to a control. Plants were cut at the prebloom stage. The maximum level of shoot fresh weight was shown by the Azotobacter + Azosprillum, Pseudomonas + Azotobacter + Azosprillum and Azosprillum treatments, which significantly differed from the other treatments. The maximum level of root fresh weight was achieved by the Azotobacter + Azosprillum, Pseudomonas + Azotobacter + Azosprillum and Azotobacter treatments, which significantly differed from the other treatments. The maximum levels of shoot and root dry weight were achieved by the Azotobacter + Azosprillum and Pseudomonas + Azotobacter + Azosprillum treatments. The minimum levels of shoot and root dry weight were obtained with Pseudomonas + Azosprillum. The maximum root length was shown by Azotobacter + Azosprillum, which significantly differed from the other treatments. The highest amounts of N, P and K were achieved with the Pseudomonas + Azotobacter + Azosprillum treatment and the lowest amounts were shown by the Pseudomonas + Azotobacter treatment. The maximum levels of Ca and Mg were obtained with the Pseudomonas + Azotobacter and Pseudomonas + Azosprillum treatments, which significantly differed from the other treatments.

KEYWORDS: Pseudomonas, Azotobacter, Azosprillum, Lycopersicum esculentum

I. Introduction 

Plant growth-promoting rhizobacteria (PGPR) help plants through different mechanisms, for example (i) the production of secondary metabolites such as antibiotics, cyanide, and hormone-like substances; (ii) the production of siderophores; (iii) antagonism to soilborne root pathogens; and (iv) phosphate solubilization [1,2,3,4,5,6,7]. Organisms possessing one or more of these characteristics are of interest since they may influence plant growth. Improvement of phosphorus (P) nutrition is one of the factors involved in plant growth promotion by PGPR. These bacteria may improve plant P acquisition by solubilizing organic and inorganic phosphate sources through phosphatase synthesis or by lowering the pH of the soil [8]. The objective of this study was to compare the effects of the PGPR in several treatments (alone and mixed) on the growth and nutrient uptake of tomato plants.

II. Materials and Methods 

2.1. Plant Materials and Experimental Conditions 




A greenhouse experiment was conducted to evaluate the effects of 7 bacterial treatments (Pseudomonas, Azotobacter, Azosprillum, Pseudomonas + Azotobacter, Pseudomonas + Azosprillum, Azotobacter + Azosprillum and Pseudomonas + Azotobacter + Azosprillum) on tomato (Lycopersicon esculentum Red Cherry) growth and nutrient uptake. The plants were grown from seeds, after inoculation with the bacteria, in pots containing 7 kg of field soil, sand and peat (1/3 v/v each). The experiment was set up in a completely randomized design with four replicates. At the prebloom stage, the shoots were cut at the soil surface level. The roots were separated from the soil. Shoot and root fresh weights and root length were measured, then the dry weights of shoots and roots were determined after drying at 75°C.

2.2. Nutrient Determination 

N, P and K were determined by the Kjeldahl, Olsen and flame photometry methods, respectively. Ca and Mg were determined by calciometry.

2.3. Statistical Analysis 

Statistical analyses were done using SAS software. SAS (Statistical Analysis System) is an integrated 
system of software products provided by SAS Institute Inc. that enables programmers to perform 
statistical analysis. SAS is driven by SAS programs, which define a sequence of operations to be 
performed on data stored as tables. Means were compared by Duncan's multiple range test at P < 0.05 
(5% level of probability). 
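Purely as an illustration of this analysis flow (the paper used SAS with Duncan's multiple range test, which is not available in the common Python statistics libraries), an analogous comparison can be sketched with a one-way ANOVA followed by Tukey's HSD post-hoc test on hypothetical replicate data:

```python
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical replicate data: shoot fresh weight (g/plant) for a few treatments
data = pd.DataFrame({
    "treatment": ["Control"] * 4 + ["Azoto+Azosp"] * 4 + ["Pseud+Azosp"] * 4,
    "shoot_fw":  [42.1, 42.8, 42.0, 42.7,  53.5, 54.1, 53.6, 53.9,  41.8, 42.3, 41.9, 42.0],
})

groups = [g["shoot_fw"].values for _, g in data.groupby("treatment")]
print(f_oneway(*groups))                                  # overall treatment effect

# Tukey's HSD at alpha = 0.05 as a stand-in for Duncan's multiple range test
print(pairwise_tukeyhsd(data["shoot_fw"], data["treatment"], alpha=0.05))
```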

III. Results 

The highest shoot fresh weight was observed in Azotobacter + Azosprillum (53.77 g/plant), 
Pseudomonas + Azotobacter + Azosprillum (53.29 g/plant) and Azosprillum (51.87 g/plant) treatments 
which significantly differed from other treatments. The lowest shoot fresh weight (42 g/plant) was 
obtained in Pseudomonas + Azosprillum. The maximum level of root fresh weight was achieved in 
Azotobacter + Azosprillum (10.81 g/plant), Pseudomonas + Azotobacter + Azosprillum (10.49 g/plant) 
and Azotobacter (10.30 g/plant) treatments which significantly differed from other treatments. 
Maximum level of shoot dry weight was shown on Azotobacter + Azosprillum (6.84 g/plant) and 
Pseudomonas + Azotobacter + Azosprillum (7.05 g/plant) treatments which significantly differed from 
others. The highest root dry weight was achieved on Azotobacter + Azosprillum (0.92 g/plant) and 
Pseudomonas + Azotobacter + Azosprillum (0.94 g/plant) treatments. Minimum level of shoot and 
root dry weights were achieved in Pseudomonas + Azosprillum. The maximum root length was shown 
on Azotobacter + Azosprillum (40.33 cm) which significantly differed from other treatments (Table 
1). 

Table 1. Effect of bacterial treatments on shoot and root fresh weights, shoot and root dry weights and root length.

Treatment                    Shoot fw     Shoot dw     Root fw      Root dw      Root length
                             (g/plant)    (g/plant)    (g/plant)    (g/plant)    (cm)
Pseud.                       43.29 b      5.38 cd      –            0.63 cd      34.13 b
Azoto.                       44.06 b      5.46 bcd     10.30 a      0.79 b       27.23 c
Azosp.                       51.87 a      5.15 d       –            0.53 d       32.35 b
Pseud. + Azoto.              –            5.63 bc      9.03 b       0.60 cd      32.15 bc
Pseud. + Azosp.              42.00 b      4.13 e       7.58         0.43 e       –
Azoto. + Azosp.              53.77 a      6.84 a       10.81 a      0.92 a       40.33 a
Pseud. + Azoto. + Azosp.     53.29 a      7.05 a       10.49 a      0.94 a       33.45 b
Control                      42.41 b      5.93 b       –            0.66 c       34.00 b

† In each column, means with the same letters are not significantly different at the 5% level of Duncan's multiple range test.




The highest amount of N (32.65 mg/g dry matter), P (3.40 mg/g dry matter) and K (35.10 mg/g dry 
matter) were shown on Pseudomonas + Azotobacter + Azosprillum treatment which significantly 
differed from other treatments and the lowest amount was shown on Pseudomonas + Azotobacter 
treatment. The maximum level of Ca was achieved on Pseudomonas + Azotobacter (30.38 mg/g dry 
matter) and Pseudomonas + Azosprillum (30.30 mg/g dry matter) treatments which significantly 
differed from other treatments. The maximum level of Mg was observed on Pseudomonas + 
Azotobacter (6.18 mg/g dry matter) and Pseudomonas + Azosprillum (6.27 mg/g dry matter) 
treatments (Table 2). 

Table 2. Effect of bacterial treatments on nutrient uptake in tomato (all values in mg/g dry matter).

Treatment                    N            P            K            Ca           Mg
Pseud.                       15.15        –            –            20.05 c      4.30 c
Azoto.                       16.70 d      2.23 bc      26.6 bc      –            5.15 b
Azosp.                       24.45 b      2.55 b       23.70 b      –            5.18 b
Pseud. + Azoto.              10.93 e      1.93 c       21.23 d      30.38 a      6.18 a
Pseud. + Azosp.              13.53 cd     2.08 bc      20.73 d      30.30 a      6.27 a
Azoto. + Azosp.              –            2.30 bc      24.10 bcd    21.90 bc     5.45 b
Pseud. + Azoto. + Azosp.     32.65 a      3.40 a       35.10 a      21.40 bc     4.05 c
Control                      22.15 bc     2.35 bc      22.60        22.20 bc     4.43 c

† In each column, means with the same letters are not significantly different at the 5% level of Duncan's multiple range test.

IV. Discussion 

The results indicated that PGPR affect growth and nutrient uptake. The impact of root inoculation with beneficial rhizosphere microorganisms on some quality parameters is being explored [9,10,11].

Facilitating plant nutrition could be the mechanism by which PGPR enhance crop yield, since the nutritional status of plants is improved by increasing the availability of nutrients in the rhizosphere [12,13].

Phytohormones produced by PGPR are believed to change assimilate partitioning patterns in plants, altering growth in roots, the fructification process and the development of the fruit under production conditions [14].

This work supports that tomato root inoculation with PGPR enhances growth under greenhouse 

conditions. However, field experiments should be carried out to ensure that positive effects are 

maintained under conventional production systems. 

A series of other factors (ability to grow on root exudates, to synthesize amino acids and vitamins) 

defined as "rhizospheric competence" is involved in the establishment of effective and enduring root 

colonization by an introduced bacterium [15]. 

Pseudomonas fluorescens 92rk, alone or co-inoculated with P190r, increased mycorrhizal 

colonization of tomato roots by G. mosseae BEG 12. This result suggests that strain 92rk behaves as a 

mycorrhiza helper bacterium (MHB) in L. esculentum. MHB have been described for ectomycorrhizal 

symbiosis [16] and only a few examples of MHB have been reported for AM symbiosis [17,18]. P. 

fluorescens 92rk increased total root length, surface area and volume. This is in agreement with the 

effects of P. fluorescens A6RI [19] and 92r [20] on the development of tomato and cucumber root, 

respectively. Longer root systems are more adapted to soil exploration and exploitation [21]. The 



"29T 



Vol. 2, Issue 1, pp. 27-31 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

modification of root architecture parameters induced by the bacterial strains could be related to 

increased P acquisition: root systems with higher root surface area and volume are indeed 

characterized by a higher absorptive surface. 

An investigation showed the effects of inoculating the roots of two bred tomato cultivars (F1 Hybrid Delba and F1 Hybrid Tivi) with plant growth-promoting rhizobacteria (PGPR). Azotobacter was more effective than Pseudomonas in increasing all trait values except shoot dry weight and K content [22]. Differences between genotypes can explain differences between results.

Another investigation showed that PGPR and AMF (a fungus) can increase tomato fruit quality, which may be related to the increased uptake of minerals by the inoculated plants [23].

Increased nutrient uptake by plants inoculated with plant growth promoting bacterium has been 

attributed to the production of plant growth regulators at the root interface, which stimulated root 

development and resulted in better absorption of water and nutrients from the soil [24,25,26]. 

V. Conclusion 

In conclusion, Azotobacter + Azosprillum and Pseudomonas + Azotobacter + Azosprillum resulted in 
the highest values of shoot fresh and dry weights and root fresh and dry weights at prebloom stage. 
Pseudomonas + Azotobacter + Azosprillum treatment was the best for N, P and K uptake in tomato 
shoots. 

References 

[1] A.W. Bakker and B. Schippers, "Microbial cyanide production in the rhizosphere in relation to potato yield reduction and Pseudomonas spp.-mediated plant growth-stimulation", Soil Biol. Biochem., Vol. 19, PP. 451-457, 1987.

[2] PJ. Dart, "Nitrogen fixation associated with non-legumes in agriculture", Plant Soil, Vol. 90, PP. 303-334, 

1986. 
[3] A.N. Dubeikovsky, E.A. Mordukhova, V.V. Kochetkov, F.Y. Polikarpova, and A.M. Boronin, "Growth 

promotion of blackcurrant softwood cuttings by recombinant strain Pseudomonas fluorescens BSP53a 

synthesizing an increased amount of indole-3 -acetic acid", Soil Biol. Biochem., Vol. 25, PP. 1277-1281, 

1993. 
[4] A.H. Goldstein, "Bacterial solubilization of mineral phosphates: historical perspective and future 

prospects", Am. J. Altern. Agric, Vol. 1, PP. 51-57, 1986. 
[5] J. Leong, "Siderophores: their biochemistry and possible role in the biocontrol plant pathogens", Annu. 

Rev. PhytopathoL, Vol. 24, PP. 187-209, 1986. 
[6] M.N. Schroth and J.G. Hancock, "Selected topics in biological control". Annu. Rev. Microbiol., Vol. 35, 

PP. 453-476, 1981. 
[7] D.M. Weller, "Biological control of soilborne pathogens in the rhizosphere with bacteria", Annu. Rev. 

PhytopathoL, Vol. 26, PP. 379-407, 1988. 
[8] H. Rodriguez and R. Fraga, "Phosphate solubilizing bacteria and their role in plant growth promotion", 

Biotechnol. Adv., Vol. 17, PP. 319-339, 1999. 
[9] G. Charron, V. Furlan, M. Bernier-Cardou and G. Doyon, "Response of onion plants to arbuscular 

mycorrhizae. 1. Effects of inoculation method and phosphorus fertilization on biomass and bulb firmness", 

Mycorrhiza, Vol. 11, PP. 187-197, 2001. 
[10] C. Kaya, D. Higgs, H. Kirnak and I. Tas, "Mycorrhizal colonization improves fruit yield and water use 

efficiency in watermelon (Citullus lanatus Thunb.) grown under well-watered and water stressed 

conditions", Plant Soil, Vol. 253, PP. 287-292, 2003. 

[11] H.G. Mena-Violante, O. Ocampo-Jimenez, L. Dendooven, G. Martinez-Soto, J. Gonzalez-Castañeda, F.T. Davies Jr. and V. Olalde-Portugal, "Arbuscular mycorrhizal fungi enhance fruit growth and quality of chile ancho (Capsicum annuum L. cv San Luis) plants exposed to drought", Mycorrhiza, Vol. 16, PP. 261-267, 2006.

[12] E. Bar-Ness, Y. Hadar, Y. Chen, V. Romheld, and H. Marschner, "Short term effects of rhizosphere 

microorganisms on Fe uptake from microbial siderophores by maize and oat", Plant Physiol., Vol. 100, PP. 

451-456, 1992. 
[13] A.E. Richardson, "Prospects for using soil microorganisms to improve the acquisition of phosphorus by 

plants", Aust. J. Plant Physiol., Vol. 28, PP. 897-906, 2001. 
[14] J. A. Lucas-Garcia, A. Probanza, B. Ramos, M. Ruiz-Palomino and F.J. Gutierrez Manero, "Effect of 

inoculation of Bacillus licheniformis on tomato and pepper", Agronomie, Vol. 24, PP. 169-176, 2004. 




[15] B.J.J. Lugtenberg and L.C. Dekkers, "What makes Pseudomonas bacteria rhizosphere competent?" 

Environ. Microbiol., Vol. 1, PP. 9-13, 1999. 
[16] J. Garbaye, "Helper bacteria: a new dimension to the mycorrhizal symbiosis", New Phytol., Vol. 128, PP. 

197-210, 1994. 
[17] M. Toro, R. Azcn and J.M. Barea, "Improvement of arbuscular development by inoculation of soil with 

phosphate-solubilizing rhizobacteria to improve rock phosphate bioavailability (32P) and nutrient cycling", 

Appl. Environ. Microbiol., Vol. 63, PP. 4408-4412, 1997. 
[18] S. Singh and K.K. Kapoor, "Effects of inoculation of phosphate solubilizing microorganisms and an 

arbuscular mycorrhizal fungus on mungbean grown under natural soil conditions", Mycorrhiza, Vol. 7, PP. 

249-253, 1998. 
[19] E. Gamalero, M.G. Martinotti, A. Trotta, P. Lemanceau and G. Berta, "Morphogenetic modifications 

induced by Pseudomonas fluorescens A6RI and Glomus mosseae BEG 12 in the root system of tomato 

differ according to plant growth conditions", New Phytol., Vol. 155, PP. 293-300, 2002. 
[20] E. Gamalero, L. Fracchia, M. Cavaletto, J. Garbaye, P. Frey-Klett, G.C. Varese and M.G. Martinotti, 

"Characterization of functional traits of two fluorescent pseudomonads isolated from basidiomes of 

ectomycorrhizal fungi", Soil Biol. Biochem., Vol. 35, PP. 55-65, 2003. 
[21] G. Berta, A. Fusconi and J.E. Hooker, "Arbuscular mycorrhizal modifications to plant root systems", In: S. 

Gianinazzi and H. Schuepp (eds) "Mycorrhizal Technology: from genes to bioproducts achievement and 

hurdles in arbuscular mycorrhizal research", Birkh_user, Basel, pp. 71-101, 2002. 
[22] M. Zare, K. Ordookhani and O. Alizadeh, "Effects of PGPR and AMF on Growth of Two Bred Cultivars 

of Tomato", Adv. Environ. Biol., Vol. 5, PP. 2177-2181, 2011. 
[23] K. Ordookhani, K. Khavazi, A. Moezzi and F. Rejali, "Influence of PGPR and AMF on antioxidant 

activity, lycopene and potassium contents in tomato", Afr. J. Agric. Res., Vol. 5, PP. 1108-1116, 2010. 
[24] J.W. Kloepper, R.M. Zablowicz, B. Tipping and R. Lifshitz, "Plant growth mediated by bacterial 

rhizosphere colonizers", In: D.L. Keister and B. Gregan (eds.) "The rhizosphere and plant growth", 14. 

BARC Symposium, PP. 315-326, 1991. 
[25] W. Zimmer, K. Kloos, B. Hundeshagen, E. Neiderau and H. Bothe, "Auxin biosynthesis and enitrification 

in plant growth promotion bacteria", In: J. Fendrik, J. De Gallo Vandeleyden and D. De Zamoroczy (eds.) 

"Azospirillum VI and related microorganisms", Series G:Ecological, Vol. 37, PP. 120 141, 1995. 
[26] G. Hoflich and G. Kuhn, "Forderung das Wachstums und der Nahrstoffaufnahme bei kurziferen O- und 

Zwischenfruhten durch inokulierte Rhizospherenmikroorganismen", Zeischrift fu r Pflanzenerna hrung 

und Bodenkunde, Vol. 159, PP. 575-578, 1996. 

Author 

Shahram Sharafzadeh was born in Shiraz, Iran in 1971. He received his Bachelor degree in 
Horticultural Science from Shiraz University, Iran in 1994; MSc in Horticultural Science 
from Shiraz University, Iran in 1998 and Ph.D in Horticultural Science from Science and 
Research Branch, Islamic Azad University, Tehran, Iran in 2008. He is working as a full 
time Lecturer, assistant professor, in the Firoozabad Branch, Islamic Azad University, 
Firoozabad, Iran. His research interests include medicinal and aromatic plants and 
biotechnology. He is supervisor and advisor of some MSc thesis. There are several projects 
he is working on such as effects of organic matters on growth and secondary metabolites of 
plants. 







The Application of PSO to Hybrid Active Power 

Filter Design for 3 Phase 4- Wire System with 

Balanced & Unbalanced Loads 

B. Suresh Kumar 1, K. Ramesh Reddy 2 & S. Archana 3
1 Department of Electrical and Electronics Engineering, CBIT, Hyderabad, India
2 Department of Electrical and Electronics Engineering, GNITS, Hyderabad, India



Abstract 

This paper presents an application of PSO to hybrid active power filters used to compensate for total harmonic distortion in three-phase four-wire systems. The shunt active filter employs a simple method for the calculation of the reference compensation current based on the Fast Fourier Transform. The presented shunt active power filter is able to operate under balanced, unbalanced and variable load conditions. Classic filters may not have satisfactory performance under fast-varying conditions, but an auto-tuned active power filter gives better results for harmonic minimization, reactive power compensation and power factor improvement. The proposed auto-tuned shunt active filter maintains the THD well within the IEEE-519 standard. The proposed methodology is extensively tested for a wide range of different loads, with improved dynamic behavior of the shunt active power filter obtained by applying PSO to the hybrid active power filter. The results are found to be quite satisfactory in mitigating harmonic distortion, compensating reactive power and correcting the power factor, thereby improving power quality and reducing the %THD.

KEYWORDS: Hybrid active power filter (HAPF), Multiobjective optimization, Particle swarm optimization (PSO), Total harmonic distortion (THD), Power factor, Reactive power



I. Introduction 

Power systems have to cope with a variety of nonlinear loads which introduce significant amounts of harmonics. IEEE Standard 519-1992 provides a guideline for the limitation and mitigation of harmonics. Passive power filters (PPFs), active power filters (APFs), and hybrid active power filters (HAPFs) can all be used to eliminate harmonics. For medium- and high-voltage systems, PPFs and HAPFs appear to be better choices considering cost, where the ratings are of several tens of megavolt-amperes. The design of such PPFs and HAPFs is a complicated nonlinear programming problem. Conventional trial-and-error methods based on engineering experience are commonly used, but the results are not optimal in most cases.

In recent years, many studies have appeared involving optimal PPF design. A method based on the sequential unconstrained minimization technique has been used for PPF design because of its simplicity and versatility, but numerical instability can limit the application of this method. PPF design using simulated annealing has been reported, but the major drawback is the repeated annealing. Genetic algorithms (GAs) have been widely used in PPF design, but the computing burden and convergence problems are disadvantages of this approach. A design method for PPFs using a hybrid



ITf 



Vol. 2, Issue 1, pp. 32-42 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

differential evolution algorithm has also been proposed, but the algorithm is complex, involving mutation, crossover, migrant, and accelerated operations. For the optimal design of HAPFs, a method based on GAs has been proposed in order to minimize the rating of the APF, but no other optimal design methods appear to have been suggested.

Many methods have treated the optimal design of PPFs and HAPFs as a single-objective problem. In fact, filter design should determine the optimal solution where there are multiple objectives. As these objectives generally conflict with one another, they must be carefully coordinated to derive a good compromise solution.

In this paper, optimal multi-objective designs for both PPFs and HAPFs using an advanced particle swarm optimization (PSO) algorithm are reported. The objectives and constraints were developed from the viewpoint of practicality and the filtering characteristics.

For the optimal design of PPFs, the capacity of reactive power compensation, the original investment cost, and the total harmonic distortion (THD) were taken as the three objectives. The constraints included individual harmonic distortion, fundamental reactive power compensation, THD, and parallel and series resonance with the system. For the optimal design of HAPFs, the capacity of the APF, the reactive power compensation, and the THD were taken as the three objectives; the constraints were as for the PPFs.

The uncertainties of the filter and system parameters, which will cause detuning, were also considered as constraints during the optimal design process. A PSO-based algorithm was developed to search for the optimal solution. The numerical results of case studies comparing the PSO method and the conventional trial-and-error method are reported, from which the superiority and availability of the PSO method and the designed filters are demonstrated.

II. System Under Study 

A typical 10-kV 50-Hz system with nonlinear loads, as shown in Fig. 1, was studied to determine the 
optimal design for both PPFs and HAPFs. The nonlinear loads are the medium frequency furnaces 
commonly found in steel plants with abundant harmonic currents, particularly the fifth and seventh 
orders, as shown in Table I. The utility harmonic tolerances given in IEEE Standard 519-1992 and the 
Chinese National Standard GB/T 14549-93 are listed in Table I as percentages of the fundamental 
current. 

Fig. 1. Single-line diagram of the system for the case studies (66 kV supply, main transformer, harmonic current measurement point, PPF or HAPF filters, and nonlinear loads).




Table I: Harmonic current distribution in phase A and utility tolerances.

Harmonic Order   Measured Value (%)   National Standard (%)   IEEE Standard 519-1992 (%)
5                      6.14                  2.61                       4
7                      2.77                  1.96                       4
11                     1.54                  1.21                       2
13                     0.80                  1.03                       2
17                     0.60                  0.78                       1.5
19                     0.46                  0.70                       1.5
23                     0.95                  0.59                       0.6
25                     0.93                  0.53                       0.6
THD                    7.12                  5                          5



Table I shows that the current THD and the 5th, 23rd, and 25th order harmonic currents exceed the tolerances based on both standards. In addition, the 7th and 11th order harmonics exceed the tolerance based on the National standard.

Filters must therefore be installed to mitigate the harmonics sufficiently to satisfy both standards. 

Both PPF and HAPF are suitable and economical for harmonic mitigation in such systems. For this 

system with nonlinear loads as medium frequency furnaces, the even and triple harmonics are very 

small and far below the standard values, so these harmonics are not considered. In addition, the 

harmonic voltages are in fact very small, so the voltages are assumed to be ideal. 
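As a quick illustration of this screening step, the following Python sketch (not part of the original study) compares the measured phase-A harmonic currents of Table I against both sets of utility tolerances and flags the orders that violate at least one limit.

import sys

# Table I data: values are percentages of the fundamental current.
measured = {5: 6.14, 7: 2.77, 11: 1.54, 13: 0.80, 17: 0.60,
            19: 0.46, 23: 0.95, 25: 0.93, "THD": 7.12}
national = {5: 2.61, 7: 1.96, 11: 1.21, 13: 1.03, 17: 0.78,
            19: 0.70, 23: 0.59, 25: 0.53, "THD": 5.0}
ieee_519 = {5: 4.0, 7: 4.0, 11: 2.0, 13: 2.0, 17: 1.5,
            19: 1.5, 23: 0.6, 25: 0.6, "THD": 5.0}

for order in measured:
    limits = (national[order], ieee_519[order])
    if measured[order] > min(limits):
        # Printed for h = 5, 7, 11, 23, 25 and the THD, matching the text above.
        print(f"h={order}: {measured[order]}% exceeds a tolerance "
              f"(national {limits[0]}%, IEEE {limits[1]}%)", file=sys.stdout)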

The fundamental current and reactive power demands are 1012 A and 3-4 MVar, respectively. The short-circuit capacity is 132 MVA, and the equivalent source inductance of the system is 2.4 mH.

III. HAPF Design Based on PSO 

A. HAPF Structure and Performance: 

In order to demonstrate the optimal design method for HAPFs based on PSO, an HAPF was designed and is shown in Fig. 2; it is intended for the same situation as that shown in Fig. 1. In this HAPF, the PPF is mainly used to compensate for harmonics and reactive power, and an APF is used to improve the filtering performance. The PPF consists of the fifth and seventh single-tuned filters and a high-pass damped filter. The APF is implemented with a three-phase voltage-source inverter. Fig. 3(a) shows the single-phase equivalent circuit of the HAPF, assuming that the APF is an ideal controllable voltage source V_AF and that the load is an ideal current source I_L. Z_S is the source impedance, Z_F is the total impedance of the PPF, V_pcc is the voltage at the injection point, and K is the controlling gain of the APF.



Fig. 2. Shunt HAPF (system source V_s, 10 kV bus, PPF branches C5, C7, CH, coupling transformer, APF, and nonlinear loads).







Fig. 3. Single-phase equivalent circuits of the HAPF: (a) equivalent circuit; (b) equivalent harmonic circuit.

The equivalent harmonic circuit is redrawn in Fig. 3(b). The harmonic current I_sh into the system source and the harmonic attenuation factor γ are given by the following equations:

    I_sh = Z_F / (K + Z_F + Z_S) · I_Lh    → (1)

    γ = | I_sh / I_Lh | = | Z_F / (K + Z_F + Z_S) |    → (2)
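The attenuation factor of (2) can be evaluated numerically once the PPF branch impedances are modelled. The Python sketch below is an illustration only: it reuses the PSO-designed branch values of Table II, assumes an APF gain of K = 20 ohm, derives the single-tuned series resistance from the quality factor, and assumes the common damped-filter relation m = L/(C·R²); none of these modelling choices are stated explicitly in the paper.

import numpy as np

# Sketch of gamma(h) = |Z_F / (K + Z_F + Z_S)| for the shunt HAPF of Fig. 3(b).
f1, Ls, K = 50.0, 2.4e-3, 20.0          # fundamental (Hz), source inductance (H), assumed APF gain (ohm)

def z_single_tuned(h, C, L, Q=60.0):
    w = 2 * np.pi * f1 * h
    R = np.sqrt(L / C) / Q              # series loss resistance derived from the quality factor
    return R + 1j * (w * L - 1.0 / (w * C))

def z_high_pass(h, C, L, m=0.5):
    w = 2 * np.pi * f1 * h
    R = np.sqrt(L / (m * C))            # damping resistor, assuming m = L / (C * R^2)
    return 1.0 / (1j * w * C) + (1j * w * L * R) / (R + 1j * w * L)

def gamma(h):
    # Z_F is the parallel combination of the three PPF branches (Table II, PSO design).
    zf = 1.0 / (1.0 / z_single_tuned(h, 59.76e-6, 7.24e-3)
                + 1.0 / z_single_tuned(h, 12.32e-6, 17.58e-3)
                + 1.0 / z_high_pass(h, 52.06e-6, 1.20e-3))
    zs = 1j * 2 * np.pi * f1 * h * Ls   # source impedance at harmonic h
    return abs(zf / (K + zf + zs))      # |I_sh / I_Lh| from eq. (2)

for h in (5, 7, 11, 13, 17, 19, 23, 25):
    print(h, round(gamma(h), 4))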



Assuming that the fundamental component of the supply voltage is fully dropped across the PPF, the voltage and current of the APF can be expressed as sums of their harmonic components [24]:

    V_AF = Σ_h V_AFh ,    I_AF = Σ_h I_AFh    → (3), (4)

The rms value of V_AF is defined as

    V_AF = sqrt( Σ_h V_AFh² ) ,    h = 5, 7, 11, 13, 17, 19, 23, 25    → (5)

The capacity of the APF is determined by the current I_AF and the voltage V_AF. Obviously, a low VA rating of the APF can be achieved by decreasing I_AF and V_AF. In this shunt hybrid topology, the current I_AF is almost constant, so the only way to reduce the VA rating of the APF is to decrease the voltage V_AF.
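For clarity, the rms combination of (5) can be sketched as follows; the per-harmonic voltages used here are invented placeholders, not results from the paper.

import math

# Sketch of eq. (5): rms APF voltage obtained from its harmonic components V_AFh.
V_AFh = {5: 80.0, 7: 40.0, 11: 20.0, 13: 12.0,
         17: 8.0, 19: 6.0, 23: 5.0, 25: 4.0}   # volts, illustrative values only

V_AF = math.sqrt(sum(v * v for v in V_AFh.values()))
print(f"V_AF (rms) = {V_AF:.1f} V")            # this is the quantity minimized in (6)/(14)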

B. Multi-objective Optimal Design Model for HAPF:

As mentioned earlier, when designing an HAPF it is very important to minimize the capacity of the APF component, and there are other objectives and constraints to be considered. When the APF of the HAPF is cut off due to faults, the PPF part keeps working to mitigate harmonics until the APF is restored. It follows that some additional constraints should be included for such occasions. The construction of the objective functions and constraints is described next.

Three important objective functions are defined as follows.

1) Minimize the capacity of the APF, which is mainly determined by the harmonic voltage across it:

    min V_AF ,    V_AF = sqrt( Σ_h V_AFh² ) ,    h = 5, 7, 11, 13, 17, 19, 23, 25    → (6)

2) Minimize the current THD with the HAPF:

    min THDI_HAPF ,    THDI_HAPF = sqrt( Σ_{h=2}^{N} I_sh² ) / I_s1    → (7)

where THDI_HAPF is the current THD with the HAPF in place; I_sh, I_s1, and N are defined as in the PPF design (the h-th order and fundamental source currents and the highest harmonic order considered).

3) Maximize the reactive power compensation:

    max Σ_i Q_i ,    i = 5, 7, H    → (8)

where Q_i is the reactive power compensated by branch i, as defined in the PPF design.




Constraints are defined as follows. 

The tolerated values quoted hereinafter are also based on the National standard. 

1) Requirements of total harmonic filtering with the HAPF:

    THDI_HAPF ≤ THDI_MAX    → (9)

where THDI_MAX is the tolerated THD limit. When the PPF runs individually with the APF cut off, an additional constraint is added:

    THDI_PPF ≤ THDI_MAX    → (10)

where THDI_PPF is the current THD with the PPF alone.

2) Requirements of individual harmonic filtering with the HAPF and with the PPF alone: each harmonic order should satisfy the standards, so the following constraints are included:

    I_HAPF,sh ≤ I_h,max ,    h = 5, 7, 11, 13, 17, 19, 23, 25    → (11)

    I_PPF,sh ≤ I_h,max ,    h = 5, 7, 11, 13, 17, 19, 23, 25    → (12)

where I_HAPF,sh and I_PPF,sh are, respectively, the rms values of the h-th order harmonic current into the system source with the HAPF and with the PPF alone, and I_h,max is the tolerated value of the h-th order harmonic current.

3) Fundamental reactive power compensation limits: the fundamental reactive power must be restricted as

    Q_min ≤ Σ_i Q_i ≤ Q_max ,    i = 5, 7, H    → (13)

where Q_min and Q_max are the lower and upper limits of the reactive power demand.

4) Parallel and series resonance restrictions: parallel and series resonance with the system source will rarely occur for the HAPF because of the active damping function of the APF. Nevertheless, it is necessary to consider, during the HAPF design, parallel and series resonance restrictions for the case when the PPF works alone with the APF cut off. Therefore, constraints are constructed that are the same as those used during the PPF optimal design in (13)-(16).

5) Consideration of the detuning constraints: the HAPF system is not sensitive to detuning effects because of the APF damping function. Even in the worst case, in which the system impedance decreases by 20%, the system frequency changes to 50.5 Hz, the capacitance of each branch increases by 5%, and the reactance increases by 2%, the filtering performance of the PPF alone should still satisfy all the standards and limits described earlier, as set out in (10), (12), and (13).
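A minimal sketch of how such a worst-case detuning check might be applied to one branch is given below; it reuses the fifth-branch values of Table II purely as an example and only illustrates the shift of the tuning frequency, after which the constraints of (10), (12), and (13) would be re-evaluated.

import math

# Worst-case detuning of constraint 5): source inductance -20 %, frequency 50.5 Hz,
# branch capacitance +5 %, branch reactance (inductance) +2 %.
f_nom, f_det = 50.0, 50.5
C5, L5, Ls = 59.76e-6, 7.24e-3, 2.4e-3          # 5th-branch values of Table II, used as an example

C5_d, L5_d, Ls_d = 1.05 * C5, 1.02 * L5, 0.80 * Ls

f_tuned_nominal = 1.0 / (2 * math.pi * math.sqrt(L5 * C5))      # about 242 Hz, as in the design
f_tuned_detuned = 1.0 / (2 * math.pi * math.sqrt(L5_d * C5_d))  # shifted tuning frequency
print(f"5th-branch tuning: {f_tuned_nominal:.1f} Hz -> {f_tuned_detuned:.1f} Hz "
      f"(grid frequency {f_nom} Hz -> {f_det} Hz, Ls {Ls*1e3:.2f} mH -> {Ls_d*1e3:.2f} mH)")
# The PPF-alone filtering performance is then re-checked against the THD,
# individual-harmonic, and reactive-power limits of (10), (12), and (13).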

C. Optimal Design for HAPF Based on PSO:

Based on the objectives and constraints constructed earlier for the HAPF, the multi-objective optimization task is carried out using an advanced PSO algorithm. The capacitance in each branch of the PPF and the characteristic frequency of the high-pass damped filter are chosen as the optimization variables, X = (C5, C7, CH, fH)^T, while the tuning frequencies of the fifth and seventh single-tuned filters are predetermined as 242 and 342 Hz, respectively. According to the optimization objectives, the corresponding fitness functions are defined as

    F1(X) = V_AF    → (14)

    F2(X) = THDI_HAPF    → (15)

    F3(X) = Σ_i Q_i ,    i = 5, 7, H    → (16)

Similar methods to those used for the PPF were adopted to solve this multi-objective optimization problem. The objective of minimizing the APF capacity is chosen as the final objective, while the other two objectives are handled through acceptable-level constraints, as shown in the following equations:



    min F1(X)    → (17)

    F2(X) ≤ α2    → (18)

    α3,min ≤ F3(X) ≤ α3,max    → (19)

where α2 is the acceptable level for the secondary objective F2, and α3,min and α3,max are the lowest and highest acceptable levels for the secondary objective F3, respectively. The overall optimization process for the HAPF based on PSO is similar to that of the PPF in Fig. 4.
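The paper does not list its PSO implementation, but the constrained scheme of (17)-(19) can be sketched with a generic penalty-based PSO loop such as the one below. The functions f1, f2, f3, the variable bounds, the acceptable levels, and the PSO coefficients are all placeholder assumptions; in the actual design they would be replaced by the circuit model that maps X = (C5, C7, CH, fH) to V_AF, THDI_HAPF, and the compensated reactive power.

import numpy as np

rng = np.random.default_rng(0)

def f1(x): return np.sum(x ** 2)                 # stand-in for V_AF(X)
def f2(x): return np.abs(x[0] - 0.5)             # stand-in for THDI_HAPF(X)
def f3(x): return np.sum(np.abs(x))              # stand-in for the reactive power sum

alpha2, alpha3_lo, alpha3_hi = 0.8, 0.5, 4.0     # assumed acceptable levels

def penalized(x):
    p = 0.0
    p += max(0.0, f2(x) - alpha2) * 1e3          # enforce F2 <= alpha2          (18)
    p += max(0.0, alpha3_lo - f3(x)) * 1e3       # enforce alpha3_lo <= F3       (19)
    p += max(0.0, f3(x) - alpha3_hi) * 1e3       # enforce F3 <= alpha3_hi       (19)
    return f1(x) + p                             # minimize F1 plus penalties    (17)

dim, n_particles, n_iter = 4, 30, 360            # X = (C5, C7, CH, fH)
w, c1, c2 = 0.7, 1.5, 1.5                        # typical inertia / acceleration coefficients

pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([penalized(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([penalized(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best penalized fitness:", penalized(gbest))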




Table II: Design results of HAPFs based on the PSO and conventional methods

Design parameters          PSO method                               Conventional method
5th single-tuned filter    C5 = 59.76 uF, L5 = 7.24 mH, Q5 = 60     C5 = 80.6 uF, L5 = 5.37 mH, Q5 = 60
7th single-tuned filter    C7 = 12.32 uF, L7 = 17.58 mH, Q7 = 60    C7 = 23.76 uF, L7 = 9.11 mH, Q7 = 60
High-pass damped filter    CH = 52.06 uF, LH = 1.20 mH, m = 0.5     CH = 15.28 uF, LH = 3.32 mH, m = 0.5



Table III: Harmonic current distributions with HAPFs based on the PSO and conventional methods

Harmonic order                PSO method (%)   Conventional method (%)
5                             0.24             0.17
7                             0.24             0.11
11                            0.25             0.71
13                            0.1              0.3
17                            0.07             0.16
19                            0.06             0.12
23                            0.13             0.26
25                            0.13             0.26
THD                           0.48             0.91
V_AF                          116.64 V         340.82 V
Reactive power compensation   4 MVar           3.88 MVar



The design results of HAPFs using the PSO and conventional trial-and-error methods are listed in Table II, and the corresponding harmonic current distributions are listed in Table III. It can be seen that the harmonic currents and reactive power are well compensated by both HAPFs and that the HAPF designed using the PSO-based method obtains better filtering performance, with lower THD (0.48%) and larger reactive power compensation (4 MVar). Moreover, the voltage V_AF of the APF in this case was much smaller than that based on the conventional method; therefore, the investment cost of the whole system is much reduced. Table IV shows the harmonic current distributions when the PPF is working alone, without the APF.

A comparison between the PSO method and the conventional method shows that all the harmonic currents and the THD are still within the standards, and the filtering performance of the PPF based on PSO is slightly better.

Table IV: Harmonic current distributions with PPFs alone based on the PSO and conventional methods

Harmonic order   PSO method (%)   Conventional method (%)
5                1.1              0.82
7                0.76             0.39
11               0.94             1.13
13               0.26             0.60
17               0.14             0.29
19               0.11             0.21
23               0.21             0.40
25               0.20             0.38
THD              1.68             1.71




Table V: Harmonic current distributions with HAPFs considering detuning effects

Harmonic order   PSO method HAPF (%)   Conventional method HAPF (%)
5                0.65                  0.44
7                0.75                  0.27
11               0.23                  0.71
13               0.1                   0.27
17               0.08                  0.16
19               0.06                  0.13
23               0.14                  0.28
25               0.14                  0.28
THD              1.05                  1.02



In order to verify the filtering performance of the HAPF under the worst detuning situations, comparisons are shown in Table V. It is clear that both HAPFs, designed using the PSO method and the conventional method, obtain excellent filtering performance in spite of detuning effects.

Fig. 4 shows the harmonic attenuation factors of the HAPF and of its PPF alone, using the PSO design method and considering detuning effects. It can be seen that the harmonic currents are still well attenuated and no resonance point can be found. Furthermore, the attenuation factor of the HAPF is much smaller than that of the PPF, which shows the excellent harmonic mitigation performance of the HAPF.

A simulation using the MATLAB/Simulink software was run based on field measurement data. The compensated source current with the HAPF is shown in Fig. 5. From Fig. 5, we can see that the source current is very close to a pure sine wave, with the current THD decreasing to 0.48%.

Fig. 6 shows the convergence characteristics of the PSO algorithm developed in this paper for the optimal design of the HAPF. The PSO algorithm was run 200 times, and each time it converged within 360 iterations. These results demonstrate the efficiency and validity of PSO for the optimal HAPF design.

Fig. 4. Harmonic attenuation factors of the HAPF and its PPF alone based on the PSO method: (a) harmonic attenuation factor of the HAPF; (b) harmonic attenuation factor of the PPF alone. (Horizontal axes: harmonic order.)



Fig. 5. Compensated source current and its THD analysis with the HAPF based on the PSO method: (a) compensated source current of phase A with the HAPF; (b) THD analysis of the compensated source current.






Fig. 6. Convergence characteristics of PSO for the HAPF design (fitness value versus iteration number).



Fig. 7. Simulink model of the HAPF with and without PSO, for the balanced and unbalanced load cases (three-phase source, two-winding transformer, current measurement, filter branches, nonlinear loads, and scopes).




Fig. 8. Waveform of the balanced load case for the HAPF, conventional method (current versus time).



Fig. 9. Waveform of the unbalanced load case for the HAPF, conventional method (current versus time).



Fig. 10. Waveform of the balanced load case for the HAPF, PSO method (current versus time).



Fig. 11. Waveform of the unbalanced load case for the HAPF, PSO method (current versus time).



Table VI: Results with balanced load

Scheme   With PSO                                 Without PSO
         % THD    P.F      Reactive power (VAR)   % THD    P.F     Reactive power (VAR)
HAPF     0.47     0.9929   6.665                  26.54    0.599   2.887




Table VII: Results with unbalanced load

Scheme   With PSO                                 Without PSO
         % THD    P.F      Reactive power (VAR)   % THD    P.F     Reactive power (VAR)
HAPF     0.49     0.9933   6.663                  33.68    0.764   -8.0257



IV. Conclusion

In this paper, the application of PSO to hybrid active power filter design has been presented. The proposed control technique is found satisfactory for mitigating harmonics in the utility current under both balanced and unbalanced loading conditions; thus, the resulting total current drawn from the ac mains is sinusoidal. The proposed design of the shunt active filter improves the overall control system performance over other conventional controllers. The validity of the presented controllers was proved by simulation of a three-phase four-wire test system under balanced and unbalanced loading conditions. The proposed hybrid shunt active filter compensates for balanced and unbalanced nonlinear load currents, adapts itself to variations in the nonlinear load currents, and corrects the supply-side power factor to near unity. The proposed APF topology keeps the THD of the source current within the limits of the IEEE-519 standard. It has also been observed that the reactive power compensation has improved, leading to power factor improvement with the PSO technique.

References 

[1] V. E. Wagner, "Effects of harmonics on equipment," IEEE Trans. Power Del., vol. 8, no. 2, pp. 672-680, Apr. 1993.

[2] J. C. Das, "Passive filters-potentialities and limitations," IEEE Trans. Ind.Appl, vol. 40, no. 1, pp. 232-241, 

Jan.Feb 2004. 

[3] K. K. Shyu, M. J. Yang, Y. M. Chen, and Y. F. Lin, "Model reference adaptive control design for a shunt 

active-power-filter system," IEEE Trans. Ind. Electron., vol. 55, no. 1, pp. 97-106, Jan. 2008. 

[4] L. Asiminoaei, E. Aeloiza, P. N. Enjeti, and F. Blaabjerg, "Shunt activepower-filter topology based on 

parallel interleaved inverters," IEEE Trans. Ind. Electron., vol. 55, no. 3, pp. 1175-1189, Mar. 2008. 

[5] A. Luo, Z. K. Shuai, W. J. Zhu, and Z. J. Shen, "Combined system for harmonic suppression and reactive 

power compensation," IEEE Trans. Ind. Electron., vol. 56, no. 2, pp. 418-428, Feb. 2009. 

[6] B. Singh and V. Verma, "An indirect current control of hybrid power filter for varying loads," IEEE Trans. 

Power Del, vol. 21, no. 1, pp. 178-184, Jan. 2006. 

[7] D. Rivas, L. Moran, J. W. Dixon, and J. R. Espinoza, "Improving passive filter compensation performance 

with active techniques," IEEE Trans. Ind. Electron., vol. 50, no. 1, pp. 161-170, Feb. 2003. 

[8] V. F. Corasaniti, M. B. Barbieri, P. L. Arnera, and M. I. Valla, "Hybrid active filter for reactive and 

harmonics compensation in a distribution network," IEEE Trans. Ind. Electron., vol. 56, no. 3, pp. 670-677, 

Mar. 2009. 

[9] K. P. Lin, M. H. Lin, and T. P. Lin, "An advanced computer code for single-tuned harmonic filter design," 

IEEE Trans. Ind. Appl, vol. 34, no. 4, pp. 640-643, Jul/Aug. 1998. 

[10] C. J. Chou, C. W. Liu, J. Y. Lee, and K. D. Lee, "Optimal planning oflarge passive-harmonic-filter set at 

high voltage level," IEEE Trans. PowerSyst., vol. 15, no. 1, pp. 433-441, Feb. 2000. 

[11] Y. M. Chen, "Passive filter design using genetic algorithms," IEEE Trans. Ind. Electron., vol. 50, no. 1, pp. 202-207, Feb. 2003.

[12] Z. S. Guang, W. Y. Ping, and J. L. Cheng, "Adaptive genetic algorithm based optimal design approach for 

passive power filters," Proc. Chin. Soc. Elect. Eng., vol. 24, no. 7, pp. 173-176, Jul. 2004. 

[13] Y. P. Chang and C. J. Wu, "Optimal multiobjective planning of large scale passive harmonic filters using 

hybrid differential evolution method considering parameter and loading uncertainty," IEEE Trans. Power Del, 

vol. 20, no. 1, pp. 408-416, Jan. 2005. 

[14] B. Duro, V. S. Ramsden, and P. Muttik, "Minimization of active filter rating in high power hybrid filter 

system," in Proc. IEEE Int. Conf. Power Electron. Drive Syst., Hong Kong, Jul. 1999, pp. 1043-1048. 


[15] C. J. Ling, J. X. Jian, and Z. D. Qi, "Multi-object optimization of hybrid active power filter based on genetic algorithm," J. Tsinghua Univ. Sci. Technol., vol. 46, no. 1, pp. 5-8, 2006.




[16] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. Neural Netw., Perth, 

Australia, Nov./Dec. 1995, vol. 4, pp. 1942-1948. 

[17] Y. D. Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, "Particle swarm 

optimization: Basic concepts, variants and applications in power systems," IEEE Trans. Evol. Comput., vol. 12, 

no. 2, pp. 171-195, Apr. 2008. 

[18] L. S. Coelho and B. M. Herrera, "Fuzzy identification based on a chaotic particle swarm optimization 

approach applied to a nonlinear Yo-yo motion system," IEEE Trans. Ind. Electron., vol. 54, no. 6, pp. 3234- 

3245, Dec. 2007. 

[19] S. Dutta and S. P. Singh, "Optimal rescheduling of generators for congestion management based on particle swarm optimization," IEEE Trans. Power Syst., vol. 23, no. 4, pp. 1560-1569, Nov. 2008.

Authors 




B. Suresh Kumar was born in Koppel, Parkas (Dt), India. He received his B.E. from Bangalore University in 1999 and his M.Tech. from JNT University, Hyderabad, India, in 2003, both in Electrical and Electronics. He is at the finishing stage of his PhD study at JNT University, Hyderabad, India. His employment experience includes lecturer in the Department of Electrical and Electronics Engineering; at present he is working as Assistant Professor in the EEE department, CBIT, Hyderabad, India. He is guiding 6 M.Tech. projects in Power Quality. His special fields of interest are Power Quality, Power Systems, Power Harmonics and Power Dynamics.

K. Ramesh Reddy obtained his B.Tech. from Andhra University in 1985, his M.Tech. from REC, Warangal in 1989, and his Ph.D. from S.V. University, Tirupati, India, in 2004. He worked at GPREC, Kurnool as a Teaching Assistant during 1985-1987, and at KSRMCE, Kadapa as Lecturer and Assistant Professor from 1989 to 2000. During 2000-2003 he worked at LBRCE, Mylavaram as Professor and Head of the EEE department. Since 2003 he has been working as Professor and Head of the EEE department at G. Narayanamma Institute of Technology & Science, Hyderabad, where he is also Dean of PG studies. He has 22 years of teaching experience and has held different positions as Chief Superintendent of Exams, ISTE chapter Secretary, IEEE branch Counselor, Officer in charge of the Library, and Vice-Principal. He has published 22 research papers in conferences and 6 papers in international journals. He authored a textbook entitled "Modeling of Power System Components", published by Galgotia Publishers, New Delhi. He is a reviewer for the international journal IEEE Transactions on Power Delivery and the National Journal of the Institution of Engineers (India), Kolkata. He is also a technical committee member of IASTED, Calgary, Canada, for conducting conferences in different countries. He is a recipient of the Best Engineering Teacher award from ISTE in 2007 and received an Academic Excellence award from GNITS in 2008. At present he is guiding 9 Ph.D. students. His areas of research interest are Power Quality, Power Harmonics and Power Dynamics.




S. Archana was born in Jaggayyapet, Krishna (Dt), India, in 1988. She received her B.Tech. degree in Electrical and Electronics Engineering from ANURAG Engineering College, JNT University, Hyderabad, in 2009. Currently she is pursuing the final year of her M.Tech. degree in Electrical and Electronics Engineering with the specialization of Power Systems and Power Electronics at CBIT, Hyderabad. Her special fields of interest are Power Quality, Power Systems and Power Electronics.







A Survey of Coupling Measurement in Object 

Oriented Systems 

V. S. Bidve 1 and Akhil Khare 2

1 Information Technology Department, M.Tech. (II), BVCOE, Pune, India
2 Assistant Professor, Information Technology Department, BVCOE, Pune, India



Abstract 

Coupling measurement has been a focus of study for many software professionals over the last few years. Object-oriented programming is an efficient programming technique for programmers because of features such as reusability and data abstraction. Coupling is a very important factor in object-oriented programming for software quality measurement and is used as a predictor of software quality attributes such as fault proneness, impact analysis, ripple effects of changes, and changeability. Many researchers have worked on coupling measurement and found various dimensions of coupling. Researchers have also worked on various aspects of coupling, such as static coupling measurement, dynamic coupling measurement, class-level coupling, and object-level coupling. However, there is still no worldwide-accepted standardization in the field of coupling measurement. As a result, it is very difficult to select any existing measure and obtain a clear picture of the state of the art of coupling measurement for object-oriented systems. This paper analyses some terminologies of coupling measurement proposed earlier and discusses the usefulness of each.

KEYWORDS: Coupling, measurement, object-oriented, dynamic, static

I. Introduction 

Object-oriented technology is gaining significant importance in the field of software development. To evaluate and maintain the quality of object-oriented software, there is a need to assess and analyse its design and implementation using appropriate measurement metrics [5]. A quality metric should relate to external quality attributes of a design. External quality attributes include maintainability, reusability, error proneness, and understandability.

Based on observations and empirical studies, coupling has shown a direct impact on software quality [5]. In general, one of the goals of software designers is to keep the coupling in an object-oriented system as low as possible. Classes of the system that are strongly coupled are most likely to be affected by changes and bugs from other classes. Such classes have more architectural importance, and coupling measures help to identify them [2].

It is commonly observed that in object-oriented programming, inheritance and polymorphism are used frequently. Static coupling measurement attributes are not sufficient to measure the coupling due to inheritance and polymorphism.

As a result, we focus on coupling measurement. We discuss various proposed dynamic coupling measurement metrics and their correlation with quality attributes. We compare all measurement aspects and discuss them in order to work towards a uniform and standardized framework for coupling measurement.

The following section outlines the related work on object-oriented coupling metrics. In Section 3 a detailed survey of existing coupling measures is carried out. In Section 4 we provide a comparative study of all the frameworks. Section 5 concludes the paper.




II. Motivation

Object-oriented measurement has become a popular area. A large number of measures have been proposed for object-oriented attributes such as coupling, inheritance, and cohesion. There are also several negative aspects regarding the manner in which the measures have been and are being developed. Coupling is a complex software attribute in object-oriented systems, but our understanding of coupling measurement factors is poor. There is no standardization for expressing coupling measures; many measures are not operationally defined, i.e., there is some ambiguity in their definitions. As a result, it is difficult to understand how different measures relate to one another and what their potential use is. All the above aspects point to the need for a detailed study of coupling measurement in object-oriented systems.

III. Related Work 

In this section we perform a detailed survey of existing coupling measurement frameworks in object- 
oriented systems. 

3.1. Framework by Erik Arisholm [4] 

The framework described by Erik considered the following points regarding coupling measurement.

• Dynamic behavior of software can be precisely inferred from run time information. 

• Static coupling measures may sometimes be inadequate when attempting to explain 
differences in changeability for object oriented design. 

The author derived three dimensions of coupling. 

1. Mapping : object or class 

2. Direction: import or export 

3. Strength: number of dynamic messages, distinct methods, or distinct classes. 
The empirical evaluation of the proposed dynamic coupling measures consists of two parts: the first assesses the fundamental properties of the measures, and the second evaluates whether the dynamic coupling measures can explain the change proneness of a class.

• Erik used the concept of role-models for dynamic coupling measurement.

• Scenario: a specific sequence of interactions between the objects.

• Role: an abstract representation of the functional responsibility of each object in a given scenario.

An object can have many roles because it may participate in many scenarios. The role-model reflects the dynamic coupling between the roles along three orthogonal dimensions: direction, mapping and strength.

• Direction of Coupling (Import and Export coupling): Dynamic import coupling counts the 
messages sent from a role, whereas dynamic export coupling counts the messages 
received. 

• Mapping: Object-level and Class-level Coupling: Object-level coupling quantifies the extent 
to which messages are sent and received between the objects in the system. Dynamic, class- 
level coupling quantifies the extent of method dependencies between the classes 
implementing the methods of the caller object and the receiver object. 

• Strength of Coupling: The strength of coupling quantifies the amount of association between 
the roles. It is of three types. 

1. Number of dynamic messages. Within a run-time session, to count the total number of 
times each message is sent from one role to another to implement a certain functional 
scenario. 

2. Number of distinct method invocations. To count the number of distinct method 
invocations between two roles. 

3. Number of distinct classes. To count the number of distinct classes. 

Dynamic coupling is compared with static coupling, and three important differences are the scope of measurement, dead code, and polymorphism. In all three respects, dynamic coupling is considered more suitable than static coupling. Erik also explored the relationship between dynamic coupling measures and the change proneness of classes and concluded that such changes may be prone to error. A small counting sketch of these measures is given below.
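To make the three strength measures concrete, the following Python sketch counts them from a small, invented run-time message trace; it is an illustration of the idea rather than Erik's tooling.

# Each trace entry is (sender_object, sender_class, receiver_object, receiver_class, method);
# the trace itself is hypothetical.
trace = [
    ("o1", "A", "o2", "B", "pay"),
    ("o1", "A", "o2", "B", "pay"),
    ("o1", "A", "o3", "C", "log"),
    ("o2", "B", "o1", "A", "ack"),
]

def import_coupling(obj, trace):
    sent = [t for t in trace if t[0] == obj]                     # messages sent from obj
    return {
        "dynamic messages": len(sent),                           # total number of sends
        "distinct methods": len({(t[3], t[4]) for t in sent}),   # distinct methods invoked
        "distinct classes": len({t[3] for t in sent}),           # distinct target classes
    }

def export_coupling(obj, trace):
    received = [t for t in trace if t[2] == obj]                 # messages received by obj
    return {
        "dynamic messages": len(received),
        "distinct methods": len({(t[3], t[4]) for t in received}),
        "distinct classes": len({t[1] for t in received}),       # distinct sender classes
    }

print(import_coupling("o1", trace))   # {'dynamic messages': 3, 'distinct methods': 2, 'distinct classes': 2}
print(export_coupling("o1", trace))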




3.2. Framework by R. Harrison, S. Counsell, R. Nithi [6] 

This framework involves following points regarding coupling measurement. 

1. Coupling between Objects (CBO): a count of the number of classes to which a class is coupled. It counts each usage as a separate occurrence of coupling. This includes coupling via inheritance.

2. Number of Associations (NAS): defined as the number of associations of each class, counted from design documents. It counts repeated invocations as a single occurrence of coupling. This also includes coupling from inheritance.

The authors consider that CBO is greater than NAS.

Three hypotheses related to coupling are investigated by the authors: 

1. HI: As inter-class coupling increases, the understandability of a class decreases. This 

hypothesis is rejected by authors. 

2. H2: As inter-class coupling increases, the total number of errors found in a class increases. 

This hypothesis is rejected by authors. 

3. H3: As inter-class coupling increases, the error density of a class increases. This hypothesis is 

supported by authors. 
To investigate these hypotheses the authors studied dependent variables such as

• Software Understandability (SU)

• Number of Known Errors (KE)

• Errors per thousand non-comment source lines (KE/KNCSL)

Coupling due to an object being used as a parameter of a method or as the return type of a method is also considered by the authors.

3.3. Framework by Sherif M. Yacoub, Hany H. Ammar, and Tom Robinson [5] 

The authors have referred to many papers and conclude the following points regarding coupling measurement.

Two design metrics are considered by the authors.

1. Static: can only calculate design-time properties of an application.

2. Dynamic: used to calculate actual run-time properties of an application.

Two types of coupling are considered by the authors.

1. Class-level coupling (CLC): only invocations from one method to another are considered.

2. Object-level coupling (OLC): the invocations from one method to another and the frequency of invocations at run time are both considered.

The authors also consider that there is a correlation between the number of faults and the complexity of a system; therefore, static complexity is used to assess the quality of software. To measure dynamic complexity metrics the authors used ROOM design charts. Cyclomatic complexity and operation complexity are calculated from the ROOM charts.

The authors explain export and import object coupling with context, description, formula and impact on design quality attributes; a small computational sketch follows the two definitions below.

1. Export object coupling (EOC): the percentage of the number of messages sent from object A to object B with respect to the total number of messages exchanged during the execution of some scenario.

2. Import object coupling (IOC): the percentage of the number of messages received by object A that were sent by object B with respect to the total number of messages exchanged during the execution of some scenario.
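As an illustration (not the authors' implementation), EOC and IOC can be computed from a scenario trace of sender/receiver object pairs as follows; the trace below is invented.

# Hypothetical scenario trace: each entry is (sender_object, receiver_object).
trace = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "A"), ("C", "A"), ("A", "B")]

def eoc(src, dst, trace):
    # messages sent from src to dst as a percentage of all messages in the scenario
    return 100.0 * sum(1 for s, r in trace if (s, r) == (src, dst)) / len(trace)

def ioc(dst, src, trace):
    # messages received by dst that were sent by src, as a percentage of all messages
    return 100.0 * sum(1 for s, r in trace if (s, r) == (src, dst)) / len(trace)

print(f"EOC(A->B) = {eoc('A', 'B', trace):.0f}%")   # 50%
print(f"IOC(B<-A) = {ioc('B', 'A', trace):.0f}%")   # 50%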

3.4. Framework by Erik Arisholm, Lionel C. Briand, and Audun F0yen [1] 

The authors described many significant dynamic coupling measures and highlighted the ways in which they differ from static measures. The authors collected the measures using UML diagrams and accounted precisely for inheritance, polymorphism and dynamic binding.

Classification of dynamic coupling measures:

1. Entity of measurement: the entity of measurement may be a class or an object.

2. Granularity: the granularity can be class level or object level.

3. Scope: which objects and classes are to be accounted for in the measurement.

The authors captured the following situations for import and export coupling.



17T 



Vol. 2, Issue 1, pp. 43-50 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

1. Dynamic messages: Total number of distinct messages sent from one object to other objects 
and vice versa, within the scope considered. 

2. Distinct method invocations: Number of distinct methods invoked by each method in each 
object. 

3. Distinct classes: Number of distinct classes that a method in a given object uses. 
The measures described by the authors are summarized in Table 1.

Table 1. Dynamic coupling measures classified by direction, entity, and strength.

Direction         Entity   Strength
Import coupling   Class    Dynamic messages, Distinct methods, Distinct classes
                  Object   Dynamic messages, Distinct methods, Distinct classes
Export coupling   Class    Dynamic messages, Distinct methods, Distinct classes
                  Object   Dynamic messages, Distinct methods, Distinct classes



Using the above measures, the authors described much more regarding polymorphism and dynamic binding than the other frameworks do, and concluded that coupling is one of the factors that affect change proneness.

3.5. Framework by Lionel C. Briand, John W. Daly, and Jurgen Wust [3] 

The authors identified six criteria of dynamic coupling measures. 

1. The type of connection: what items (attribute, method or class) constitute coupling.

2. The locus of impact: whether to count import or export coupling.

3. Granularity of the measure: the level of detail at which information is gathered, i.e., what components are to be measured and how exactly the connections are counted.

4. Stability of server: two categories of class stability are defined: unstable classes, which are subject to modification or development (user defined), and stable classes, which are not subject to change (library classes).

5. Direct or indirect coupling: whether to count direct or indirect coupling. For example, if a method m1 invokes a method m2, which in turn invokes a method m3, we can say that m1 indirectly invokes m3; methods m1 and m3 are indirectly connected.

6. Inheritance: inheritance-based vs. non-inheritance-based coupling, how to account for polymorphism, and how to assign attributes and methods to classes.

IV. Comparison and Discussion of Existing Framework 

A comparison shows that there are differences in the manner in which coupling is addressed. These differences arise from the different points of focus of the different authors. It is observed that there is no uniformity in the measurement. The significant differences are discussed in the following subsections.

4.1 Type of coupling

In most of the frameworks the entity of measurement is a class or an object, but the mechanisms that constitute coupling differ. Erik uses the concept of the role-model to constitute dynamic coupling. Harrison considers coupling due to any means between classes, including parameter passing to a method and the return type of a method. Sherif and his team consider an invocation from one method to another as coupling. Lionel and his team consider connections due to attributes, classes, and

methods as coupling. There are differences in the mechanisms that constitute coupling in each framework.

4.2 Strength of coupling 

The strength of coupling depends on the type and frequency of connection between two classes. Different types of coupling have different strengths. Erik counts the strength in terms of the number of dynamic messages, the number of distinct method invocations, and the number of distinct classes involved in the coupling. Harrison counts it in terms of the number of classes to which a class is coupled and counts each invocation separately. Sherif and his team count strength in terms of method invocations and the frequency of invocation. Lionel and his team consider granularity instead of strength. The measure for strength of coupling is therefore not uniform; it varies from author to author.

4.3 Direction of coupling 

The framework by Erik distinguishes import and export coupling: import coupling counts messages sent from a role, whereas export coupling counts the messages received. Harrison has not discussed the direction of coupling. Sherif and his team have explained import and export coupling with respect to the total number of messages exchanged during a scenario. Lionel and his team explained it as a locus of impact, in which import coupling is analyzed in the client role and export coupling in the server role. The definition of import and export coupling is thus also ambiguous; there is a need to clearly define the concepts of client and server classes.

4.4 Direct and indirect coupling 

Only Lionel and his team have discussed the concept of direct and indirect coupling. The observation is that many of the stated measures use direct coupling, but some measures also use indirect coupling. Whether to consider direct or indirect measures is again a matter of discussion. Many authors have not defined direct and indirect coupling; there is a need to clearly define these terms and to classify the measures under them.

4.5 Stability of server class 

This point is unique to the framework by Lionel and his team. Using a stable class is better than using an unstable class, because modifications which could ripple through the system are less likely to occur. The remaining frameworks have not discussed this point, although it is an important one; why the stability of the server class matters is also not discussed by many authors.

4.6 Inheritance 

Inheritance is a very important aspect of dynamic coupling, and it is observed that inheritance-based coupling needs to be considered in measurement. Erik has considered polymorphism as part of dynamic coupling but has not discussed inheritance. Harrison has considered coupling due to inheritance but has not given any measures for inheritance-based versus non-inheritance-based coupling. Sherif has used ROOM charts, which show coupling due to inheritance, but it is not discussed explicitly. Erik and his team accounted for inheritance, polymorphism and dynamic binding using various levels of granularity. Lionel and his team have differentiated various measures under the categories of inheritance-based and non-inheritance-based coupling. There is no clear picture of how to use inheritance in coupling; every author has a different idea regarding inheritance for coupling measurement.

4.7 Granularity 

The granularity of the measure is the level of detail at which information is gathered. This is also an important point, but it is not discussed by all the authors. Erik and Lionel have discussed the point, but each gives a different explanation of it. Very few authors have discussed this point, and there is no clear understanding of granularity when multilevel inheritance is considered.

V. Conclusions

In this paper, we have studied five frameworks of dynamic coupling measurement for object-oriented systems. The motivation is to point out the lack of standardization and uniformity in dynamic coupling measurement. We have compared all five frameworks with respect to seven aspects in total. It is found that the frameworks differ in the definitions of the measures, the depth and scope of the measures, and the points included for coupling measurement. Many measures are ambiguous; for example, for the aspect "type of coupling", the cases that constitute coupling are not clearly defined. The same is found for inheritance, strength of coupling and the other aspects of dynamic coupling measurement. Finally, we come to the conclusion with the following points.

• There is a need for standardization in the field of dynamic coupling measurement.

• A clear definition of every aspect of a measurement is needed.

• The scope of measurement needs to be defined for each measure.

• Every measure must be empirically supported.

The problems we faced in the study of the various frameworks emerged as ideas for the design of a new framework model for dynamic coupling measurement.

References 

[I] Erik Arisholm, Lionel C. Briand, and Audun F0yen, "Dynamic Coupling Measurement for Object- 
Oriented Software," IEEE Transactions on Software Engineering, 30(8), 2004. 

[2] Denys Poshyvanyk, Andrian Marcus, "The Conceptual Coupling Metrics for Object-Oriented Systems," 

ICSM '06 Proceedings of the 22nd IEEE International Conference on Software Maintenance, 2006. 
[3] Lionel C. Briand, John W. Daly, and Jurgen Wust, "A unified framework for coupling measurement in 

object-oriented systems," IEEE Transactions on Software Engineering, 25(1), 91-121, 2002. 
[4] Erik Arisholm, "Dynamic Coupling Measures for Object-Oriented Software," IEEE Symposium on 

Software Metrics in Proc.8, 33-42, 2002. 
[5] Sherif M. Yacoub, Hany H. Ammar, and Tom Robinson, "Dynamic Metrics for Object Oriented 

Designs," Software Metrics Symposium, Proceedings. 6, 50-61, 2002. 
[6] R. Harrison, S. Counsell, R. Nithi, "Dynamic Metrics for Object Oriented Designs," Software Metrics 

Symposium, Proceedings. 5, 150-157, 2002. 
[7] S.R. Chidamber, C.F. Kemerer, "Towards a Metrics Suite for Object Oriented design", in A. Paepcke, 

(ed.) Proc. Conference on Object-Oriented Programming: Systems, Languages and 

Applications(OOPSLA' '91), October 1991. Published in SIGPLAN Notices, 26 (11), 197-211, 1991. 
[8] S.R. Chidamber, C.F. Kemerer, "A Metrics Suite for Object Oriented Design", IEEE Transactions on 

Software Engineering, 20 (6), 476-493, 1994. 
[9] V. R. Basili, L. C. Briand, and W. L. Melo. A validation of object-oriented design metrics as quality 

indicators. IEEE Transactions on Software Engineering, 22(10):751{761, 1996. 
[10] F. Abreu, M. Goulao, R. Esteves, "Toward the Design Quality Evaluation of Object-Oriented Software 

Systems", 5th International Conference on Software Quality, Austin, Texas, USA, October 1995. 

[II] V. Basili, L. Briand, W. Melo, "Measuring the Impact of Reuse on Quality and Productivity in Object- 
Oriented systems", Technical Report, University of Maryland, Department of Computer Science, CSTR- 
3395, January 1995. 

[12] V.R. Basili, L.C. Briand, W.L. Melo, "A Validation of Object-Oriented Design Metrics as Quality 

Indicators", IEEE Transactions on Software Engineering, 22 (10), 751-761, 1996. 
[13] L. Briand, P. Devanbu, W. Melo, "An Investigation into Coupling Measures for C++", Technical 

ReportlSERN 96-08, IEEE ICSE '97, Boston, USA, (to be published) May 1997. 
[14] L. Briand, K. El Emam, S. Morasca, "Theoretical and Empirical Validation of Software Product 

Measures", Technical Report, Centre de Recherche Informatique de Montreal, 1995. 
[15] L. Briand, S. Morasca, V. Basili, "Measuring and Assessing Maintainability at the End of High-Level 

Design", IEEE Conference on Software Maintenance, Montreal, Canada, September 1993. 
[16] L. Briand, S. Morasca, V. Basili, "Defining and Validating High-Level Design Metrics", Technical 

Report, University of Maryland, CS-TR 3301, 1994. 
[17] L. Briand, S. Morasca, V. Basili, "Property-Based Software Engineering Measurement", IEEE 

Transactionsof Software Engineering, 22 (1), 68-86, 1996. 
[18] S.R. Chidamber, C.F. Kemerer, "Towards a Metrics Suite for Object Oriented design", in A. 

Paepcke,(ed.) Proc. Conference on Object-Oriented Programming: Systems, Languages and Applications 

(OOPSLA'91), October 1991. Published in SIGPLAN Notices, 26 (11), 197-211, 1991. 
[19] S.R. Chidamber, C.F. Kemerer, "A Metrics Suite for Object Oriented Design", IEEE Transactions 

onSoftware Engineering, 20 (6), 476-493, 1994. 




[20] N.I. Churcher, M.J. Shepperd, "Comments on 'A Metrics Suite for Object-Oriented Design'", IEEE Transactions on Software Engineering, 21 (3), 263-265, 1995.
[21] N.I. Churcher, M.J. Shepperd, "Towards a Conceptual Framework for Object Oriented Software 

Metrics",Software Engineering Notes, 20 (2), 69-76, 1995. 
[22] P. Coad, E. Yourdon, "Object-Oriented Analysis", Prentice Hall, second edition, 1991. 
[23] P. Coad, E. Yourdon, "Object-Oriented Design", Prentice Hall, first edition, 1991. 
[24] J. Eder, G. Kappel, M. Schrefl, "Coupling and Cohesion in Object-Oriented Systems", Technical Report, University of Klagenfurt, 1994.
[25] N. Fenton, "Software Metrics: A Rigorous Approach", Chapman and Hall, 1991. 
[26] M. Hitz, B. Montazeri, "Measuring Coupling and Cohesion in Object-Oriented systems", in Proc. 

Int. Symposium on Applied Corporate Computing, Monterrey, Mexico, October 1995. 
[27] M. Hitz, B. Montazeri, "Chidamber & Kemerer's Metrics Suite: A Measurement Theory 

Perspective", IEEE Transactions on Software Engineering, 22 (4), 276-270, 1996. 
[28] I. Jacobson, M. Christerson, P. Jonsson, G. Overgaard, "Object-Oriented Software Engineering: A Use 

Case Driven Approach", ACM Press/Addison-Wesley, Reading, MA, 1992. 
[29] E. Arisholm, "Empirical Assessment of Changeability in Object-Oriented Software," PhD Thesis, Dept. 

of Informatics, Univ. of 
[30] Oslo, ISSN 1510-7710, 2001. 
[31] E. Arisholm, "Dynamic Coupling Measures for Object-Oriented Software," Proc. Eighth IEEE Symp. 

Software Metrics (METRICS '02),pp. 33-42, 2002. 
[32] E. Arisholm, D.I.K. Sj0berg, and M. J0rgensen, "Assessing the Changeability of Two Object-Oriented 

Design Alternatives — 
[33] A Controlled Experiment," Empirical Software Eng., vol. 6, no. 3,pp. 231-277, 2001. 
[34] E. Arisholm, L.C. Briand, and A. F0yen, "Dynamic Coupling Measurement for Object-Oriented 

Software," Technical Report 
[35] 2003-05, Simula Research Laboratory, http://www.simula.no/~erika, 2003. 
[36] G. Booch, J. Rumbaugh, and I. Jacobson, The Unified Modeling Language Users Guide. Addison- 

Wesley, 1998. 
[37] L. Bratthall, E. Arisholm, and M. J0rgensen, "Program Understanding Behaviour During Estimation of 

Enhancement Effort on 

Small Java Programs," Proc. Third Int'l Conf. Product Focused Software Process Improvement (PROFES 
2001), 2001. 
[38] L.C. Briand and J. Wuest, "Empirical Studies of Quality Models in Object-Oriented Systems," Advances 

in Computers, vol. 59, pp. 97-166, 2002. 
[39] L.C. Briand and Y. Labiche, "A UML-Based Approach to System Testing," Software and Systems 

Modeling, vol. 1, no. 1, pp. 10-42,2002. 
[40] L.C. Briand, J. Daly, and J. Wust, "A Unified Framework for Cohesion Measurement in Object-Oriented 

Systems," Empirical Software Eng., vol. 3, no. 1, pp. 65-117, 1998. 
[41] L.C. Briand, J.W. Daly, and J. Wust, "A Unified Framework for Coupling Measurement in Object- 
Oriented Systems," IEEE Trans. 
Software Eng., vol. 25, no. 1, pp. 91-121, 1999. 
[42] L.C. Briand, J. Wust, and H. Lounis, "Using Coupling Measurement for Impact Analysis in Object- 
Oriented Systems," Proc. Int'l Conf. Software Maintenance (ICSM '99), pp. 475-482, 1999. 
[43] F. BritoeAbreu, "The MOOD Metrics Set," Proc. ECOOP '95 Workshop Metrics, 1995. 
[44] M. Cartwright and M. Shepperd, "An Empirical Investigation of an Object-Oriented Software System," 

IEEE Trans. Software 
[45] Systems, vol. 26, no. 8, pp. 786-796, 2000. 

[46] M.A. Chaumun, H. Kabaili, R.K. Keller, F. Lustman, and G. Saint-Denis, "Design Properties and Object- 
Oriented Software Changeability, "Proc. Fourth Euro micro Working Conf. Software Maintenance and 

Reeng., pp. 45-54, 2000. 
[47] S.R. Chidamber and C.F. Kemerer, "A Metrics Suite for Object-Oriented Design," IEEE Trans. Software 

Eng., vol. 20, no. 6, pp. 476-493, 1994. 
[48] S.R. Chidamber, D.P. Darcy, and C.F. Kemerer, "Managerial Use of Metrics for Object-Oriented 

Software: An Exploratory Analysis," IEEE Trans. Software Eng., vol. 24, no. 8, pp. 629-637, 1998. 
[49] I.S. Deligiannis, M. Shepperd, S. Webster, and M. Roumeliotis, "A Review of Experimental 

Investigations into Object-Oriented 
[50] Technology," Empirical Software Eng., vol. 7, no. 3, pp. 193-232,2002. 
[51] G. Dunteman, Principal Component Analysis. SAGE, 1989. 
[52] K. El Emam, S. Benlarbi, N. Goel, and S.N. Rai, "The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics,"
[53] IEEE Trans. Software Eng., vol. 27, no. 7, pp. 630-650, 2001.

[54] R.J. Freund and W.J. Wilson, Regression Analysis: Statistical Modeling of a Response Variable. 

Academic Press, 1998. 
[55] Jakarta, "The Apache Jakarta Project," http://jakarta.apache.org/,2003. 
[56] Java.net, "Java Compiler Compiler (JavaCC)," https://javacc.dev.java.net/, 2003. 
[57] H. Kabaili, R. Keller, and F. Lustman, "Cohesion as Changeability Indicator in Object-Oriented 

Systems," Proc. IEEE Conf. Software Maintenance and Reeng. (CSRM), pp. 39-46, 2001. 
[58] A. Lakhotia and J.-C. Deprez, "Restructuring Functions with Low Cohesion," Proc. IEEE Working Conf. 

Reverse Eng. (WCRE), pp. 36-46, 1999. 
[59] G. Myers, Software Reliability: Principles and Practices. Wiley, 1976. 
[60] H. Sneed and A. Merey, "Automated Software Quality Assurance,"IEEE Trans. Software Eng., vol. 11, 

no. 9, pp. 909-916, 1985. 
[61] M.M.T. Thwin and T.-S. Quah, "Application of Neural Networks for Software Quality Prediction Using 

Object-Oriented Metrics, "Proc. IEEE Int'l Conf. Software Maintenance (ICSM), 2003. 

Authors 

V. S. Bidve: received his bachelor's degree in Computer Engineering from the University of Aurangabad and is pursuing the M.Tech. degree at BVUCOE, Pune. He has nine years of teaching experience in Pune and Mumbai. He is now working as a lecturer in the Department of Information Technology, SKNCOE, Pune.

A. R. Khare: completed his bachelor's degree in Computer Engineering from Bhopal University, India, and his M.Tech. from the same university. He is pursuing a Ph.D. in the field of computer engineering. He is working as an Assistant Professor in the Information Technology Department of BVCOE, Pune, and has more than 10 years of teaching experience. He works as the PG coordinator for the IT department and guides a number of students in their project work and various academic activities.



The Computer Assisted Education and its Effects 

on the Academic Success of Students in the 

Lighting Technique and Indoor Installation 

Project Course 

Ismail Kayri 1, Muhsin Tunay Gencoglu 2 and Murat Kayri 3

1 Faculty of Technical Education, Electrical Department, Batman University, Batman, Turkey
2 Faculty of Engineering, Electrical Department, Firat University, Elazig, Turkey
3 Department of Computer & Instructional Technology, Yuzuncu Yil University, Van, Turkey



Abstract 

The purpose of this study is to investigate the effects on students' academic success of a visual didactic material developed in a computing environment, which is believed to enlighten students during the process of completing the project and the explanation of the "Lighting Technique and Internal Installation Project" course. This course is taught in the curriculum of the electrical and electronics departments of institutions providing formal and non-formal education, such as Technical Education and Engineering Faculties, Vocational Colleges, Public Education Centers, and Industrial Vocational High Schools, which are the backbone of vocational and technical education. In addition, the use of the educational software developed for the mentioned course as a didactic material in this area is determined as a subsequent goal. To test the effectiveness of the developed educational software in the learning process, two measurement tools for the cognitive dimension were developed with expert input, and the effectiveness is examined according to the findings of these measurement tools.

KEYWORDS: Computer-assisted teaching, Computer-assisted education, Electrical Installation Project, Visual Education

I. Introduction 

Vocational and technical education can be defined as "the whole of the teaching, supervision, management, coordination, organization, development, research and planning activities of any kind of vocational and technical training in the industry, agriculture and service sectors within the integrity of the national education" [1].

In developed western countries, vocational training is defined as a vocational branch that aims at gaining a career through practical activities or manual skills [2]. The purpose of vocational-technical education is, in general, to educate and train individuals as a qualified work force for employment in the industry, trade and service sectors, and to give the basic education required for the transition to higher education institutions as the continuation of their vocation [3].

An examination of the curricula of developed countries, and of European countries in particular, shows a linear proportionality between their level of development and the importance they give to vocational and technical training.

The self-sufficiency of countries in producing goods and services, their sound economic relations with other countries, and their ability to use current technology and produce new technologies are related to the importance they give to vocational and technical



training in their development plans. Therefore, instructors strive to present the content more effectively when planning the learning processes.

An effective education can be achieved by discussing and eliminating the time, space and economic concerns which limit the instructors as well as the students. This discussion encloses a wide area that includes the systematic and reflective process of transferring the teaching design onto teaching and learning principles, didactic materials, didactic activities, information sources and evaluation plans [4].

While these concepts are discussed, the learning of the student by internalizing the given information comes to the fore in the "teacher, student and environment triangle", which has an important role in the teaching process. Considering that each learner has different psychological, social and cognitive development characteristics, it remains important to convert teacher-centered education into student-centered education and, in this context, to prepare the teaching design by taking the learning style of the learner into account [5].

According to research [6,7], it has been determined that taking the learner to the center of the learning process, and education processes established by considering learning methods, improve creativity, motivation and academic achievement. It is put forth that in an education designed by including learning methods in the learning process, the learner can use the information more effectively by remembering it for a longer time [8]. When, on the other hand, studies on learning styles are examined [9,10,11], it is observed that as a result of teaching designs that are configured with the participation of the learner in the learning process and that are supported with technologies, academic success and performance increase and a more positive attitude to learning is developed. The related findings of the research put forth that the teacher needs to prepare the learning environment, method, technique and didactic materials according to the properties of the student and the lesson [5]. Crowded classrooms, unmet education demands, inadequate facilities and equipment, unbalanced distribution in terms of equality of opportunity, unmet individual needs, losses in students' success and similar problems are considered the crucial characteristic problems of traditional education systems. Turkish vocational and technical education has remained behind the developed countries in terms of the number of students per teaching staff member. The number of students per teaching staff member ranges in the four-year vocational and technical training faculties from 22.7 to 33.6, in the Vocational Schools of Higher Education it is 60.8, and in secondary schools it is 31.7. In developed countries these numbers mainly range from 5 to 10. The most important problems of the vocational and technical education institutions, in addition to the problems around the number of students and the lack of teaching staff, are infrastructure, technological equipment and the deficit of laboratories and workshops [12].

The sheer number of students per teaching staff negatively affects the learning process in classes where computers are used as a tool. The elimination of these problems will be possible by bringing the individual to the fore during the education process, with a student-centred education, and by designing, applying, evaluating and developing the techniques to be applied during the process with a contemporary understanding and in accordance with the needs of the time [13].

The daily growing complexity of education, the rise in the amount of information to be learned, and the need for qualified and modern education require the use of computers as a tool in education. The use of technology in education ensures that education is carried out in accordance with the needs of the era and that the highest appropriate yield is obtained from it [14]. The computer, which is one of the technological capabilities and a basic element of culture in our century, has become a tool whose use is rapidly spreading [15].

Computer-assisted education is the set of applications in which the computer is used as a tool to offer course contents directly, to repeat knowledge gained in other ways, to solve problems, to practise and for similar activities [15]. The education technology that will meet the specified requirements includes the targeted application of human power and other resources so that learning can be converted into easy, concrete, rich, meaningful, motivational, encouraging, efficient and qualified activities by optimizing the teaching-learning processes [16].

This study aims to present the above-mentioned advantages of educational technology in a particular course. The research model has an experimental-control group pattern, and the work group consists of the fourth-year students who follow the Lighting Technique and Internal Installation Project course of the Electricity Teacher Training Department of the Faculty of Technical




Education, which is located in the Southeast Anatolia Region. The experimental and control groups were obtained by randomly splitting 30 students of the same branch into equal groups.
The nature of the executed course includes a project-based process for the experimental group as well as for the control group. After the theoretical dimension of the course had been explained to the students over five weeks in the same environment and in equal time frames, the students in the experimental group received a special-purpose educational software CD developed by the researchers. The students of the control group were free to take advantage of any kind of material, including the internet, after the theoretical instruction. Both groups of students were asked to design the strong and weak current installation on their own architectural projects, which is the common method throughout the country for training students in this course. There was no further intervention in the experimental and control groups.

1.1 Educational Technology 

Educational technology is the functional structuring of the learning or education processes by bringing knowledge and skills to bear on education in general and learning in particular.

According to another definition, educational technology is a branch of science that studies the ways of carrying individuals to the special purposes of education by wisely and skilfully using the accessible human power and other resources related to education, based on relevant data about communication and learning from the behavioural sciences, with appropriate methods and techniques, and by evaluating the results.

In its current sense, educational technology is a technology related to the education sciences that develops, applies, evaluates and manages appropriate designs by bringing together all the relevant elements (human power, knowledge, methods and techniques, tools and necessary arrangements) for a systematic and scientific analysis of all aspects of the phenomenon of human learning and for providing solutions. In other words, educational technology is an original discipline concerned with learning and teaching processes [17].

The reflections of the effective use of education technologies on the learning process can be listed as follows. Technology:

• improves the quality of learning;
• reduces the time students and teachers spend to reach the goal;
• improves the effectiveness of teaching;
• reduces the cost of education without reducing its quality;
• involves the student in the environment [18].

Some of the facilities that modern educational technology provides for educational applications can be listed as follows:

• Providing freedom and initiative,
• Enlarging options,
• Saving the individual from the monopoly of the group,
• Providing the student with the opportunity for individual and independent learning,
• Providing first-hand information,
• Helping to solve the inequality of opportunities,
• Providing quality in education,
• Providing standardization, diversity and flexibility in education programs,
• Increasing the speed of learning,
• Adding both individuation and popularization properties to educational services,
• Providing the opportunity to increase the efficiency and effectiveness of the learning-teaching processes [17].

1.2 Computer-Assisted Teaching 

Thanks to features such as quickly processing, storing and delivering information, the computer has become the most sought-after tool in education. The use of such an intensive technology has indeed been regarded as strange because of the human labour involved in measuring and evaluating success, in




student guidance and counselling work, and in running educational services that have become complicated with the growing number of students, as research on education shows. Nevertheless, the intensive use of technological resources in education has gained wide acceptance and its applications have increased; the use in education of computers, which are employed at every stage of life, can therefore not be opposed. Computers fill an important gap in several topics where the tools and materials used in traditional education are insufficient, and many things that are difficult or impossible to achieve in traditional education can be accomplished with computers.

In computer-assisted education, the computer can be used in a supportive role, together with the teacher, individually, or with other methods and techniques. Computer-assisted education is therefore seen as the most promising of the methods in education services. It has been indicated that using a virtual laboratory in engineering education has a positive impact on factors such as the involvement of students in class, self-confidence and motivation [19], provides students with an individual learning environment [20], gives students the opportunity to gain a wide variety of experiences with different approaches, and helps them learn in an interactive and meaningful way [21]. Computer-assisted education, as an educational environment in which the teacher prepares the learning environment, recognizes the students' abilities and accomplishes activities such as repetition, practice, guidance and personalization according to the students' capabilities, requires the use of the computer in different places, times and ways according to the learning objectives, which are determined in agreement with the structure of the teaching matter [22].

The following findings have been obtained in international research on the use of the computer in education:

1. The computer helps students achieve their academic goals.

2. Compared to traditional education, computer programs save between 20% and 40% of learning time.

3. Using the computer in education has a positive impact on students' success and increases motivation.

4. The effectiveness of the educational software plays an important role in the success of computer-assisted education [23].

1.2.1 Benefits of Computer-Assisted Teaching 

The benefits of computer-assisted teaching can be listed as follows:

• The materials that were not understood can be repeated several times by the students.
• There is no dependency on someone else, and each student learns at his or her own pace.
• During computer-assisted training the student must participate actively in class.
• Errors and shortcomings are discovered and corrected while learning.
• The student always has a chance to answer again.
• It keeps the students' interest in the class alive.
• It gives the teacher the opportunity to deal more closely with the students by saving him or her from work such as repetition and correction.
• Dangerous and expensive studies can easily be simulated with computer-assisted teaching.
• The students can learn more quickly and in a systematic way.
• The students' attention can be kept very high through the drawings, colours, figures and images they see while following the class.
• Learning is divided into small units so that success can be achieved step by step by testing it on these units [22].

In addition, according to research [24], the most important benefit of the use of computers in education is the facilitation of access to information.

II. Method / Procedure 

2.1 The Goal and Importance of Research 

The current teaching in our education institutions does not go beyond a teacher-centred activity aimed at memorizing rules using the blackboard and the textbook. In




addition to the didactic methods applied in the existing system, there is a need to take advantage of computer-assisted training applications such as demonstration, simulation, practice and exercise, dialogue, problem solving, didactic games, information stores, creative activities and testing [25]. The decline in computer prices and the need for computers in the communication and information age have made the computer indispensable, especially for the new generation. The disadvantages faced in education institutions, such as the insufficiency of time, place, technology and teaching staff, have created the need for learning activities outside school. At this point the spread of computers, which are present in almost every household, has played an important role. University students, high-school and even elementary-school students use computers to reinforce the material treated at school.

While the effects of computer-assisted education on the motivation of students and their academic success have been examined in many branches of science, no research has been found in the electrical and electronics literature concerning the design of a building's electrical installation project. This research therefore examines the effects on students' academic success of software developed to support the traditional education method in the Electrical Installation Project course, which is an important part of the electrical and electronics branch.

2.2 Work Group 

The work group of this research is formed by the fourth-year students who follow the Lighting Technique and Indoor Installation Project course in the Electricity Teacher Training department of the Faculty of Technical Education of Batman University. The experimental and control groups were obtained by randomly splitting 30 students studying in the same branch into two groups of 15 students. No significant difference was identified between the grade point averages of the groups in earlier periods. All members of the groups are male.

The members of groups A and B received, with a projection device, at the same time periods and from the same instructor, the theoretical knowledge required to design the strong and low current installation of a building in an AutoCAD software environment. Both groups received resources containing the theoretical information about the course. The members of group B also received an audio-visual education CD developed by the researchers using various programs in a computer environment. This education CD contains an audio-visual description, in the AutoCAD drawing environment, of how to carry out all the required processes, from the creation of the symbols to be used in the electrical installation project of a building, through the drawing of the strong and low current installations of the same building, to the required calculations. The CD also includes the theoretical materials, sorted systematically so that students can access them easily when required. After the groups of students had been equipped with the theoretical knowledge, they were asked to design the strong and low current installation on an architectural plan provided by the instructor or by the students.

2.3 Private Educational Software 

High-tech products such as computers, television and the internet are used in education to support training. Many educational institutions choose to develop new alternatives to take advantage of the benefits provided by new technologies and to improve the usability of their current education programs. Thanks to these alternatives, it is intended to give more people an education outside the traditional approach by using these new methods [13]. The private didactic software prepared in a computer environment contains audio-visual media that describe, on a sample architectural plan, the processes required for the electrical installation project of a building. With this software the students can repeat the issues that would otherwise remain limited to the commentaries of the instructor in the classroom. Again, thanks to the software on the CD, they can see the path to follow to carry out the required calculations and the drawing of the electrical project.
AutoCAD was preferred for the drawing of the project. Although the students master the principles of project drawing, some students had problems with the use of the program and brought these problems to the instructor during class as well as outside the classroom, which leads to a loss of time in the educational process. The explanations in the private didactic software are mostly given within the AutoCAD program. Thus the students can watch the processes as often as they wish, so it is assumed that they will ask the instructors fewer questions. The operations required to



"iFf 



Vol. 2, Issue 1, pp. 51-61 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

make the necessary calculations and to draw the project are systematically set out so that the problems faced by the students can be resolved quickly.

The prepared private didactic software also contains the regulations for the drawing of electrical installation projects, so that the students have an offline source on the legislation. In short, students who will design the electrical installation of a building can reach all the operations required to execute and complete the project, independently of space and time, thanks to this private educational software.

The videos, which constitute the most important part of the private didactic software, were prepared with the Screen Recorder module of the Blueberry Flashback Professional software. The video commentaries created with the same module were technically optimized in terms of quality and performance. The audio files mounted on some videos were recorded with the Polderbits Sound Recorder and Editor software, and the essential quality optimization was carried out. The image files used in the software were optimized with the Adobe Photoshop editor. All of these media were turned into an autorun CD with a suitable menu structure and duplicated using the Camtasia Studio software.

Before the prepared private didactic software was applied to the experimental group, it was tested by the researchers in a computer laboratory to detect and eliminate unforeseen technical problems, by obtaining the views on the software of 30 final-year students studying Computer and Instructional Technologies at the Education Faculty of Yuzuncu Yil University. The percentage and frequency results concerning the obtained views are presented in Table 1.

Table 1. View of students about the Private Educational Software (N=30; f = frequency, % = percentage)

CRITERIA | Very bad f(%) | Bad f(%) | Mediocre f(%) | Good f(%) | Very good f(%)
1. The level of attractiveness of video and animations | - | - | 9 (30) | 15 (50) | 6 (20)
2. Easy usage of interfaces | - | - | 4 (13.3) | 16 (53.3) | 10 (33.3)
3. Understandability of the content | - | - | 9 (30) | 14 (46.6) | 7 (23.3)
4. Systematic transitions between topics | - | - | 7 (23.3) | 18 (60) | 5 (16.6)
5. Color matching between text and graphics | - | - | 5 (16.6) | 15 (50) | 10 (33.3)
6. Functionality of the transition buttons | - | - | 7 (23.3) | 20 (66.6) | 3 (10)
7. Density of the graphics display | - | - | 6 (20) | 15 (50) | 9 (30)
8. Readability of the screen | - | - | 6 (20) | 19 (63.3) | 5 (16.6)
9. Flexibility of the video playback buttons | - | - | 4 (13.3) | 11 (36.6) | 15 (50)
10. The sound quality and adjustability | - | - | 16 (53.3) | 14 (46.6) | -
11. Loading and execution speeds of videos | - | - | 15 (50) | 12 (40) | 3 (10)
12. Suitability of the font and point values of the characters | - | - | 7 (23.3) | 13 (43.3) | 10 (33.3)



Analysing Table 1, we can see that the students' views on items 10 and 11 are concentrated in the middle category, while for the other items they are good or very good.

The reason for the high frequency of the middle choice for the item concerning sound quality and adjustability is that 22050 Hz, 48 kbps, mono had been selected so that the audio files would take little space. These settings were changed to 44100 Hz, 64 kbps, mono, i.e. converted to compact-disc quality. The reason that the students' views on the item concerning the loading and execution speeds of the videos are concentrated in the middle choice is that the graphics intensity was kept at a high level to obtain a clear image.

The different hardware specifications of the computers used by the students produced this result. Considering the hardware capabilities of the computers belonging to the members of the group that would receive the CD, the resolution, frequency and colour quality were reduced from 800x600, 70 Hz, 32 bit to 800x600, 60 Hz, 16 bit so that the negative view expressed in item 11 would disappear.



"ifff 



Vol. 2, Issue 1, pp. 51-61 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

2.3.1 Operation principle of the Private Educational Software 

The private didactic software developed by the researchers using a variety of media development tools was turned into an autorun CD. After the CD is inserted into the computer, the main page of the software appears on the screen (Figure 1).



Electrical Installation Project Drawing Educational Softwan 



The creation of the list of symbol 
Drawing of the high current installation 
Drawing of the low current installation 
Drawing of the column charts 

High current column chart 

Automatic ladder lighting column chart 

Diafon installation column chart 

Television and telephone Installation column 
. 'reparation of the upload schedule table 
Calculation Of the voltage drop 



Calculation of the grounding 

References 

Regulations about the electrical installations 

idarts 

it r acts 

cifications 
ghting methods and calculations 
laterials used in the electrical installation 
Help 



Figure 1. Main page of the Electrical Installation Project Drawing Educational Software 

The main page of the software consists of three parts:

1. Visual and audio video commentary: this part contains the audio-visual videos on the architectural project prepared by the researchers. The students can get support on the relevant issues in the process of creating the project.

2. Referenced documents: this section contains the offline sources that include the principles of Electrical Installation Project drawing developed by the researchers. It is assumed that the students can reach the desired information faster thanks to this feature.

3. Help: this part contains the help topics that explain the operating principle of the program, for an effective use of the private educational software by the students.



The crucial part of the software is formed by the audio and video material developed by the researchers, which systematically demonstrates, from start to finish, the design of an electrical installation on an architectural plan (Figure 2). Thanks to these videos, students can resolve the problems they face while executing their own projects; in this way, the need to communicate with the instructor about every problem encountered is eliminated.







Figure 2. A screenshot of a video from the Educational software 






III. Findings and Commentary 

After the necessary theoretical knowledge had been transferred by the traditional education system, on equal terms and in equal times, to groups A and B, which constitute the research groups, exercises were made on sample projects with a projection device. The students of groups A and B began their projects after the suitability of their architectural projects had been confirmed by the instructors. Progress in the conduct of the students' projects was assessed, over a period of approximately 30 days, through scales prepared by the researchers (Table 2).
In the scale, which shows that the students in group B are in general more successful, the most remarkable items of success are the following:



The accuracy of drawn symbols to be used in the Project 

Compliance of drawing elements with the regulation places 

Conformity of the socket number and forces with regulation 

Adequacy of management in the drawing layer 

Accuracy of calculation of the voltage drop 

Accuracy of the upload table 

Accuracy of calculation of the grounding 

The correct use of time in the project execution process 

The research shows that the prepared training CD assisted the students in the calculations and drawings required for the project. Item 13 of the scale in particular shows that the students who used the training CD managed their time correctly, while the students in group A could not, because they often felt the need to consult the teaching staff.






Table 2. Findings concerning the execution of the project by the group members (N=15 per group; f = frequency, % = percentage; each cell gives Weak / Passes / Mediocre / Good / Very good as f(%))

CRITERIA | Students group A | Students group B
1. The accuracy of drawn symbols to be used in the project | 1 (6.6) / 3 (20) / 7 (46.6) / 4 (26.6) / - | - / 1 (6.6) / 2 (13.3) / 6 (40) / 6 (40)
2. The accuracy of the line type and thickness used in the project | - / 3 (20) / 8 (53.3) / 3 (20) / 1 (6.6) | - / - / 4 (26.6) / 7 (46.6) / 4 (26.6)
3. Compliance of drawing elements with the regulation places | - / 2 (13.3) / 11 (73.3) / 2 (13.3) / - | - / 1 (6.6) / - / 5 (33.3) / 9 (60)
4. Suitability of the lamp location, number and forces with the regulation | 1 (6.6) / 4 (26.6) / 6 (40) / 4 (26.6) / - | - / - / 4 (26.6) / 5 (33.3) / 6 (40)
5. Conformity of the socket number and forces with the regulation | - / 2 (13.3) / 5 (33.3) / 6 (40) / 2 (13.3) | - / - / 3 (20) / 8 (53.3) / 4 (26.6)
6. Adequacy of management in the drawing layer | - / 4 (26.6) / 7 (46.6) / 3 (20) / 1 (6.6) | - / - / 4 (26.6) / 3 (20) / 8 (53.3)
7. Accuracy of the strong current column chart | - / 5 (33.3) / 3 (20) / 6 (40) / 1 (6.6) | - / 2 (13.3) / 6 (40) / 3 (20) / 4 (26.6)
8. Accuracy of the low current column chart | - / 4 (26.6) / 5 (33.3) / 6 (40) / - | - / 1 (6.6) / 5 (33.3) / 4 (26.6) / 5 (33.3)
9. Accuracy of calculation of the voltage drop | 2 (13.3) / 4 (26.6) / 7 (46.6) / 2 (13.3) / - | - / - / 6 (40) / 4 (26.6) / 5 (33.3)
10. Conformity of scaling | - / 1 (6.6) / 7 (46.6) / 5 (33.3) / 2 (13.3) | - / - / 4 (26.6) / 8 (53.3) / 3 (20)
11. Accuracy of the upload table | - / 5 (33.3) / 5 (33.3) / 5 (33.3) / - | - / - / 7 (46.6) / 3 (20) / 5 (33.3)
12. Accuracy of calculation of the grounding | - / 4 (26.6) / 6 (40) / 3 (20) / 2 (13.3) | - / - / 3 (20) / 5 (33.3) / 7 (46.6)
13. The correct use of time in the project execution process | 5 (33.3) / 8 (53.3) / 2 (13.3) / - / - | - / - / 3 (20) / 6 (40) / 6 (40)



IV. Conclusion 

In recent years, the effects of computer-assisted education on learning have been examined extensively by researchers in different fields. Computer-assisted education is frequently used in modern educational systems because of benefits such as providing persistence in learning, providing a learner-centred learning process, taking learning outside four walls and making it independent of space and time, providing the possibility to practise frequently, and providing quick access to information.

In this experimental study, the effects of computer-assisted education on the success of students taking the Electrical Installation Project drawing course were researched. The results obtained in the light of the research findings are presented in the following items:

According to the results of the scale developed by the researchers, the academic success of the group that had received the visual training CD as a supplement to traditional education was higher than that of the group that had learned only with the traditional education system.

It was observed that, during a process of approximately one month, the students of the group that possessed the private didactic software asked the instructors for less help. This shows that the visual education CD contributed to the development of the individual competences of the students.

According to these results, it can be argued that in project-based courses, targeted and specially developed visual and auditory didactic materials that the students can consult independently of time or place during the project are more effective than other, scattered materials.

There may be some disadvantages besides the benefits that the study revealed. For example, team spirit may weaken because the students work individually. This and similar disadvantages await researchers as the subject of another study.

References 

[1] C. Alkan, H. Dogan & I. Sezgin, (1999) "Principles of Vocational and Technical Education", Gazi 

University Faculty of Communication Press, Ankara. 
[2] H. Ocal, (2008) "Vocational Education and Vocational Guidance", In the light of Science and Intellect 

Journal of Education, Vol. 99, pp. 12-19. 
[3] I. Esme, (2007) "Current Status and Issues about Vocational and Technical Education", T.C. 

YOK International Vocational and Technical Education Conference, Ankara, Turkey. 




[4] P. L. Smith & T. J. Ragan, (1999) "Instructional Design", Second Edition, USA: John Wiley&Sons Inc. 
[5] S. Cengizhan, (2007) "The Effects Of Project Based And Computer Assisted Instructional Designs On 

Those Students' Who Have Depended, Independed And Cooperative Learning Styles, Academic 

Achievement And Learning Retention" Vol. 5, No. 3, pp. 377-401. 
[6] N. Bajraktarevic, W. Hall & P. Fullick, (2003) "Incorporating Learning Styles in Hypermedia Environment: Empirical Evaluation", Accessed: http://wwwis.win.tue.nl/ah2003/proceedings/paper4.pdf, April 2005. 
[7] J. Ingham, R. Meza, P. Miriam & G. Price, (1998) "A Comparison of the Learning Style and Creative Talents of Mexican and American Undergraduate Engineering Students", Accessed: http://fie.engrng.pitt.edu/fie98/papers/1352.pdf, January 2004. 
[8] R. M. Felder, (1996) "Matters of Style", ASEE Prism, Vol. 6, No. 4, pp. 18-23. 
[9] O. Demirbas & H. Demirkan, (2003)"Focus on Architectural Design Process Through Learning Style, 

Design Studies", Vol. 24, No 5, pp. 437-456. 
[10] R. Dunn & S. Griggs, (1996) "Hispanic-American Students and Learning Style", ERIC Identifier: ED393607, Accessed: http://www.ericfacility.net/ericdigests/ed393607.html, January 2004. 
[11] L. A. Kraus, W. M. Reed & G. E. Fitzgerald, (2001) "The Effects Of Learning Style and Hypermedia 

Prior Experience On Behavioral Disorders Knowledge And Time On Task: A Case-Based Hypermedia 

Environment, Computers in Human Behavior", Vol. 17, No. 1, pp. 124-140. 
[12] I. Sahin & T. Findik, (2008) "Vocational and Technical Education in Turkey: Current Situation, problems 

and proposition for solutions", The Turkish Journal of Social Research Vol. 12, No. 3, pp. 66-86. 
[13] H. Ogut, A. A. Altun, S. A. Sulak & H. E. Kocer, (2004) "Computer-assisted, Internet Access, E-Learning with Interactive Training CD", The Turkish Online Journal of Educational Technology, Vol. 3, No. 1, pp. 67-74. 
[14] B. Arslan, (2003) "Computer-assisted Education Secondary Students and the feedback from CAE of 

teachers which have had an educative role in this process", The Turkish Online Journal of Educational 

Technology, Vol. 2, No. 4, pp. 67-75. 
[15] F. Odabasi, (2006) "Computer-assisted education", Unit 8, Anadolu University, Open Education Faculty 

Press, Vol. 135, Eskisehir. 
[16] K. Cilenti, (1995) "Educational Technology and Importance", Kadioglu Printing, Ankara. 
[17] C. Alkan, D. Deryakulu & N. Simsek, (1995) "Introduction to Educational Technology: Discipline, 

Process, Product", Onder Printing, Ankara. 
[18] B. Akkoyunlu, (1998) "The place and role of the teacher in the Curriculum Programs of Computers", 

Hacettepe University Press, Ankara. 
[19] M. Buyukbayraktar, (2006) "The effect of computer assisted application of the Logical Circuit Design on the success of students", Unpublished Master's Thesis, Sakarya University, Institute of Social Sciences, Sakarya. 
[20] H. Ekiz, Y. Bayam & H. Unal (2003) "Application of distance education on logical circuits", The 

Turkish Online Journal of Educational Technology, Vol. 2, No. 4, pp. 92-94. 
[21] A.H.K. Yuen (2006) "Learning to program through interactive simulation", Educational Media 

International, Vol. 43, No. 3, pp. 251-268. 
[22] H. Keser, (1988) "Proposed Model for Computer-assisted education", PhD Thesis, Ankara University, 

Institute of Social Sciences, Ankara. 
[23] G. Gleason, (1981) "Microcomputers in Education: The State of Art." Education Technology, Vol. 21, 

No. 3. 
[24] S. Usun, (2003) "The Views Of The Students On The Advantages And Important Using Elements In 

Education And Instruction Of Computers", Kastamonu University, Journal of Education, Vol. 1, No. 2, 

pp. 367-378. 
[25] E. Bayraktar, 1998 "Computer-assisted Teaching of Mathematics", PhD Thesis Ankara University, 

Institute of Social Sciences, Ankara. 



Authors 

Ismail Kayri is both a lecturer in the Department of Electric Education at Batman University and a PhD scholar in the Electric-Electronic Department at Firat University. He works on electric systems installation, programming languages, database management systems and other software tools, especially those related to electric and electronic science. 




Dr. Muhsin Tunay Gencoglu is an associate professor in the Department of Electrical and Electronics Engineering, Firat University. His fields of interest are high voltage techniques, electric power transmission and distribution, HV insulators, lighting techniques and renewable energy sources. He has many articles in these fields. 




Dr. Murat Kayri is an assistant professor in the Computer Science and Instructional Technology Department at Yuzuncu Yil University. His interests include neural networks, statistical modelling and networking. He has many articles on statistical and artificial neural networks. 









Fractal Characterization of Evolving 
Trajectories of Duffing Oscillator 



Salau, T. A. O.1 and Ajide, O. O.2

1,2 Department of Mechanical Engineering, University of Ibadan, Nigeria. 



Abstract 

This study utilised fractal disk dimension characterization to investigate the time evolution of the Poincare sections of a harmonically excited Duffing oscillator. Multiple trajectories of the Duffing oscillator were solved simultaneously using a constant-step Runge-Kutta algorithm from a set of randomly selected, very close initial conditions for three different cases. These initial conditions were taken from a very small region of phase space that geometrically approximates a line. The attractor's highest estimated fractal disk dimension was first recorded at the end of 15, 22 and 5 excitation periods for Case-1, Case-2 and Case-3 respectively. The corresponding scatter phase plots for Case-1 and Case-2 agreed qualitatively with the stroboscopically obtained Poincare sections found in the literature. The study thus established the sensitivity of the Duffing oscillator to initial conditions when driven by different combinations of damping coefficient, excitation amplitude and frequency. It also provides a fast, accurate and reliable alternative computational method for generating its Poincare sections.

KEYWORDS: Duffing oscillator, Fractal, Poincare sections, Trajectories, Disk dimension, Runge-Kutta, Phase space

I. Introduction 

The Duffing oscillator can be described as a periodically forced oscillator with a nonlinear elasticity [14]. It can be considered a chaotic system since it is characterized by nonlinearity and sensitivity to initial conditions. The available literature shows that the Duffing oscillator has been studied extensively, owing to its wide modelling applications in various fields of dynamics, and its dynamics have been examined with various tools. Reference [9] investigated the dynamical behaviour of a Duffing oscillator using bifurcation diagrams. The results of that study revealed that, while the bifurcation diagram is a resourceful instrument for a global view of the dynamics of the Duffing oscillator over a range of the control parameter, the dynamics also depend strongly on the initial conditions. Reference [11] investigated dynamic stabilization in the double-well Duffing oscillator using bifurcation diagrams and identified an interesting behaviour in the dynamic stabilization of the saddle fixed point: when the driving amplitude is increased through a threshold value, the saddle fixed point becomes stabilized via a pitchfork bifurcation. The findings revealed that after the dynamic stabilization the double-well Duffing oscillator behaves as a single-well Duffing oscillator, because the effect of the central potential barrier on the dynamics of the system becomes negligible.

A fractal generally refers to a rough or fragmented geometric shape that can be divided into parts, each of which is an approximately reduced-size copy of the whole; this property is popularly referred to as 'self-similarity'. A fractal can also be described as a geometric pattern that is repeated at ever smaller scales to produce irregular shapes and surfaces that cannot be represented by classical geometry. The complex nature of fractals is attracting increasing research interest, because fractals have become a fundamental part of nonlinear dynamics and chaos theory.



"62j 



Vol. 2, Issue 1, pp. 62-72 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

Fractal structures and the dynamical systems associated with phase plots are inseparable. The strong relationship between fractal structures and chaos theory will continue to be the platform of success in nonlinear dynamics. Fractals are widely employed in computer modelling of irregular patterns and structures in nature. Though the theory of chaos and the concept of fractals evolved independently, they have been found to penetrate each other's front. The orbits of a nonlinear dynamical system can be attracted or repelled to simple Euclidean shapes such as near-circles [10]; however, these are rare exceptions, and the behaviour of most nonlinear dynamical systems tends to be more complicated. The analysis of fractals in nonlinear dynamics is useful for obtaining information about the future behaviour of complex systems [5], mainly because they provide fundamental knowledge about the relation between these systems and uncertainty and indeterminism. The research paper [5] focuses on fractal structures in nonlinear dynamics; it clearly describes the main types of fractal basin, their nature, and the numerical and experimental techniques used to obtain them from both mathematical models and real phenomena. The research paper [7] dealt with intermingled fractal Arnold tongues, presenting a pattern of multiply interwoven Arnold tongues for the single-well Duffing oscillator at low dissipation and weak forcing. It was observed that strips of 2/2 Arnold tongues form a truncated fractal and that the tongue-like regions in between are filled by finely intermingled fractal-like 1/1 and 3/3 Arnold tongues, which are fat fractals characterized by an uncertainty exponent alpha of approximately 0.7. The findings showed that the truncated fractal Arnold tongues are present in the case of high dissipation as well, while the intermingled fractal pattern gradually disappears with increasing dissipation. The paper [16] addressed the 1/3 pure sub-harmonic solution and the fractal characteristics of the transient process for Duffing's equation. The investigation was carried out using harmonic balance and numerical integration. The author introduced an assumed solution and was able to find the domain of sub-harmonic frequencies. The asymptotic stability of the sub-harmonic resonances and the sensitivity of the amplitude responses to variations of the damping coefficient were examined. The sub-harmonic resonances were then analyzed using techniques from general fractal theory. The analysis reveals that the sensitive dimensions of the system's time-field responses are sensitive to changes in the initial perturbation, the damping coefficient or the amplitude of excitation. The author concluded that the sensitive dimension can clearly describe the characteristics of the transient process of the sub-harmonic resonances.
According to [15], studies of the phenomenon of chaos synchronization are usually based on the analysis of a transversely stable invariant manifold that contains an invariant set of trajectories corresponding to synchronous motions. The authors developed a new approach that relies on the notions of topological synchronization and the dimension for Poincare recurrences, and showed that the dimension of Poincare recurrences may serve as an indicator for the onset of synchronized chaotic oscillations. The aim of [12], in 2007, was to examine the application of a simple feedback controller to eliminate the chaotic behaviour in a controlled extended Duffing system, the purpose being to regulate the chaotic motion of an extended Duffing system around less complex attractors such as equilibrium points and periodic orbits. The authors proposed a feedback controller consisting of a high-pass filter and a saturator, which allows simple implementation and can be built from measured signals; they demonstrated this feedback control strategy convincingly using numerical simulations. The study [8] concerned the characterization of non-stationary chaotic systems. The authors noted that little work had been done on the characterization of such systems and stated that the natural way to characterize them is to generate and examine ensemble snapshots using a large number of trajectories, which are capable of revealing the underlying fractal properties of the system. They concluded that, by defining the Lyapunov exponent and the fractal dimension on the basis of a proper probability measure from the ensemble snapshots, the Kaplan-Yorke formula, which is fundamental in nonlinear dynamics, can be recovered; this remains correct most of the time even for non-stationary dynamical systems.

Chaotic dynamical systems with phase-space symmetries have been considered to exhibit riddled basins of attraction [1], which can be viewed as extreme fractal structures: no matter how infinitesimal the uncertainty in the determination of an initial condition, it is not possible to decrease the fraction of such points that will surely converge to a given attractor. The main aim of



"63T 



Vol. 2, Issue 1, pp. 62-72 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

the authors' work was to investigate extreme fractal structures in chaotic mechanical systems. They investigated mechanical systems exhibiting riddled basins of attraction, namely a particle under a two-dimensional potential with friction and time-periodic forcing, and were able to verify the riddling by checking its mathematical requirements through the computation of finite-time Lyapunov exponents, as well as by scaling laws that explain the fine structure of the basin filaments densely intertwined in phase space. A critical characterization of non-ideal oscillators in parameter space was carried out by [13]. The authors investigated dynamical systems with a non-ideal energy source. The chaotic dynamics of an impact oscillator and of a Duffing oscillator with limited power supply were analyzed in a two-dimensional parameter space using the largest Lyapunov exponents, identifying self-similar periodic sets such as Arnold tongues and shrimp-like structures. For the impact oscillator, the authors identified several coexisting attractors, showing a couple of them with fractal basin boundaries. According to the paper, these kinds of basin structures introduce a certain degree of unpredictability of the final state: the fractal basin boundary severely obstructs determining which attractor will be the final state for a given initial condition within the experimental error interval.

Fractal characterization of the evolving trajectories of a dynamical system will no doubt be of immense help in diagnosing the dynamics of important chaotic systems such as the Duffing oscillator. An extensive literature search shows that the disk dimension is yet to be significantly employed in the fractal characterization of the Duffing oscillator. The objective of this study is therefore to investigate and characterize the time evolution of the Poincare sections of a harmonically excited Duffing oscillator using the fractal disk dimension.

This article is divided into four sections. Section 1 gives the study background and a brief review of the literature. Section 2 details the methodology employed in this research: subsection 2.1 gives the equation of the harmonically excited Duffing oscillator that is employed in demonstrating the fractal characterization of evolving trajectories; subsection 2.2 explains the parameter details of all the studied cases, stating clearly the different combinations of damping coefficient and excitation amplitude considered; and subsection 2.3 explains how the attractors are characterized. Section 3 gives detailed results and discussion. The findings of this work are summarized in Section 4 with relevant conclusions.

II. Methodology 

2.1 Duffing Oscillator 

The studied normalized governing equation for the dynamic behaviour of the harmonically excited Duffing system is given by equation (1):

$\ddot{x} + \gamma\,\dot{x} - \frac{x}{2}\left(1 - x^{2}\right) = P_{0}\sin(\omega t)$   (1)

In equation (1), $x$, $\dot{x}$ and $\ddot{x}$ represent respectively the displacement, velocity and acceleration of the Duffing oscillator about a set datum. The damping coefficient is $\gamma$. The amplitude of the harmonic excitation, the excitation frequency and time are respectively $P_{0}$, $\omega$ and $t$. References [2], [3] and [6] proposed that the combination of $\gamma = 0.168$, $P_{0} = 0.21$ and $\omega = 1.0$, or $\gamma = 0.0168$, $P_{0} = 0.09$ and $\omega = 1.0$, leads to chaotic behaviour of the harmonically excited Duffing oscillator. This study investigated the evolution of 3000 trajectories that started very close to each other, over 25 excitation periods, at a constant step $\Delta t = \tfrac{\text{excitation period}}{500}$ in a fourth-order Runge-Kutta algorithm. The resulting attractors (see [4]) at the end of each excitation period were characterized with a fractal disk dimension estimate based on an optimum disk count algorithm.
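As an illustration only (not from the original paper), the following Python sketch shows one way the procedure just described could be set up, assuming the reconstructed form of equation (1): 3000 trajectories are started from random initial displacements with zero initial velocity, advanced with a classical fourth-order Runge-Kutta scheme at the constant step Δt = T/500, and sampled at the end of every excitation period to build the Poincare snapshots. The Case-1 parameters (γ = 0.168, P0 = 0.21, ω = 1.0) quoted above are used, and all function and variable names are illustrative.

import numpy as np

# Case-1 parameters quoted in the text; omega = 1.0 is common to all cases
GAMMA, P0, OMEGA = 0.168, 0.21, 1.0
T = 2.0 * np.pi / OMEGA            # one excitation period
DT = T / 500.0                     # constant step: 500 steps per excitation period

def rhs(t, s):
    # First-order system equivalent to Eq. (1): s[:, 0] = x, s[:, 1] = x_dot
    x, v = s[:, 0], s[:, 1]
    a = P0 * np.sin(OMEGA * t) - GAMMA * v + 0.5 * x * (1.0 - x * x)
    return np.column_stack([v, a])

def rk4_step(t, s, dt):
    # One classical fourth-order Runge-Kutta step for all trajectories at once
    k1 = rhs(t, s)
    k2 = rhs(t + dt / 2, s + dt * k1 / 2)
    k3 = rhs(t + dt / 2, s + dt * k2 / 2)
    k4 = rhs(t + dt, s + dt * k3)
    return s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

rng = np.random.default_rng(9876)                  # seed value quoted in Section 2.2
x0 = rng.uniform(0.9, 1.1, size=3000)              # 3000 very close initial displacements
state = np.column_stack([x0, np.zeros_like(x0)])   # zero initial velocity

snapshots, t = [state.copy()], 0.0                 # snapshot at zero excitation period
for period in range(25):
    for _ in range(500):
        state = rk4_step(t, state, DT)
        t += DT
    snapshots.append(state.copy())                 # attractor at the end of this period

The list snapshots then holds one 3000-point scatter set per excitation period, of the kind characterized in Section 2.3.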

2.2 Parameter details of studied cases 

Three different cases were studied using the details given in Table 1 in conjunction with the governing equation (1). Parameters common to all cases include the initial displacement range (0.9 ≤ x ≤ 1.1),



"64j 



Vol. 2, Issue 1, pp. 62-72 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

zero initial velocity (ẋ = 0), excitation frequency (ω = 1.0) and a random number generator seed value of 9876.

Table 1: Combined parameters for the studied cases

Cases | Damping coefficient (γ) | Excitation amplitude (P0)
Case-1 | 0.1680 | 0.21
Case-2 | 0.0168 | 0.09
Case-3 | 0.0168 | 0.21



2.3 Attractor Characterization 

The optimum disk count algorithm was used to characterize all the resulting attractors, based on fifteen (15) different disk scales of examination and five (5) independent trials.
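The paper does not spell out the optimum disk count algorithm, so the Python sketch below is only one plausible reading of it: at each of the fifteen scales the attractor points are covered by equal disks centred on a square grid, the grid origin is shifted randomly in each of the five trials, the smallest count over the trials is taken as the optimum, and the disk dimension is the slope of log(optimum count) against log(scale number). The scale rule and the grid-based placement are assumptions, and all names are illustrative.

import numpy as np

def optimum_disk_count(points, radius, trials=5, rng=None):
    # Smallest number of radius-r disks, over 'trials' randomly offset grid
    # coverings, needed to cover a 2-D point set (assumed reading of the
    # paper's optimum disk count).
    rng = np.random.default_rng() if rng is None else rng
    side = radius * np.sqrt(2.0)   # a disk of radius r covers a square cell of side r*sqrt(2)
    counts = []
    for _ in range(trials):
        offset = rng.uniform(0.0, side, size=2)        # random grid origin per trial
        cells = np.floor((points - offset) / side).astype(int)
        counts.append(len({tuple(c) for c in cells}))  # occupied cells = disks used
    return min(counts)

def disk_dimension(points, n_scales=15, trials=5, seed=9876):
    # Slope of log10(optimum count) vs log10(scale number) over n_scales scales
    rng = np.random.default_rng(seed)
    span = (points.max(axis=0) - points.min(axis=0)).max()
    radii = [0.5 * span / k for k in range(1, n_scales + 1)]   # assumed scale rule
    optima = [optimum_disk_count(points, r, trials, rng) for r in radii]
    slope, _ = np.polyfit(np.log10(np.arange(1, n_scales + 1)), np.log10(optima), 1)
    return slope

Applied to a snapshot from the previous sketch (e.g. disk_dimension(snapshots[25])), this returns a dimension estimate of the same kind as those tabulated below, although the exact values depend on the assumed scale rule and disk placement.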

III. Results and Discussion 

The scatter phase plots of Figures 1, 2 and 3 show the comparative attractors resulting from the time evolution of the trajectories of the Duffing oscillator for the studied cases.



[Scatter phase plot (displacement vs. velocity): initial attractor of all cases.]

Figure 1: Attractor of all cases at zero excitation period.



[Scatter phase plots (displacement vs. velocity): Fig. 2(a) attractor of Case-1 at 2 excitation periods; Fig. 2(b) attractor of Case-1 at 3 excitation periods; Fig. 2(c) attractor of Case-2 at 2 excitation periods; Fig. 2(d) attractor of Case-2 at 3 excitation periods; Fig. 2(e) attractor of Case-3 at 2 excitation periods; Fig. 2(f) attractor of Case-3 at 3 excitation periods.]

Figure 2: Comparison of attractors at 2 and 3 excitation periods.





[Scatter phase plots (displacement vs. velocity): Fig. 3(a) attractor of Case-1 at 5 excitation periods; Fig. 3(b) attractor of Case-1 at 25 excitation periods; Fig. 3(c) attractor of Case-2 at 5 excitation periods; Fig. 3(d) attractor of Case-2 at 25 excitation periods; Fig. 3(e) attractor of Case-3 at 5 excitation periods; Fig. 3(f) attractor of Case-3 at 25 excitation periods.]

Figure 3: Comparison of attractors at 5 and 25 excitation periods.



Referring to Figures 1, 2 and 3, the geometrical complexity of the attractors varies widely with the case and the number of excitation periods. This affirms the high sensitivity of the Duffing oscillator's behaviour to initial conditions when it is excited harmonically by certain parameter combinations. The attractors of Case-1 and Case-2 approach qualitatively their respective stroboscopically obtained Poincare sections with increasing excitation period.

The varied geometrical complexity of the attractors presented in Figures 1, 2 and 3 can be characterized using the fractal disk dimension measure. The algorithm for estimating the fractal disk dimension is demonstrated through the presentation in Table 2 and Figure 4.

Table 2: Disks required for a complete cover of the Case-1 attractor (Poincare section) at the end of 25 excitation periods.

Disk scale | Optimum disk count | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5
1 | 2 | 3 | 2 | 2 | 2 | 2
2 | 4 | 5 | 4 | 4 | 5 | 4
3 | 6 | 8 | 6 | 8 | 8 | 8
4 | 11 | 14 | 12 | 12 | 12 | 11
5 | 17 | 19 | 18 | 18 | 17 | 17
6 | 21 | 21 | 21 | 22 | 21 | 21
7 | 25 | 25 | 28 | 27 | 26 | 28
8 | 28 | 31 | 31 | 28 | 30 | 31
9 | 34 | 38 | 37 | 34 | 39 | 37
10 | 40 | 40 | 42 | 45 | 41 | 43
11 | 45 | 47 | 47 | 49 | 46 | 45
12 | 52 | 54 | 53 | 55 | 52 | 54
13 | 60 | 60 | 62 | 61 | 60 | 62
14 | 61 | 65 | 65 | 67 | 64 | 61
15 | 68 | 72 | 69 | 69 | 72 | 68



Referring to Table 2, the physical disk size for disk scale number one (1) is the largest, while that for disk scale number fifteen (15) is the smallest. The optimum disk count recorded over the five independent trials thus increases with decreasing physical disk size across the fifteen scales of examination. The slope of the line of best fit to the logarithmic plot of disk scale number against optimum disk count gives the estimated fractal disk dimension of the attractor. Referring to Figure 4, the estimated fractal dimension of the attractor of Case-1 at the end of 25 excitation periods is 1.3657, with an R² value of 0.9928.
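As a quick check of this fitting step (not part of the original paper), a least-squares line through log10(optimum disk count) versus log10(disk scale number), using the optimum counts of Table 2, reproduces the fit reported in Figure 4; a minimal Python sketch with illustrative names:

import numpy as np

# Optimum disk counts from Table 2 (Case-1 attractor at the end of 25 excitation periods)
scale = np.arange(1, 16)          # disk scale numbers 1..15
optimum = np.array([2, 4, 6, 11, 17, 21, 25, 28, 34, 40, 45, 52, 60, 61, 68])

slope, intercept = np.polyfit(np.log10(scale), np.log10(optimum), 1)
print(round(slope, 3), round(intercept, 3))
# gives about 1.366 and 0.231, consistent with y = 1.365x + 0.231 (R^2 = 0.992) in Figure 4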





[Log-log plot of optimum disk count against disk scale number for the Case-1 attractor, with fitted line y = 1.365x + 0.231 and R² = 0.992.]

Figure 4: Fractal disk dimension of the Case-1 attractor at the end of 25 excitation periods.

The variation of estimated fractal disk dimension of attractors for studied cases with increasing 
excitation period is given in figure 5. 



[Plot of estimated fractal disk dimension against excitation period (0 to 25) for Case-1, Case-2 and Case-3.]

Figure 5: Variation of the estimated disk dimension of the attractors with excitation period.

Referring to Figure 5, a rise to an average steady value of the estimated fractal disk dimension was observed for all studied cases except Case-3. This observation for Case-3 may be due to its low damping value (γ = 0.0168) combined with a relatively very high excitation amplitude (P0 = 0.21). The attractor's highest estimated fractal disk dimensions of 1.393, 1.701 and 1.737 were recorded for the first time at corresponding excitation periods of 15, 23 and 5 for Case-1, Case-2 and Case-3 respectively.
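Tables 3 and 4 below (and the truncated Table 5) report, for each excitation period, the optimum estimate together with the average and standard deviation of the five trial estimates. As an aside (not from the paper), the summary columns can be recovered directly from the trial values, assuming the tabulated figure is the sample standard deviation; for example, for Case-1 at 25 excitation periods:

import numpy as np

trials = np.array([1.229, 1.383, 1.361, 1.325, 1.356])  # five trial dimensions (Table 3, period 25)
print(round(trials.mean(), 3))        # 1.331 -> the tabulated average
print(round(trials.std(ddof=1), 2))   # 0.06  -> the tabulated value, assuming a sample standard deviation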



"68T 



Vol. 2, Issue 1, pp. 62-72 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

Table 3: Estimated fractal disk dimension of Case-1 attractors at the end of 26 different excitation periods.

Standard deviation | Excitation period | Optimum | Average | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5
0.02 | 0 | 0.928 | 0.903 | 0.898 | 0.896 | 0.878 | 0.924 | 0.919
0.02 | 1 | 0.927 | 0.918 | 0.917 | 0.908 | 0.920 | 0.888 | 0.956
0.01 | 2 | 0.948 | 0.938 | 0.929 | 0.949 | 0.956 | 0.933 | 0.923
0.03 | 3 | 1.170 | 1.146 | 1.128 | 1.161 | 1.137 | 1.121 | 1.182
0.03 | 4 | 1.314 | 1.259 | 1.262 | 1.285 | 1.205 | 1.261 | 1.284
0.03 | 5 | 1.376 | 1.340 | 1.351 | 1.348 | 1.308 | 1.315 | 1.380
0.02 | 6 | 1.333 | 1.305 | 1.275 | 1.315 | 1.317 | 1.293 | 1.327
0.01 | 7 | 1.292 | 1.297 | 1.307 | 1.304 | 1.292 | 1.293 | 1.290
0.01 | 8 | 1.325 | 1.327 | 1.331 | 1.344 | 1.312 | 1.323 | 1.328
0.02 | 9 | 1.355 | 1.341 | 1.358 | 1.309 | 1.351 | 1.332 | 1.357
0.02 | 10 | 1.368 | 1.333 | 1.319 | 1.377 | 1.331 | 1.323 | 1.317
0.02 | 11 | 1.341 | 1.324 | 1.323 | 1.350 | 1.348 | 1.295 | 1.306
0.02 | 12 | 1.350 | 1.335 | 1.309 | 1.326 | 1.369 | 1.349 | 1.320
0.02 | 13 | 1.344 | 1.341 | 1.330 | 1.357 | 1.361 | 1.348 | 1.310
0.02 | 14 | 1.339 | 1.314 | 1.330 | 1.296 | 1.282 | 1.333 | 1.328
0.03 | 15 | 1.394 | 1.345 | 1.324 | 1.324 | 1.325 | 1.400 | 1.351
0.02 | 16 | 1.350 | 1.332 | 1.309 | 1.324 | 1.348 | 1.361 | 1.320
0.02 | 17 | 1.374 | 1.345 | 1.361 | 1.356 | 1.362 | 1.327 | 1.320
0.03 | 18 | 1.349 | 1.332 | 1.313 | 1.356 | 1.371 | 1.332 | 1.290
0.01 | 19 | 1.343 | 1.341 | 1.325 | 1.357 | 1.341 | 1.333 | 1.352
0.06 | 20 | 1.346 | 1.319 | 1.335 | 1.357 | 1.356 | 1.216 | 1.331
0.04 | 21 | 1.368 | 1.340 | 1.344 | 1.355 | 1.270 | 1.341 | 1.390
0.02 | 22 | 1.359 | 1.342 | 1.355 | 1.318 | 1.319 | 1.344 | 1.375
0.05 | 23 | 1.356 | 1.323 | 1.342 | 1.331 | 1.335 | 1.362 | 1.242
0.02 | 24 | 1.342 | 1.331 | 1.329 | 1.358 | 1.305 | 1.315 | 1.345
0.06 | 25 | 1.366 | 1.331 | 1.229 | 1.383 | 1.361 | 1.325 | 1.356



Table 4: Estimated fractal disk dimension of Case-2 attractors at the end of 26 different excitation periods.

Excitation period   Optimum   Average   Trial 1   Trial 2   Trial 3   Trial 4   Trial 5   Std. dev.
0                   0.889     0.910     0.924     0.909     0.898     0.896     0.923     0.01
1                   0.926     0.906     0.920     0.890     0.894     0.883     0.944     0.03
2                   0.975     0.948     0.955     0.984     0.845     0.976     0.979     0.06
3                   1.063     1.058     1.063     1.059     1.057     1.041     1.071     0.01
4                   1.347     1.326     1.330     1.334     1.308     1.353     1.308     0.02
5                   1.499     1.463     1.481     1.495     1.452     1.449     1.437     0.02
6                   1.552     1.528     1.513     1.549     1.515     1.540     1.520     0.02
7                   1.605     1.558     1.567     1.621     1.554     1.480     1.571     0.05
8                   1.638     1.609     1.596     1.605     1.598     1.617     1.626     0.01
9                   1.646     1.630     1.626     1.653     1.601     1.643     1.629     0.02
10                  1.669     1.636     1.616     1.666     1.622     1.627     1.647     0.02
11                  1.674     1.648     1.667     1.650     1.621     1.651     1.650     0.02
12                  1.644     1.646     1.630     1.657     1.642     1.642     1.656     0.01
13                  1.678     1.653     1.669     1.690     1.631     1.637     1.639     0.03
14                  1.683     1.658     1.671     1.658     1.658     1.626     1.676     0.02
15                  1.691     1.664     1.669     1.650     1.702     1.661     1.639     0.02
16                  1.697     1.671     1.665     1.685     1.659     1.670     1.673     0.01
17                  1.679     1.664     1.675     1.653     1.652     1.653     1.683     0.01
18                  1.696     1.657     1.695     1.682     1.577     1.680     1.654     0.05
19                  1.675     1.655     1.653     1.657     1.655     1.641     1.667     0.01
20                  1.682     1.669     1.676     1.659     1.635     1.683     1.694     0.02
21                  1.688     1.675     1.681     1.707     1.674     1.667     1.648     0.02
22                  1.688     1.664     1.637     1.664     1.664     1.661     1.695     0.02
23                  1.701     1.656     1.712     1.583     1.643     1.681     1.661     0.05
24                  1.656     1.656     1.665     1.660     1.664     1.630     1.661     0.01
25                  1.660     1.652     1.665     1.660     1.647     1.650     1.636     0.01



Table 5: Estimated fractal disk dimension of Case-3 attractors at the end of 26 different excitation periods.

Excitation period   Optimum   Average   Trial 1   Trial 2   Trial 3   Trial 4   Trial 5   Std. dev.
0                   0.917     0.915     0.895     0.905     0.889     0.945     0.943     0.03
1                   0.881     0.892     0.895     0.898     0.910     0.883     0.875     0.01
2                   1.107     1.094     1.122     1.079     1.108     1.090     1.073     0.02
3                   1.441     1.434     1.457     1.430     1.436     1.415     1.434     0.02
4                   1.619     1.593     1.597     1.620     1.584     1.583     1.584     0.02
5                   1.682     1.615     1.592     1.654     1.683     1.589     1.558     0.05
6                   1.737     1.648     1.623     1.635     1.628     1.724     1.631     0.04
7                   1.704     1.636     1.564     1.671     1.651     1.619     1.675     0.05
8                   1.695     1.598     1.536     1.610     1.601     1.632     1.609     0.04
9                   1.527     1.453     1.434     1.433     1.429     1.530     1.440     0.04
10                  1.415     1.408     1.380     1.411     1.423     1.419     1.409     0.02
11                  1.432     1.410     1.361     1.421     1.412     1.422     1.434     0.03
12                  1.467     1.432     1.470     1.409     1.385     1.440     1.458     0.04
13                  1.504     1.495     1.506     1.508     1.497     1.494     1.469     0.02
14                  1.605     1.514     1.510     1.501     1.505     1.576     1.478     0.04
15                  1.540     1.486     1.495     1.457     1.437     1.543     1.496     0.04
16                  1.541     1.490     1.465     1.445     1.461     1.541     1.536     0.05
17                  1.562     1.545     1.552     1.543     1.508     1.554     1.566     0.02
18                  1.551     1.538     1.548     1.556     1.528     1.530     1.529     0.01
19                  1.565     1.536     1.489     1.536     1.543     1.548     1.566     0.03
20                  1.683     1.571     1.634     1.545     1.565     1.565     1.545     0.04
21                  1.592     1.561     1.574     1.564     1.564     1.528     1.575     0.02
22                  1.606     1.590     1.617     1.577     1.606     1.569     1.581     0.02
23                  1.687     1.599     1.586     1.603     1.576     1.695     1.534     0.06
24                  1.614     1.603     1.584     1.618     1.610     1.599     1.603     0.01
25                  1.623     1.576     1.607     1.556     1.606     1.525     1.584     0.03



Tables 3, 4 and 5 give the estimated fractal disk dimensions from which the variation of the optimum estimated fractal disk dimension with increasing excitation period, shown in Figure 5, was obtained.



[Figure 6 is a plot titled "Attractors Characterization": average estimated fractal disk dimension against excitation period (0 to 25) for Case-1, Case-2 and Case-3.]

Figure 6: Variation of average estimated fractal disk dimension of attractors with excitation period.

In addition, the variation of the average estimated fractal disk dimension, based on the five independent trials, with increasing excitation period is shown in Figure 6. Figures 5 and 6 are qualitatively the same. However, the average estimated fractal disk dimensions are consistently lower than the corresponding optimum estimated fractal disk dimensions for all the attractors characterized. The standard deviation estimated over the five trial results lies between a minimum of 0.01 and a maximum of 0.06 for all the cases and attractors.

Figures 5 and 6 indicate that the attractors for the different cases ultimately evolve gradually to a steady geometric structure.

IV. Conclusions 

The study has demonstrated the high sensitivity of the Duffing oscillator to sets of very close initial conditions under certain combinations of harmonic excitation parameters. Cases 1 and 2 evolve gradually to unique attractors which are comparable to the corresponding Poincaré sections reported in the literature. On a final note, this study establishes the utility of the fractal dimension as an effective characterization tool and offers a novel alternative computational method that is faster, accurate and reliable for generating Duffing attractors or Poincaré sections.





AUTHORS BIOGRAPHY 




SALAU Tajudeen Abiola Ogunniyi is a Senior Lecturer in the Department of Mechanical Engineering, University of Ibadan, Nigeria. He joined the services of the University of

Ibadan in February 1993 as Lecturer II in the department of Mechanical Engineering. By 

virtue of hard work, he was promoted to Lecturer I in 2002 and Senior Lecturer in 2008. He

had served the department in various capacities. He was the coordinator of the department 

for 2004/2005 and 2005/2006 Academic sessions. He was the recipient of M.K.O Abiola 

postgraduate scholarship in 1993/1994 academic session while on his Ph.D research 

programme in the University of Ibadan. Salau has many publications in learned journals 

and international conference proceedings especially in the area of nonlinear dynamics. He had served as external 

examiner in departments of Mechanical Engineering of some institutions of higher learning in the country and a 

reviewer/rapporteur in some reputable international conference proceedings. His area of specialization is solid 

mechanics with bias in nonlinear dynamics and chaos. Salau is a corporate member, Nigerian Society of 

Engineers (NSE). He is a registered Engineer with the Council for the Regulation of Engineering in Nigeria (COREN). He is happily married and blessed with children.




AJIDE Olusegun Olufemi is currently a Lecturer II in the department of Mechanical 

Engineering, University of Ibadan, Nigeria. He joined the services of the University of 

Ibadan on 1st December 2010 as Lecturer II. He had worked as the Project Site

Engineer/Manager of PRETOX Engineering Nigeria Ltd, Nigeria. Ajide obtained B.Sc 

(Hons.) in 2003 from the Obafemi Awolowo University, Nigeria and M.Sc in 2008 from the 

University of Ibadan, Nigeria. He received the prestigious Professor Bamiro Prize (Vice 

Chancellor Award) in 2008 for the overall best M.Sc student in Mechanical Engineering 

(Solid Mechanics), University of Ibadan, Nigeria. He has some publications in learned 

journals and conference proceedings. His research interests are in area of Solid Mechanics, 

Applied Mechanics and Materials Engineering. Ajide is a COREN registered Engineer. He is a corporate 

member of the Nigerian Society of Engineers (NSE) as well as corporate member of the Nigerian Institution of 

Mechanical Engineers (NIMechE). 






SANKEERNA: A Linear Time, Synthesis and Routing Aware, Constructive VLSI Placer to Achieve Synergistic Design Flow

Santeppa Kambham¹ and Siva Rama Krishna Prasad Kolli²
¹ANURAG, DRDO, Kanchanbagh, Hyderabad, India
²ECE Dept., National Institute of Technology, Warangal, India



Abstract 

Standard cell placement is an NP-complete open problem. The main objectives of a placement algorithm are to minimize the chip area and the total wire length of all the nets. Due to interconnect dominance, the Deep Sub Micron VLSI design flow does not converge, leading to iterations between the synthesis and layout steps. We present a new heuristic placement algorithm called Sankeerna, which tightly couples synthesis and routing and produces compact routable designs with minimum area and delay. We tested Sankeerna on several benchmarks using a 0.13 micron, 8 metal layer, standard cell technology library. There is an average improvement of 46.2% in delay, 8.8% in area and 114.4% in wire length when compared to existing placement algorithms. In this paper, we describe the design and implementation of the Sankeerna algorithm, and its performance is illustrated through a worked out example.

KEYWORDS: Placement, VLSI Design flow, Synthesis, Routing, Area and delay minimization 



I. Introduction 

VLSI chip complexity has been increasing as per Moore's law, demanding more functionality and higher performance with less design time. Producing compact layouts with high performance in a shorter time is required in order to meet the time-to-market needs of today's VLSI chips. This calls for tools which run faster and which converge without leading to design iterations. Placement is
the major step in VLSI Design flow which decides the area and performance of the circuit. Detailed 
Routing is another time consuming step, which is performed after placement. If placement is not wire 
planned, routing may lead to heavy congestion resulting in several Design Rule Check (DRC) 
violations. It is required to iterate again with a new placement. If the wiring is not planned properly 
during placement, circuits routed may not meet the timing goals of the design. So there is a need for 
placers which are faster, produce compact layouts, meet the timing requirements and make the 
routing converge without DRC violations. The back end process of VLSI Design flow, that is, 
preparation of layout, is also to be tightly coupled with the front end synthesis process to avoid 
design iterations between the synthesis and layout steps. It has been found that even after several iterations this two-step process does not converge, and that this timing closure problem [1, 2] cannot be solved using wire load models.

In general, the standard cell placement problem can be stated as follows: given a circuit consisting of technology mapped cells with fixed height and variable width, a netlist connecting these cells, and Primary Inputs and Primary Outputs, construct a layout fixing the position of each cell such that no two cells overlap. The placement, when routed, should have minimum area, wire length and delay, and should be routable. Minimum area is an area close to the sum of the mapped standard cell areas.




Minimum wire length is the sum of the lengths of all the nets in the circuit when placed and routed. Delay is the delay of the worst path in the routed circuit. Routability indicates that the layout should not be congested and that the routed wires respect the design rules of the particular technology, so that the routing can be completed. Standard cell placement is known to be an NP-complete open problem [3].
A synergistic approach towards Deep Sub Micron (DSM) design, coupling logic synthesis and physical design, is the need of the day [4, 1, 5]. There have been efforts to integrate the synthesis and layout steps [6, 7, 8, 9]. All these efforts try to estimate wire delays in the hope that they will eventually be met, which does not happen. Wire delays are inevitable; the problem is not the wire delays themselves but the non-convergence and unpredictability. What is needed is a quick way of knowing the final delay and a converging design flow. We have developed a design flow and a placer called Sankeerna targeted at producing compact routable layouts without using wire load models.

In Section 2, we briefly review the existing methods of placement and their limitations with respect to 
achieving a tightly coupled convergent design flow. Section 3 gives the basis for the Sankeerna 
algorithms. With this background, a new placer called Sankeerna was developed which is described 
in Section 4. The new placer Sankeerna is illustrated with an example in Section 5. The experimental 
setup to evaluate Sankeerna is described in Section 6. Results are tabulated and improvements 
obtained are discussed in Section 7. Conclusions of research work carried and future scope are given 
in Section 8. 

II. Related work 

Classical approaches to placement are reviewed in [10, 11, 3, 12, 13] and recent methods in [14, 15, 
16]. The placement methods are classified based on the way the placement is constructed. Placement methods are either constructive or iterative [13]. In the constructive method, once the components are
placed, they will never be modified thereafter. An iterative method repeatedly modifies a feasible 
placement by changing the positions of one or more core cells and evaluates the result. Because of the 
complexity, the circuits are partitioned before placement. The constructive methods are (a) 
Partitioning-based which divide the circuit into two or more sub circuits [17] (b) Quadratic 
assignment which formulates the placement problem as a quadratic assignment problem [18, 19, 20] 
and (c) Cluster growth which places cells sequentially one by one in a partially completed layout 
using a criteria like number of cells connected to a already placed cell [21]. Main iterative methods 
are (a) Simulated annealing [22, 23,35], (b) Simulated evolution [15, 24] and (c) Force-directed [25, 
20]. 

Another classification based on the placement technique used, was given in [20]. The placers were 
classified into three main categories namely (a) Stochastic placers which use simulated annealing 
which find global optimum with high CPU time, (b) Min-cut placers which recursively divide the 
netlist and chip area, and (c) analytical placers which define an analytical cost function and minimise
it using numerical optimization methods. Some placers may use a combination of these techniques. 
These methods use only component cell dimensions and interconnection information and are not 
directly coupled to the synthesis. 

The methods which use structural properties of the circuit are (a) Hierarchical placement [27] (b) Re- 
synthesis [9] and (c) Re-timing. There are algorithms which use signal flow and logic dependency 
during placement [28, 29]. In [28], critical paths are straightened after finding the zigzags. When 
placement is coupled with synthesis, this extra burden of finding criss-crosses is not required. In [30], 
using the structure of the interconnection graph, placement is performed in a spiral topology around
the centre of the cell array driven by a Depth First Search (DFS) on the interconnection graph. The 
algorithm has linear time complexity to the number of cells in the circuit. 

To obtain delay optimized placements, timing driven placement methods are used [31, 32, 33, 34, 36, 
37, 38, 39, 40]. The idea is to reduce the wire length on certain paths instead of total wire length. 
These methods are either path based or net based. The longest path delays are minimized in path 
based methods. Finding the longest paths grows exponentially in effort with the complexity of the design. Timing constraints are transformed into net-length constraints in the net based algorithms. Then a weighted wire-length-minimized placement is done iteratively until better timing is achieved. The drawbacks of

this method are (a) delay budgeting is done without physical placement feasibility and (b) it is 
iterative. At the end of the iteration, the solution produced is evaluated. 

To control congestion and to achieve routability, white spaces are allocated at the time of placement 
[41, 26, 42, 43]. The problem with this approach is that it increases area, which in turn increases wire
length and delay. In [44], it was shown that minimising wire length improves routability and layout 
quality. Allocating white space may not be the right approach to achieve routability. It is better to 
minimise the wire length instead of allocating white spaces. The white space allocated may not be the 
right place required for the router. 

The studies in [45] have shown that existing placement algorithms produce significantly inferior 
results when compared with the estimated optimal solutions. The studies in [12] show that the results 
of leading placement tools from both industry and academia may be up to 50% to 150% away from 
optimal in total wire length. 

The design flow convergence is another main requirement of Synergistic approach towards DSM 
design [4]. Placement plays a major role in this. As mentioned in [4], there are three types of design 
flows for Deep Sub Micron (DSM) namely (a) Logic synthesis drives DSM design (b) Physical 
design drives DSM design (c) Synergistic approach towards DSM design. In the last method (c), it is 
required to create iteration loops which tightly couple the various levels of the design flow. The unpredictability of area, delay and routability of the circuits from synthesis to layout is another major problem. The studies in [46, 47] indicated that the non-convergence of the design process is due to the non-coupling of the synthesis [48, 49, 50, 51] and placement processes. We need a faster way of estimating area and delay from pre-place to post-route; if we fail to achieve this, we have no guide towards convergence.

From the above analysis of the state-of-the-art placement algorithms, we feel that there is still scope for improvement and a need for better placement algorithms meeting the requirements mentioned in Section 3. In the next section, we describe the basis for our new algorithms, which try to solve some of the problems mentioned above.

III. Basis for the New Algorithms 

The new placement algorithm should have the following features (a) linear or polynomial time 
complexity with respect to number of cells in the circuit, (b) awareness of synthesis and routing 
assumptions and expectations, that is, tight coupling of synthesis and routing as mentioned in [4, 46, 
47], (c) achieving minimum area and delay, (d) produce routable layouts without Design Rule Check 
(DRC) violations, by proper wire planning during placement, (e) delay of final layout should be 
predictable with trial routes and (f) should smoothly interface with synthesis and routing tools. 
In this section, we explain the basis for the Sankeerna algorithms. Since the circuits are to be placed to achieve minimum area and delay, we first determine the minimum area and delay achievable for a given circuit and technology library. The minimum area achievable is the sum of the widths of all cells multiplied by the height of the standard cells; for a given standard cell technology library, the height is the same for all cells.
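A small sketch of this sizing step is given below; it is illustrative only, with assumed values for the cell height, total cell width and aspect ratio, and mirrors the layout width/height and row-count calculation listed later in Figure 2.

/* Sketch (assumed values): layout dimensions and row count derived from the
 * total standard cell area and a target aspect ratio (width : height). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double cell_height  = 3.69;    /* assumed common standard cell height (um) */
    double total_width  = 1820.0;  /* assumed sum of all mapped cell widths (um) */
    double aspect_ratio = 1.5;     /* width / height, e.g. 3:2 as used in the tests */

    double std_cell_area = total_width * cell_height;          /* minimum achievable area */
    double layout_height = sqrt(std_cell_area / aspect_ratio); /* area = W*H, W = AR*H */
    double layout_width  = aspect_ratio * layout_height;
    int    num_rows      = (int)ceil(layout_height / cell_height);

    printf("layout %.1f x %.1f um, %d rows\n", layout_width, layout_height, num_rows);
    return 0;
}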

To find out the minimum delay achievable, we use Figure 1(a) to explain the delay calculation 
process. 



[Figure 1(a) shows gates g1, g2 and g3 at level i driving gate G at level i+1 through wires s1, s2 and s3; Figure 1(b) shows the eight candidate positions NW, N, NE, W, E, SW, S and SE around a cell, with lines 1-8 marking the distances from each position.]

Figure 1: (a) Logic and timing dependency; (b) possible locations to place a cell.




There are four gates marked g1, g2, g3 and G. The gates g1, g2 and g3 are at level i and G is at level i+1 as per the logic of the circuit. Let d be the delay of gate G, a the maximum of the arrival times of (g1, g2, g3) at the inputs of G, b the block delay of G, f the fan-out delay of G and w the wire delay of G. Then d is equal to the sum of a, b, f and w. The delay d depends on the arrival times of the inputs g1, g2 and g3, which are in the transitive fan-in of G. The gates g1, g2 and g3 in turn have inputs either from Primary Inputs (PIs) or from other gates, so the delay of G depends on the transitive fan-in of all gates connected to g1, g2 and g3. To minimise d, the only thing that can be done at placement time is to minimise the wire delay w. The wire delay w depends on the wire lengths s1, s2 and s3 and on several other factors; we consider only wire length here. To minimise the wire lengths s1, s2 and s3, which are the outputs of gates g1, g2 and g3, these gates are to be placed at the physically closest locations to G. The possible places for a cell H which are nearer to a cell G are shown in Figure 1(b). H can be placed in any of the eight positions indicated by NW, N, NE, W, E, SW, S and SE. The distance from the output pin of H to the input pin of G for all these eight possible locations depends on the widths and heights of H and G and on the pin locations on these two cells. The lines 1 to 8 in Figure 1 show the Euclidean distance for all eight positions; the same procedure can be adopted to calculate the Manhattan distance. The Physically Shortest Place (PSP) is the location which has the minimum Manhattan distance. In real technology libraries, the pin positions on a cell are specified by a rectangle or a set of rectangles, not just a point, so we can choose to connect anywhere on the rectangle based on closeness to the destination cell. Out of the available locations, the one with the minimum Manhattan distance is on the Physically Shortest Path (PSP).
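The sketch below (illustrative, with assumed pin coordinates rather than real library data) spells out the delay relation d = a + b + f + w and the selection of the candidate position with the smallest Manhattan distance, i.e. the PSP choice described above.

/* Sketch: gate delay d = a + b + f + w, and choosing the candidate position
 * with the smallest Manhattan distance to the driven input pin (the PSP). */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y; } Point;

static double manhattan(Point p, Point q)
{
    return fabs(p.x - q.x) + fabs(p.y - q.y);
}

int main(void)
{
    /* a: worst arrival time at the inputs, b: block delay, f: fan-out delay,
     * w: wire delay.  Only w can be reduced at placement time.  Values assumed. */
    double a = 0.90, b = 0.12, f = 0.05, w = 0.03;
    printf("gate delay d = %.3f\n", a + b + f + w);

    Point g_in = { 10.0, 4.0 };                       /* input pin of G (assumed) */
    Point cand[8] = {                                 /* NW, N, NE, W, E, SW, S, SE */
        {6,8}, {10,8}, {14,8}, {6,4}, {14,4}, {6,0}, {10,0}, {14,0}
    };
    int best = 0;
    for (int i = 1; i < 8; i++)
        if (manhattan(cand[i], g_in) < manhattan(cand[best], g_in))
            best = i;
    printf("PSP candidate index = %d, distance = %.1f\n",
           best, manhattan(cand[best], g_in));
    return 0;
}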

Let r be the required time of a gate and a its arrival time; the slack s at gate G is then r - a. Out of all the slacks of the inputs to a gate G, the Worst Negative Slack (WNS) indicates that the cell is on the critical path. The inputs among g1, g2 and g3 which are more critical are to be placed closer to gate G than the others. This argument has to be applied recursively to all gates in the transitive fan-in of g1, g2 and g3; that is, placing g1 nearer to G means placing all the cells in the transitive fan-in of g1 nearer to G. All gates in the transitive fan-in of g1 are to be placed on the PSP, giving priority to cells with the worse (more negative) slack. Let the WNS of g1, g2 and g3 be -2, -1 and -3 respectively; the placement priority is then g3 first, then g1 and last g2. The minimum delay is achieved when g1, g2 and g3 are placed on the PSP from the Primary Outputs (POs) to the Primary Inputs (PIs).
Sankeerna uses a constructive method of placement. Starting from the Primary Output (PO), cells are placed on the PSP as explained above. The height-to-width ratio of the smallest drive-capability inverter is 4 for the standard cell library used for the experiments in this paper, so the row of a standard cell becomes the Physically Shortest Path (PSP). The WNS at each node input decides the Delay-wise Shortest Path (DSP). Sankeerna combines the PSP and DSP concepts explained above to produce routable placements minimising area, delay and wire length in linear time. These concepts are further illustrated with a fully worked out example in Section 5. The next section explains the algorithms used in Sankeerna.
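A minimal sketch of the DSP ordering follows; the gate names and slack values are the g1/g2/g3 example above, and the sort simply puts the most negative slack first so that the most time-critical driver is placed first.

/* Sketch: slack s = r - a at each input, with inputs ordered so that the
 * worst (most negative) slack is placed first.  Data taken from the example. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { const char *name; double slack; } Input;

static int by_slack(const void *p, const void *q)   /* ascending slack */
{
    double a = ((const Input *)p)->slack, b = ((const Input *)q)->slack;
    return (a > b) - (a < b);
}

int main(void)
{
    Input in[3] = { {"g1", -2.0}, {"g2", -1.0}, {"g3", -3.0} };
    qsort(in, 3, sizeof in[0], by_slack);
    for (int i = 0; i < 3; i++)                     /* g3, then g1, last g2 */
        printf("place %s (slack %.1f)\n", in[i].name, in[i].slack);
    return 0;
}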

IV. Algorithms used in SANKEERNA 

We have modified the algorithms of ANUPLACE [46] to produce delay optimized placements using 
a constructive method. Sankeerna reads the benchmark circuit which is in the form of a netlist, taken 
from "SIS" synthesizer [52], builds trees with Primary Outputs (POs) as roots. In SIS, we specify zero 
as the required time at the POs and print the worst slack at each node. This slack information is read 
along with the netlist into Sankeerna. The inputs are sorted based on this slack, with descending order 
of time criticality at each node. Starting from the root to the leaf node, nodes are placed on the layout 
after finding the closest free location. At each node, the most time-critical input is placed first using a modified Depth First Search (DFS) method; priority is given to time rather than to the depth of the tree. It was proved that placement of trees can be done in polynomial time [7]. A Depth First Search (DFS) algorithm was used in [30] which has linear time complexity in the number of connections. Tree based placement algorithms reported in the literature have either linear time or O(n log n) time complexity [7, 53, 54, 55].

We have used benchmark circuits from SIS [52] synthesizer in "BLIF" format which are then 
converted into Bookshelf [56] format using converters provided in [59, 60, 41, 61]. The normal 
placement benchmark circuits [57, 45] are not useful because they give only cell dimensions and 
interconnect information. Timing, cell mapping, logic dependency and other circuit information from 
synthesizer are not available in these placement benchmarks. These converters do not use technology 
library information for cell dimensions and pin locations. Sankeerna reads the technology information, consisting of (a) cell names, (b) cell dimensions (height and width), (c) pin locations on the
cells, (d) timing information, and (e) input load from a file. Using this technology information, 
Sankeerna generates the benchmark circuit in bookshelf format with actual cell dimensions and pin 
locations. Once the trees are created the slack information is read into the tree data structure. 
Sankeerna algorithm is shown in Figure 2. 



Main 

•Read technology library 

•Read the benchmark circuit. 

•Build trees with primary outputs as roots. 

•Read cell mapping and delay file 

•Print verilog 

•Sort inputs of each node of the tree based on time criticality. 

•Sort Trees of Primary Outputs (PO) based on time criticality. 

•Put level information in each node of the tree. 

•Print the benchmark with technology cell dimensions and pin locations. 

•Run Public Domain Placer (PDP) 

•Read PDP placement 

•Calculate the layout width and height using standard cell area & aspect ratio. 

•Number of rows = layout height / standard cell height. 

•Initialize row tables to keep track of the placement as it is constructed. 

•Place circuit . 

•Place Primary Inputs (Pis) and Primary Outputs (POs) 

•Print ".def " files of PDP and Sankeerna.. 



void place_ckt ()
{   next_PO = pointer to list of trees pointed by Primary Outputs (POs);
    no = number of PO cells;
    for ( i = 0; i < no; i++ )
    {   place_cell ( next_PO );
        next_PO = next_PO->next;
    }
}

void find_best_place ( gate )
{   check availability on the same row, and above and below the current row;
    out of the available rows, find the row which gives minimum wire length;
    return coordinates of the minimum location;
}

void check_availability_on_row ( row, width )
{   x1 = row_table_x1[row];  x2 = row_table_x2[row];
    if ( ( fabs ( x2 - x1 ) + width ) <= layout_width )
        return ( possible, x2 );
    else
        return ( not_possible );
}

void place_cell ( gate )
{   next_pin = pointer to list of input pins;
    place_one_cell ( gate );
    for ( i = 0; i < ( gate->no_of_inputs ); i++ )
    {   place_cell ( next_pin );
        next_pin = next_pin->next;
    }
}

void place_one_cell ( gate )
{   find_best_place ( gate );
    place cell on layout surface ( gate );
}



Figure 2 Sankeerna algorithms 

As shown in Figure 2, the "place_ckt" function places the trees pointed to by the Primary Outputs (POs) one after the other using the "place_cell" function, starting at row 1 at x = 0 and y = 0. The "place_cell" function works as follows.

• Place the cell pointed to by the root using "place_one_cell".

• For each input, if it is a primary input, place it using "place_one_cell"; if not, call "place_cell" with this input recursively.




The "place_one_cell" function finds the best place for the cell using the "find_best_place" function and places the cell at this location. The "find_best_place" function works as follows. As the placement progresses, the "used space" and "available space" are marked; C is the current cell and the next cell will be placed as close to it as space allows. The current cell placement is depicted in Figure 3.

The "find_best_place" function checks the availability of space using the "check_availability_on_row" function on the same row as C and on the rows above and below the current row of C. The possible places for a cell H which are nearer to a cell G are shown in Figure 3. Out of the available locations, the one with the minimum distance from the parent cell is chosen and the cell is placed there. The "check_availability_on_row" function keeps two pointers x1 and x2 for each row, both initialised to zero before the start of placement. When this function is invoked, it gets the two pointers x1 and x2 from the table entry for the row and calculates whether, in the available space, a cell of the given "width" can be placed such that the row width remains less than or equal to the layout width. If space is available, it returns x2 as the available location on this row.
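The fragment below is a compact, self-contained rendering of this row-table check, with assumed row data and layout width; it follows the logic of "check_availability_on_row" in Figure 2 but is not the tool's actual code.

/* Sketch: per-row table of used extent [x1, x2] and the availability test
 * performed before placing a cell of a given width on that row. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define MAX_ROWS 64

static double row_x1[MAX_ROWS];          /* left edge of the used span   */
static double row_x2[MAX_ROWS];          /* right edge of the used span  */
static double layout_width = 100.0;      /* assumed core width           */

static bool check_availability_on_row(int row, double width, double *x)
{
    if (fabs(row_x2[row] - row_x1[row]) + width <= layout_width) {
        *x = row_x2[row];                /* place the cell at the current x2 */
        return true;
    }
    return false;                        /* row full: try rows above/below  */
}

int main(void)
{
    double x;
    row_x1[0] = 0.0; row_x2[0] = 96.5;   /* row 0 is almost full */
    if (check_availability_on_row(0, 5.0, &x))
        printf("place at x = %.1f\n", x);
    else
        printf("row 0 is full, try the rows above or below\n");
    return 0;
}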




[Figure 3 shows the layout as seen by find_best_place: rows 1, 2, ..., c, c+1, c+2, each divided into a "used" span and an "available" span delimited by the pointers x1 and x2, with the current cell C on row c and the layout width marked.]

Figure 3: Find_best_place possible locations and the layout seen by find_best_place.

Complexity of the algorithm

If n is the number of cells in the circuit, the "place_one_cell" function is invoked n times. "place_one_cell" calls "find_best_place", which uses "check_availability_on_row", so "find_best_place" is also executed n times. Each time, it calculates wire lengths for the possible locations to choose the best one, and "check_availability_on_row" performs one comparison operation. So the number of operations is linearly proportional to n after construction of the tree, and the complexity of the algorithm is of the order of n.



V. Illustration of SANKEERNA with an Example 

The algorithms used in Sankeerna are illustrated with an example whose logic equation is given 
below, taken from [49]. 

Y = d(uj + dfft + dey + hh + hi + ch + ci)

The logic diagram with the technology mapped cells, and the tree built by Sankeerna for this logic with the slacks, are shown in Figures 4 and 5, along with the sequence of placement based on time criticality.



[Figure 4 shows the technology mapped logic diagram for the example, with the slack value annotated at each gate.]

Figure 4: Logic diagram with technology mapped cells for the example equation.

The sequence of placement is indicated by the numbers 1-26 shown at each node of the tree. There are 9 primary inputs marked a, b, c, d, e, f, g, h, i and one primary output marked Y. Sankeerna places the Primary Output cell Y first at x = 0 in row 1. Then it looks at its leaf cells Ua356 and Ua358. From the time criticality given in Figure 5, it places cell Ua356 after finding the best place. The algorithm is then recursively invoked to place the tree with Ua356 as root, which places the cells and the inputs in the sequence 3 to 21. Once the placer completes the placement of the tree pointed to by Ua356 as root, it starts placing the tree pointed to by cell Ua358, and the cells marked 22 to 26 are placed. This completes the placement of the whole circuit. Primary Inputs and Primary Outputs are re-adjusted after placing all the cells. The final placement is shown in Figure 6(a), with the cell names, sequence numbers and pin locations of the cells. These diagrams are taken from the Cadence® SOC Encounter® back end tool [58]. The placement given by the Public Domain Placer (PDP) [59, 60, 41, 61] for this example is shown in Figure 6(b). The layouts of these placements after carrying out detailed routing with Cadence® SOC Encounter® [58] are shown in Figure 7. The results are shown in Table 1, Serial Number 24, as "example".











[Figure 5 shows the tree built by Sankeerna for the example, rooted at the primary output Y (cell UY) with child cells Ua356 and Ua358 and their transitive fan-in cells; each node is annotated with its slack and with its placement sequence number 1-26.]

Figure 5: Tree built by Sankeerna for the example.



[Figure 6 shows (a) the Sankeerna placement and (b) the PDP placement of the example, with the placement sequence numbers of the cells marked.]

Figure 6: Sankeerna and Public Domain Placer (PDP) placements of the example.

[Figure 7 shows the routed layouts of the example for the Sankeerna and PDP placements.]

Figure 7: Sankeerna and PDP layouts of the example.

The experimental set up to evaluate Sankeerna using benchmark circuits is explained in the next 
section. 

VI. Test setup 

In this section, we describe the test setup used to evaluate Sankeerna. The test setup is shown in Figure 8.



[Figure 8 shows the test setup. Sankeerna flow: the benchmark circuit in PLA format and the 0.13 micron standard cell library in genlib format are read by the SIS synthesizer, which produces a cell mapping file ("print_gate"), a delay file with the slack at every node ("print_delay") and the synthesized circuit in BLIF format; the BLIF is converted to Bookshelf format and, together with the technology library in Sankeerna format, fed to Sankeerna, which outputs a structural Verilog netlist (".v") and a placement in DEF format (".def"); Cadence SOC Encounter reads the Verilog, ".lef" and ".def" files, routes the design and reports delay (WNS and TNS), area, wire length, placement information and CPU time. PDP flow: the same Bookshelf benchmark, with technology cell dimensions and pin locations, is placed by the Public Domain Placer using the same aspect ratio, converted back to DEF and Verilog, and routed and analysed in SOC Encounter in the same way; the results of the two flows are collected into a comparison table.]

Figure 8: Test setup showing the Sankeerna VLSI design flow.

The benchmark circuits are taken in the form of a PLA. Some of them are from the MCNC benchmarks, and we also added a few other circuits such as multiplexers, adders and multipliers. We have used a 0.13 micron, 8 metal layer standard cell technology library for these experiments. The public domain SIS synthesizer [52] is used for synthesizing the benchmark circuits.

We created the library file required for SIS in genlib format using the information from the data sheets and the ".lef" files of this particular technology. Three files, namely (a) a delay file, (b) a cell mapping file and (c) a BLIF file, are generated from SIS. The delay file consists of the node name and the slack at each node; only block delays and fan-out delays are considered, and no wire delay or wire load models are used. This delay file is created using the information generated by the "print_delay" command of SIS. The cell mapping file consists of the node name and the mapped library cell name, and is created using the information generated by the "print_gate" command of SIS. The BLIF file is created using the "write_blif" command of SIS. The BLIF output is then converted into Bookshelf format using the public domain tools available at the web site [59, 60, 41, 61] via the utility "blif2book-Linux.exe filename.blif filename". Using the 0.13 micron standard cell technology files, Sankeerna generates a file in Bookshelf format using the cell dimensions and pin locations of the given technology library. This file is used for placement by Sankeerna and also by the Public Domain Placer (PDP) [59].
Benchmarks are placed in the PDP flow using "time LayoutGen-Lnx32.exe -f filename.aux -AR 1.5 -saveas filename" [59]. The width-to-height ratio is 3:2, the same as for Sankeerna. Sankeerna gives the placement output in ".def" format [62] (the ".def" file), and the mapped netlist is given out in the form of a structural Verilog file (the ".v" file). Cadence® SOC Encounter® v06.10-p005_1 [58] is used for routing and for calculating the delay of the placement produced. The Verilog (.v) and placement (.def) files are read into SOC Encounter®. The 0.13 micron standard cell technology files, consisting of ".lef" files and timing library files (".tlf"), are read into SOC Encounter. We did a trial route and then
detailed routing using Cadence NanoRoute® v06.10-p006 [40]. The delays Worst Negative Slack 
(WNS) and Total Negative Slack (TNS) "from all inputs to all outputs" are noted. The CPU time for 
detailed routing is also noted. Various other characteristics of placed circuits namely standard cell 
area, core area, wire length and Worst Negative Slack (WNS) are noted. Results for various 
benchmarks are shown in Table 1. We compared the results produced by Sankeerna with a Public 
Domain Placer (PDP). The PDP flow is also shown in Figure 8. The PDP uses the benchmark 
circuits in bookshelf format. The BLIF file generated by SIS is converted into bookshelf format with 
cell dimensions and pin locations as per the 0.13 micron standard cell library. We have used same 
aspect ratio for Sankeerna and PDP. The aspect ratio for these experiments was 0.4 for height and 0.6 
for width. To have a fair comparison, the Primary Inputs and Primary Outputs are placed at the 
boundary of the core for both placements produced by Sankeerna and PDP. The output from PDP is 
in bookshelf format (.pi, .scl files). This is converted into ".def ' format. The netlist is generated in the 
form of structural verilog file. All the utilities used to convert the above file formats are developed as 
software package of Sankeerna. The verilog, ".def, the technology file (.lef file) and timing files (.tlf 
files) are read into Cadence® SOC Encounter® [58]. The detailed routing and delay calculations are 
carried out. A Linux machine with dual Intel® Pentium® 4 CPU @ 3.00GHz and 2 GB memory was 
used for running Sankeerna and PDP. For running SOC Encounter® [58], Sun Microsystems SPARC 
Enterprise® M8000 Server with 960 MHz CPU and 49 GB memory was used. The results are 
tabulated in Table 1. 
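For reference, a minimal driver of the two command-line steps quoted above (the BLIF-to-Bookshelf conversion and the PDP placement) might look as follows; the benchmark name "alu4" and the "-saveas" output name are placeholders, and the interactive SIS and SOC Encounter steps are not scripted here.

/* Sketch: chaining the quoted conversion and PDP placement commands for one
 * benchmark.  File names are placeholders; error handling is minimal. */
#include <stdio.h>
#include <stdlib.h>

static int run(const char *cmd)
{
    printf(">> %s\n", cmd);
    int rc = system(cmd);                        /* invoke the external tool */
    if (rc != 0)
        fprintf(stderr, "command failed (rc = %d): %s\n", rc, cmd);
    return rc;
}

int main(void)
{
    /* Convert the synthesized BLIF netlist to Bookshelf format. */
    if (run("blif2book-Linux.exe alu4.blif alu4") != 0)
        return 1;
    /* Place the Bookshelf benchmark with the Public Domain Placer,
     * using the same 3:2 aspect ratio as Sankeerna. */
    if (run("time LayoutGen-Lnx32.exe -f alu4.aux -AR 1.5 -saveas alu4_pdp") != 0)
        return 1;
    return 0;
}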

VII. Results and Discussion 

The Table 1 shows the results of the placed circuits using existing Public Domain Placer (PDP) [59] 
and Sankeerna. 

There is an average improvement of 24% in delay (Worst Negative Slack, WNS), 8% in area and 75% in wire length after detailed routing with Cadence NanoRoute [58] when compared to the Public Domain Placer (PDP). Sankeerna uses only 3% extra area over the standard cell area, as shown under "S" in Table 1, whereas PDP uses 12%. In the case of the bigger benchmarks, namely alu4, e64, mul5510 and add889, there are thousands of DRC violations with PDP; this is due to the increased wire length, so those placements are not useful.

To compare Sankeerna with a commercial tool, we conducted the following experiment. We placed the benchmark circuit "alu4" using Sankeerna and did the detailed routing and delay calculation as mentioned earlier, noting the results given by Sankeerna. We specified the same width and height in the Commercial Tool Placer (CTP) after reading the Verilog file into the tool, ran the timing driven placement of the commercial tool and then carried out detailed routing. The CPU times used for Sankeerna and the Commercial Tool Placer (CTP) are given in Table 2. SOC Encounter® took 2:27 of CPU time for detailed routing using NanoRoute® [58] for the Sankeerna placement, without any DRC violations. SOC Encounter® took 6:45 of CPU time for detailed routing using NanoRoute for CTP's timing driven placement, with 144 DRC violations in metal layer 1. The layouts produced for the Sankeerna placement and CTP's placement are shown in Figure 9. The black marks in the CTP layouts in Figure 9 are the DRC violations; these are shown separately in the same figure (extreme right block). Since there are so many DRC violations, the layout is not useful.




Figure 9: Layouts of ALU4 with Sankeerna & CTP.



Table 1 Test Results 





Name 


Std 
Cell 
Area 


Pre- 

place 

WNS 


Sankeerna 


PDP 


% of area 
increase 
over Std 


improvement s 
over PDP 


Area 


\TO5 


Wire 
Len- 
gth 


DPX 
violations 


Area 


WX5 


Wiie 
Length 


DRC 
violations 


Cell 


Ar 
ea 


Wire 
Lens 
th 




S 


PDP 


1 


5xpl 


912 


-106 


927 


-1.274 


2102 





1C13 


-1.752 


403 7 


O 


T 




S 


92 


3S 


t 


9s.ym 


14S2 


-1 19 


1512 


-1.56B 


5256 





1647 


-1.858 


8125 


o 


2 




S 


55 


IS 


3 


alu4 


6723 


-241 


6773 


-3.237 


33296 





7470 


-5.5 28 


895 2G 


13434 


: 




9 


169 


71 


4 


bl2 


536 


-0_71 


be: 


-Z 3SZ 


1195 


c 


5B6 


-O.B91 


1476 


O 


3 




3 


24 


5 


5 


clip 


:2::£ 


-1 51 


1290 


-1.7&2 


3371 





1409 


-2.099 


6170 


o 


2 




S 


S3 


19 


6 


cmS2a 


156 


^0.54 


173 


-C.SC3 


:c- 





173 


-0.598 


233 


o 


11 







14 


-1 




i o mp 


1190 


-124 


1222 


-1.467 


29- : 


r- 


1322 


-l.BC^ 


5391 


o 


3 




s 


S3 


30 


S 


conl 


139 


^0.42 


14S 


-C.4B9 


19S 





155 


^D.515 


249 


o 


6 




5 


26 


5 


9 


cordic 


732 


-CSS 


754 


-1.152 


1406 





813 


-1.250 


2645 


z 


3 




~ 


SS 


9 


10 


count 


1275 


-0.97 


1290 


-1.23B 


3535 





1416 


-1,466 


5563 


z 


■ 




9 


66 


LS 


11 


e64 


7O02 


-Z.9c 


7031 


-1.657 


44024 





7780 


-3.316 


94434 


17461 







10 


115 


100 


12 


ex5 


3057 


-0.97 


3133 


-1.334 


131S1 





3357 


-1.899 


23916 


o 


2 




S 


SI 


42 


13 


mi&exl 


497 


^0.76 


509 


^0.9O2 


999 





553 


-0.967 


153 6 


o 


T 




s 


54 


7 


14 


misex2 


SS4 


^0 64 


927 


-0.765 


22S4 





983 


-0.941 


3475 


o 


5 




6 


52 


23 


15 


muxS-i 


175 


-0.59 


1SB 


-C.655 


-i -i ■ 


o 


194 


-:.7ie 


31D 


o 


S 




3 


33 


9 


16 


c6- 


1514 


-0.6S 


15 5B 


-O.S16 


2603 





1682 


-1.026 


7562 


z 


3 




7 


191 


26 


17 


rd53 


341 


^0.71 


35 & 


-Z.S27 


640 





3 79- 


-O.S54 


739 


z 


4 




6 


15 


3 


:S 


rd73 


1044 


-1_22 


ICES 


-1.4^1 


2~:5 


D 


1160 


-1.E34 


4622 


D 


■ 




p 


~ : - 


2"^ 


19 


rdS4 


949 


-1.06 


974 


-1.347 


2513 


O 


1C54 


-1..702 


4351 


O 


3 




3 


73 


26 


20 


sao2 


1O0S 


-1.26 


1G59 


-1.39D 


2564 





1120 


-1.792 


4359 


z 


5 




5 


70 


29 


-i ■ 


*quar5 


475 


-0.S5 


492 


^0.962 


943 





528 


-0.995 


131^ 


z 


4 




" 


39 


3 


22 


:-?: 


217 


-0.65 


231 


-2.699 


22. S 





241 


-C.7E2 


285 


o 


6 




4 


25 


9 


23 


Z9s.ym 


745 


-1.23 


7S4 


-1495 


1976 





828 


-1.836 


295 2 


o 


5 




5 


49 


23 


24 


example 


115 


-0.60 


127 


-C&5 • 


137 





168 


-0.605 


241 


o 


10 


45 


24 


76 


-i 


25 


add556 


1516 


-1 .23 


I55S 


-1.5SC 


4737 





1684 


-1.984 


7529 


o 


3 




" 


59 


27 


26 


mul55IO 


S553 


-3.02 


3626 


^.765 


44913 





9504 


-7.325 


1144SO 


17586 


1 




9 


155 


54 


27 


mul44S 


2161 


-1.S5 


2203 


-2.520 


7O0S 





2-Z1 


-3.039 


14457 


o 


T 




3 


106 


21 


2S 


add667 


234S 


-1.55 


2390 


-2.140 


S079 





2608 


-2.741 


14472 


- 


T 




3 


79 


2S 


29 


addSS9 


5 70S 


-1.S7 


: _ -2 


-3.06S 


26907 





6342 


-4.172 


62149 


2398 


1 




9 


131 


36 


30 


add? IS 


3422 


-l.S" 


34S6 


-2.7S6 


12462 





3794 


-3,294 


22303 


z 


T 


11 


3 


79 


IS 




.Average 3 


|T 


3 


75 


2- 



Table 2 shows the results for other benchmarks, compared with the commercial tool's timing driven placer (CTP).

Table 2 Comparing Sankeerna with Commercial Tool Placer (CTP) 



SI 
No 


Name 


Std 
Cell 
Area 


Gore 
Area 


Floor Plan 


Sankesrn a 


CTP 


Dimension 


CPU 
Time 


WNS 


TNS 


DRC 
viola- 
tions. 


CPU 
Time 


WNS 


TNS 


DRC 
viola- 
tions 


Width 


Heifiht 


1 


alu4 


&723 


6773 


96. 60 


7C.11 


2:27 


-3.24 


-23.22 


C 


6:45 


-3.35 


-21.53 


144 


2 


e64 


7CC2 


7031 


ICC. 23 


7C.11 


3:21 


-1.66 


-93.93 


C 


12:57 


-1.33 


-7 3. CI 


127 


3 


mLi 5510 


3553 


3626 


111.32 


77.49 


1:17 


-4.77 


-39. 70 


C 


1C:C^ 


-4.47 


-35.11 


233 


4 


addSS9 


57 C 3 


5742 


91.54 


62.73 


2:42 


-3.07 


-21.97 


O 


10:03 


-2.67 


-19.29 


103 



For the same floor plan dimensions, CTP took more CPU time and could not produce a final usable placement due to several DRC violations, as shown in the last column of Table 2. Sankeerna placements are always routed without DRC violations, because wire planning is already done using the shortest wires; this avoids iterations between the placement and detailed routing steps. Even though the WNS and TNS values shown in the table are comparable, because of the DRC violations the layout

produced by CTP is not useful; hence the delay values have no meaning in the case of CTP. So our method took much less time and produced better results than the Commercial Tool Placer (CTP). The pre-place delay is computed without taking wire delays into consideration. We have not used wire load models in SIS, because wire load models are not good estimates of the final delay [2]. The post-route delay includes the delay due to wires and is the final delay of the circuit. We have compared the pre-place delay with the post-route delay in Table 3.

Table 3 Pre-place delay versus post routed delay comparison 



SI 


FMam e- 


Prepl 3 ce 


.i'-crrr Jt-e- &ft*ct 


PDP 


■S'-er.rF Jt e- er jtkx 


PDP 


WNS 


WNS 


WNS 


WNS Ditt 

3£ 


wr-j.5 

□ iff ?£ 


i 


a <_i^ 


-2..^1 


-3.3& 


-S.5Z 


39.39 


12S.5C 


2. 


e&^ 


-^.36 


-1.72 


-3.3C 


7B.&1 


2-« . S S 


3 


e>c5 


-:.97 


-1.32 


-1.B.5 


36.37 


31.19 


— 


nu 55i: 


-3.0Z 


-^."7"? 


-S.B-* 


57.57 


126.12 


.5 


e — ■ ■_■ ^^-S 


-1.S3 


-2.5 2 


-3.ZB 


33.69 


62.3 3 


& 


addESS 


-l.B.7^ 


-3.0"? 


-- ^. 3. 3 


e-^2^ 


1 20. B2 


V 


add778 


- 1 JBTT 


-2.7-5 


-3.^1 


-^9": 2 


B2.6 5 




51.30 


122.09 



The percentage difference in Worst Negative Slack for Sankeerna placements is much smaller (51.30% versus 122.09%) than for the Public Domain Placer (PDP). This is because PDP uses more wire than Sankeerna.

Cadence® SOC Encounter® [58] has a facility called "Trial Route", which performs quick global and detailed routing, creating actual wires and estimating routing-related congestion and capacitance values. We did a trial route of the benchmarks and noted the wire lengths, WNS and TNS values for the Sankeerna and PDP placements. The results are compared in Table 4.





Table 4: Trial route versus detailed (Nano) route wire length and delay comparison.




(Columns: Sl. No.; circuit name; trial-route wire length and WNS for Sankeerna and for PDP; Nano-route wire length and WNS for Sankeerna and for PDP; percentage difference from trial route to Nano route in wire length and WNS, for Sankeerna and for PDP. The individual row values for the seven benchmark circuits and the average row are garbled in the source; the average differences are discussed below.)



The percentage differences from trial route to detailed (Nano) route are shown in the last four columns of Table 4 for both Sankeerna and PDP. For Sankeerna, wire length decreased by 0.82% and WNS increased by 1.19%. The PDP placements took 10.39% more wire, and WNS increased by 7.49%. So the delays produced by Trial Route are good estimates in the case of Sankeerna when compared to the PDP placements, and we can therefore get a quick estimate of delay for Sankeerna-produced placements. All these advantages add towards tighter coupling of the VLSI design flow, as envisaged in [4], to evolve a Synergistic Design Flow. Sankeerna scales well as the complexity increases, because we are using constructive placement. The placement was found to converge after routing in all the test cases we have tried, that is, the layout generated was without DRC violations after routing. To prove this point, we have taken the bigger benchmarks of Table 1, and the improvements obtained are shown in Table 5.
As can be seen from Table 5, Sankeerna took only 1.3% extra area over the standard cell area, whereas PDP took 11% extra area. The 1.3% extra area is left at the right edges of all rows; this is the bare minimum area required to make the placement possible. The area improvement of Sankeerna over PDP is 8.8%. The most interesting fact is that the wire length is 114.4% more for PDP when compared to Sankeerna. WNS improved by 46.2%. There are several DRC violations after detailed routing in the case of PDP; hence those layouts are not useful. Sankeerna used the bare minimum area and produced better timings which were routable.





Vol. 2, Issue 1, pp. 73-89 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 









Table 5 As complexity increases, Sankeerna performs better than PDP.






(Columns: Sl. No.; circuit name; standard cell area; pre-place WNS; for Sankeerna: % area increase over standard cell area, area, WNS, wire length and DRC violations; for PDP: area, WNS, wire length and DRC violations; % improvements of Sankeerna over PDP in area, wire length and WNS. The rows cover the eight larger benchmarks, including alu4, e64, mul551, add889 and add778; most individual values are garbled in the source. The average row shows 1.3% extra area for Sankeerna versus 11% for PDP, an 8.8% area improvement, a 114.4% wire length improvement and a 46.2% WNS improvement.)



Area, wire length and delay are interrelated. If we use minimum area, as is the case with Sankeerna, we require less wire length. This in turn leads to minimum delay. So using the Sankeerna flow avoids design iterations, because the flow is tightly coupled from synthesis to layout. Thus Sankeerna produced compact layouts which are better in delay when compared to the Public Domain Placer (PDP) and the Commercial Tool timing-driven Placer (CTP).

VIII. Conclusions and Future scope 

We now summarise the features of Sankeerna for fitting it into a Synergistic Design Flow (SDF) as 
mentioned in Section 3. 

The first requirement was (a) linear time complexity with respect to the number of cells in the circuit. The Sankeerna placement algorithms have linear time complexity after construction of the tree. In a tightly coupled Synergistic Design Flow (SDF), the trees are already built by the synthesizer and can be used directly by the placer, so there is no need to construct them separately. Due to its linear time complexity, Sankeerna scales well as the circuits become bigger.

The second requirement was (b) awareness of synthesis and routing assumptions and expectations, that is, tight coupling of synthesis and routing as mentioned in [4]. We have used logic dependency and timing information from the synthesizer. Using this information, Sankeerna properly guides the placement as mentioned in [46, 28, 29, 30, 47]. As shown in Table 3, the pre-place to post-routed delay variation for Sankeerna was 51.30%, compared to 122.09% for the Public Domain Placer (PDP). The values vary from 39.39% to 78.6% for Sankeerna, whereas the variation for PDP was from 62.33% to 242.99%, depending on the circuit. So Sankeerna is more closely coupled to the synthesizer's estimates than PDP. Sankeerna placements were always routed without DRC violations, as shown in Tables 1 and 2, whereas PDP had thousands of violations for the bigger circuits even after using 11% extra space, compared to Sankeerna, which used only 0% to 1% extra over the area calculated by the synthesizer, which is the bare minimum over the standard cell area. For the same floor plan dimensions, the Commercial Tool's timing-driven Placer (CTP) produced hundreds of DRC violations, as shown in Table 2, compared to zero DRC violations for Sankeerna. In Sankeerna, routability is achieved without white space allocation, because the placements produced by Sankeerna use minimum-length wires. As mentioned in Section 2, white space allocation increases area, which in turn increases wire length. In conclusion, the placements produced by Sankeerna were always routable because it uses minimum wire when compared to PDP and CTP.

The third requirement was (c) achieving minimum area and delay. The area increase was only 1.3% over the standard cell area calculated by the synthesizer; this value for PDP was 11%. As shown in Table 5, Sankeerna performed better as the complexity increased when compared to the Public Domain Placer (PDP). Wire length improved by 114.4% and delay by 46.2% when compared to PDP.
The fourth requirement was (d) that the placement produced should be routable without Design Rule Check (DRC) violations, that is, wire planning has to be done during placement. As shown in Table 5, PDP could not produce usable placements due to congestion, and it resulted in thousands of DRC violations in four of the eight test cases. So the design flow did not converge in the case of PDP, and it is unpredictable because in some cases it converged. In the case of Sankeerna, it always converged and convergence is





Vol. 2, Issue 1, pp. 73-89 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

predictable due to the minimum wire length and the proper wire planning done during placement. As shown in Table 2, the same non-convergence and unpredictability were noticed in the case of the Commercial Tool's Placer (CTP).

The fifth requirement was (e) that the delay of the final layout should be predictable with trial routes. As shown in Table 4, for Sankeerna, wire length decreased by 0.82% from trial route to detailed route, whereas it increased by 10.39% for PDP. The delay increase was 1.19% for Sankeerna, whereas it was 7.49% for PDP. Thus, the wire planning done by Sankeerna was maintained after routing, whereas it varied in the case of PDP. The trial route was a good estimate for Sankeerna. So there is convergence and tight coupling between placement and routing in the case of Sankeerna.

The sixth requirement was (f) that the placer should smoothly interface with synthesis and routing tools. As shown in Figure 8, the test set-up showing the Sankeerna design flow, Sankeerna smoothly interfaces with the existing synthesizer and router. We have not used any wire load models during synthesis, as it was demonstrated that they are not useful [2]. The slack at each node and the cell mapping information were already available from the synthesizer. The router was interfaced through a verilog netlist and a "def" [62] file for placement information. Pre-place and trial-route delay calculations were done by the commercial tool [58], which were good estimators in the case of Sankeerna for a real standard cell technology library. As can be seen from the experiments, there were no design iterations among synthesis, placement and routing in the case of Sankeerna to achieve the results shown.
The area, wire length and delay calculations of the pre-place, trial and post-route stages were done by the commercial tool. This ensures that there is no error in measuring these values while conducting these experiments.

The features and effectiveness of Sankeerna in comparison with other published placement techniques are elaborated here. In Sankeerna the cells which are logically dependent are placed close to each other [29], whereas in other placement algorithms the cells are randomly scattered and create zigzags and criss-crosses, which leads to increases in congestion, wire length and delay. The random scattering of the cells also leads to unpredictability in the final layout, which results in non-convergent iterations. Because wires are shorter in our placement and wires are planned by selecting the closest locations during placement, congestion is less and detailed routing always gets completed using minimum area for wires. This automatically leads to minimum delays. The most critical paths automatically get higher priority, without going in for path-based placement, which grows exponentially with circuit complexity and is computationally expensive. As can be seen from Figure 5, Sankeerna first establishes the most critical path, and the rest of the logic is placed around it based on the logic dependency. This is similar to the denser path placement of [30]. So the most critical path is placed along the physically and delay-wise shortest path, as mentioned in Section 3. Since our method is constructive, it scales well for bigger circuits; we are planning to test with bigger test cases in future. The circuit is naturally partitioned when the trees, rooted at Primary Outputs (POs), are built by Sankeerna, so there is no additional burden of extracting cones as in [27, 29] or partitioning the circuit as is the case in most placers. Global signal flow is kept in mind throughout the placement by using Depth First Search (DFS) for all sub-trees rooted at various levels of logic, unlike other placement methods, which randomly scatter the cells. Trial route can be used for a quick estimate of delay, which is a good estimate in the case of Sankeerna, as explained earlier. As mentioned in [2], using wire load models misleads the whole design process, resulting in non-convergence, so the Sankeerna flow does not use wire load models. The Sankeerna flow is always convergent and tightly coupled, and it gives estimates of area, wire length and delay, using existing layout tools like the Trial route of Cadence's SOC Encounter® [58], which are not far from the values obtained after detailed routing. Thus the Sankeerna approach is useful towards evolving a Synergistic Design Flow (SDF), which is to create iteration loops that are tightly coupled at the various levels of the design flow, as mentioned in [4].
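To make the DFS-ordered, dependency-driven placement idea concrete, here is a minimal Java sketch; it is not the authors' implementation, and the tree, cell names and slot numbering are illustrative assumptions only. It walks one PO-rooted tree depth-first so that logically dependent cells receive adjacent slots, in time linear in the number of cells.

```java
import java.util.*;

public class DfsPlacementSketch {
    // children maps each cell to the cells in its fan-in sub-trees (hypothetical netlist).
    static void place(String cell, Map<String, List<String>> children,
                      Map<String, Integer> slotOf, int[] nextSlot) {
        slotOf.put(cell, nextSlot[0]++);                  // assign the next free slot to this cell
        for (String child : children.getOrDefault(cell, List.of()))
            place(child, children, slotOf, nextSlot);     // recurse depth-first into its drivers
    }

    public static void main(String[] args) {
        // Toy tree rooted at a primary output "PO"; the structure is purely illustrative.
        Map<String, List<String>> children = Map.of(
                "PO", List.of("g1", "g2"),
                "g1", List.of("g3", "g4"),
                "g2", List.of("g5"));
        Map<String, Integer> slotOf = new LinkedHashMap<>();
        place("PO", children, slotOf, new int[]{0});
        System.out.println(slotOf);   // {PO=0, g1=1, g3=2, g4=3, g2=4, g5=5}
    }
}
```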

Acknowledgements 

We thank Dr. K.D. Nayak who permitted and guided this work to be carried out in ANURAG. We 
also thank members of ANURAG who reviewed the manuscript. Thanks are due to Mrs. D. 
Manikyamma and Mr. D. Madhusudhan Reddy for preparation of the manuscript. 





Vol. 2, Issue 1, pp. 73-89 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

References 

[1] Randal E. Bryant, et al., (2001), "Limitations and Challenges of Computer-Aided Design Technology
for CMOS VLSI", Proceedings of the IEEE, Vol. 89, No. 3, pp 341-365.

[2] Gosti, W., et al., (2001), "Addressing the Timing Closure Problem by Integrating Logic Optimization 

and Placement" , ICCAD 2001 Proceedings of the 2001 IEEE/ACM International Conference on 

Computer-aided design, San Jose, California , pp 224-231. 
[3] Shahookar K & Mazumder P, (1991), "VLSI cell placement techniques" ACM Computing Surveys, 

Vol. 23, No. 2. 
[4] Kurt Keutzer., et al., (1997), "The future of logic synthesis and physical design in deep-submicron 

process geometries", ISPD '97 Proceedings of the international symposium on Physical design, ACM 

New York, NY, USA, pp 218-224. 
[5] Coudert, O, (2002), "Timing and design closure in physical design flows", Proceedings. International 

Symposium on Quality Electronic Design (ISQED '02), pp 511 - 516. 
[6] Wilsin Gosti , et al., (1998), "Wireplanning in logic synthesis", Proceedings of the IEEE/ACM 

international conference on Computer-aided design, San Jose, California, USA, pp 26-33. 
[7] Yifang Liu, et al., (2011), "Simultaneous Technology Mapping and Placement for Delay 

Minimization ", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 

Vol. 30 No. 3, pp 416-426. 
[8] Pedram, M. & Bhat, N, (1991), "Layout driven technology mapping", 28th ACM/IEEE Design 

Automation Conference, pp 99 - 105. 
[9] Salek, A.H., et al., (1999), "An Integrated Logical and Physical Design Flow for Deep Submicron 

Circuits", IEEE Transactions on Computer- Aided Design of Integrated Circuits and Systems, Vol. 

18, No. 9, pp 1305-1315.
[10] Naveed A. Sherwani, (1995), "Algorithms for VLSI Physical Design Automation", Kluwer Academic 

Publishers, Norwell, MA, USA. 

[11] Sarrafzadeh, M., & Wong, C.K., (1996), "An introduction to VLSI Physical Design", The McGraw-Hill
Companies, New York. 

[12] Jason Cong, et al., (2005), "Large scale Circuit Placement" , ACM Transactions on Design Automation 

of Electronic Systems, Vol. 10, No. 2, pp 389-430. 
[13] Yih-Chih Chou & Young-Long Lin, (2001), "Performance-Driven Placement of Multi-Million-Gate 

Circuits", ASICON 2001 Proceedings of 4th International Conference on ASIC, Shanghai, China, pp 1- 

11. 
[14] Yao-Wen Chang, Zhe-Wei Jiang and Tung-Chieh Chen, (2009), "Essential Issues in Analytical 

Placement Algorithms", Information and Media Technologies, Vol. 4, No. 4, pp.8 15-836 
[15] Bunglowala, A. & Singhi, B.M., (2008), "Performance Evaluation and Comparison and Improvement 

of Standard Cell Placement Techniques in VLSI Design", First international conference on Emerging 

Trends in Engineering and Technology, Nagpur, Maharashtra, 468 - 473. 
[16] B. Sekhara Babu, et al (2011), "Comparison of Hierarchical Mixed-Size Placement Algorithms for 

VLSI Physical Synthesis", CSNT '11 Proceedings of the 2011 International Conference on 

Communication Systems and Network Technologies, IEEE Computer Society Washington, DC, USA, 

pp 430-435. 
[17] Mehmet Can Yildiz & Patrick H. Madden, (2001), "Global objectives for standard cell placement" , 

GLSVLSI '01 Proceedings of the 11th Great Lakes symposium on VLSI, ACM New York, NY, USA, pp 

68-72. 
[18] C. J. Alpert, et al., (1997), "Quadratic Placement Revisited" , 34th ACM/IEEE Design Automation 

Conference, Anaheim, pp 752-757.
[19] N. Viswanathan et al., (2007), "FastPlace 3.0: A Fast Multilevel Quadratic Placement Algorithm with 

Placement Congestion Control", ASP-DAC '07 Proceedings of the 2007 Asia and South Pacific Design 

Automation Conference, IEEE Computer Society Washington, DC, USA, pp 135-140. 
[20] Spindler, P, et al., (2008), "Kraftwerk2 — A Fast Force-Directed Quadratic Placement Approach Using 

an Accurate Net Model", IEEE Transactions on Computer-Aided Design of Integrated Circuits and 

Systems, Vol. 27, No. 8, pp 1398-1411.
[21] Rexford D. Newbould & Jo Dale Carothers , (2003), "Cluster growth revisited: fast, mixed-signal 

placement of blocks and gates", Southwest Symposium on Mixed Signal Design, pp 243 - 248. 
[22] Carl Sechen & Alberto Sangiovanni-Vincentelli, (1985), "The TimberWolf Placement and Routing 

Package", IEEE Journal of Solid- State Circuits, vol. SC-20, No. 2, pp 510-522. 





Vol. 2, Issue 1, pp. 73-89 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

[23] Khorgade, M. et al., (2009), "Optimization of Cost Function with Cell Library Placement of VLSI 

Circuits Using Simulated Annealing", 2nd International Conference on Emerging Trends in 

Engineering and Technology (ICETET), Nagpur, India, pp 173 - 178. 
[24] Yoshikawa, M. & Terai, H., (2005), "A GA-based timing-driven placement technique", Sixth 

international conference on Computational Intelligence and Multimedia Applications, pp 74 - 79. 
[25] Andrew Kennings & Kristofer P. Vorwerk, (2006), "Force-Directed Methods for Generic Placement", 

IEEE Transactions on Computer Aided Design of Integrated Circuits and Systems, Vol. 25, NO. 10, pp 

2076-2087. 
[26] Chen Li et al., (2007), "Routability -Driven Placement and White Space Allocation", IEEE 

Transactions on Computer-Aided Design of Integrated Circuits and Systems, Volume: 26 Issue:5, pp 

858-871. 
[27] Yu-Wen Tsay, et al., (1993), "A Cell Placement Procedure that Utilizes Circuit Structural Properties" , 

Proceedings of the European Conference on Design Automation, pp 189-193. 
[28] Chanseok Hwang & Massoud Pedram, (2006), "Timing-Driven Placement Based on Monotone Cell 

Ordering Constraints" , Proceedings of the 2006 Conference on Asia South Pacific Design Automation: 

ASP-DAC 2006, Yokohama, Japan, pp 201-206. 
[29] Cong, J. & Xu, D, (1995), " Exploiting signal flow and logic dependency in standard cell placement" , 

Proceedings of the Asian and South Pacific Design Automation Conference, pp 399 - 404. 
[30] Ioannis Fudos et al,( 2008), "Placement and Routing in Computer Aided Design of Standard Cell 

Arrays by Exploiting the Structure of the Interconnection Graph", Computer- Aided Design & 

Applications, CAD Solutions, LLC, Canada, http://www.cadanda.com/, Vol. 5(1-4), pp 325-337. 
[31] Andrew B. Kahng & Qinke Wang, (2004), "An analytic placer for mixed-size placement and timing- 
driven placement" , Proceedings of International Conference on Computer Aided Design, pp 565-572. 
[32] Jun Cheng Chi, et al., (2003), "A New Timing Driven Standard Cell Placement Algorithm", 

Proceedings of International Symposium on VLSI Technology, Systems and Applications, pp 184-187. 
[33] Swartz, W., & Sechen, C, (1995), "Timing Driven Placement for Large Standard Cell Circuits", Proc. 

ACM/IEEE Design Automation Conference, pp 211-215. 
[34] Tao Luo, et al., (2006), "A New LP Based Incremental Timing Driven Placement for High 

Performance Designs", DAC '06 Proceedings of the 43rd Annual Design Automation Conference, 

ACM New York, NY, USA, pp 1115-1120.
[35] Wern-Jieh Sun & Carl Sechen, (1995), "Efficient and effective placement for very large circuits",. 

IEEE Transactions on CAD of Integrated Circuits and Systems, Vol. 14 No. 3, pp 349-359. 
[36] Marek-Sadowska, M. & Lin, S.P. (1989), "Timing driven placement" IEEE international conference 

on Computer- Aided Design, Santa Clara, CA , USA , pp 94 - 97 
[37] Wilm E. Donath, et al., (1990), "Timing driven placement using complete path delays", Proceedings of 

the 27th ACM/IEEE Design Automation Conference, ACM New York, NY, USA, pp 84 - 89. 
[38] Bill Halpin, et al, (2001), "Timing driven placement using physical net constraints" , Proceedings of the 

38th annual Design Automation Conference, ACM New York, NY, USA, pp 780 - 783. 
[39] Xiaojian Yang, et al., (2002), "Timing -driven placement using design hierarchy guided constraint 

generation" Proceedings of the 2002 IEEE/ACM international conference on Computer-aided design, 

ACM New York, NY, USA, pp 177-180. 
[40] Riess, B.M. & Ettelt, G.G, (1995), "SPEED: fast and efficient timing driven placement" , IEEE 

International Symposium on Circuits and Systems, Seattle, WA , USA, vol.1, pp 377 - 380. 
[41] Saurabh Adya and Igor Markov, (2003), "On Whitespace and Stability in Mixed-size Placement and 

Physical Synthesis", International Conference on Computer Aided Design (ICCAD), San Jose, pp 311- 

318. 
[42] Taraneh Taghavi, et al, (2006), "Dragon2006: Blockage-Aware Congestion-Controlling Mixed- Size 

Placer", ISPD '06 Proceedings of the 2006 international symposium on Physical design, ACM New 

York, NY, USA, pp 209-211.
[43] Yi-Lin Chuang, et al., (2010), "Design-hierarchy aware mixed-size placement for routability 

optimization" , IEEE/ACM International Conference on Computer-Aided Design (ICCAD), San Jose, 

CA, USA, pp 663 - 668. 
[44] Xiaojian Yang et al., (2002), "A standard- cell placement tool for designs with high row utilization", 

Proceedings of IEEE International Conference on Computer Design: VLSI in Computers and 

Processors, pp 45-47. 
[45] C. Chang, J. Cong, et al., (2004), "Optimality and Scalability Study of Existing Placement Algorithms", 

IEEE Transactions on Computer-Aided Design, Vol.23, No.4, pp.537 - 549. 





Vol. 2, Issue 1, pp. 73-89 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

[46] Santeppa Kambham & Krishna Prasad K.S.R, (2011), "ANUPLACE: A Synthesis Aware VLSI Placer 

to minimize timing closure", International Journal of Advances in Engineering & Technology, Vol. 1, 

Issue 5, pp. 96-108. 
[47] Santeppa Kambham et al., (2008), "New VLSI Placement algorithms to minimize timing closure 

problem", International conference on Emerging Microelectronics and Interconnection Technology 

(EMIT-08), IMAPS, Bangalore, India. 
[48] Brayton R K, et al., (1990), "Multilevel Logic Synthesis", Proceedings of the IEEE, Vol. 78, No. 2, pp

264-300. 
[49] Brayton R K, et al.,(1987), "MIS: A Multiple-Level Logic Optimization System", IEEE Transactions on 

Computer Aided Design, Vol. 6, No. 6, pp 1062-1081.
[50] Rajeev Murgai, et al.,(1995), "Decomposition of logic functions for minimum transition activity", 

EDTC '95 Proceedings of the European conference on Design and Test, pp 404-410. 
[51] Fujita, M. & Murgai, R, (1997), "Delay estimation and optimization of logic circuits: a survey", 

Proceedings of Asia and South Pacific Design Automation Conference, Chiba, Japan, pp 25 - 30. 
[52] Sentovich, E.M., et al., (1992), "SIS: A System for Sequential Circuit Synthesis", Memorandum No. 

UCB/ERL M92/41, Electronics Research Laboratory, University of California, Berkeley, CA 94720. 
[53] M. Fischer & M. Paterson, (1980), "Optimal tree layout (preliminary version)", STOC '80 Proceedings

of the twelfth annual ACM symposium on Theory of computing, ACM New York, NY, USA, pp. 

177-189. 
[54] S. Chatterjee, et al., (2007) "A linear time algorithm for optimum tree placement", Proceedings of 

International Workshop on Logic and Synthesis, San Diego, California, USA 
[55] M. Yannakakis, "A polynomial algorithm for the min-cut linear arrangement of trees", Journal of 

ACM, vol. 32, no. 4, pp. 950-988, 1985. 
[56] Andrew Caldwell, et al., (1999), "Generic Hypergraph Formats, rev. 1.1", from

http://vlsicad.ucsd.edu/GSRC/bookshelf/Slots/Fundamental/HGraph/HGraph1.1.html.
[57] Jason Cong, et al, (2007), "UCLA Optimality Study Project", from 

http://cadlab.cs.ucla.edu/~pubbench/. 
[58] Cadence®, (2006), "Encounter® Menu Reference", Product Version 6.1, Cadence® Design Systems, 

Inc., San Jose, CA, USA. Encounter® is a trademark of Cadence Design Systems, Inc., San Jose,

CA, USA.
[59] Saurabh Adya & Igor Markov, (2005), "Executable Placement Utilities" from 

http://vlsicad.eecs.umich.edu/BK/PlaceUtils/bin. 
[60] Saurabh N. Adya, et al., (2003), "Benchmarking For Large-scale Placement and Beyond", 

International Symposium on Physical Design (ISPD), Monterey, CA, pp. 95-103. 
[61] Saurabh Adya and Igor Markov, (2002), "Consistent Placement of Macro-Blocks using Floorplanning 

and Standard-Cell Placement" , International Symposium of Physical Design (ISPD), San Diego, 

pp.12-17. 
[62] Cadence®, (2004), "LEF/DEF Language Reference", Product Version 5.6, Cadence® Design Systems, 

Inc., San Jose, CA, USA. 

Authors 

Santeppa Kambham obtained B.Tech. in Electronics and Communication engineering from 

J N T U and M Sc (Engg) in Computer Science and Automation (CSA) from Indian Institute of 

Science, Bangalore. He worked in Vikram Sarabhai Space Centre, Trivandrum from 1982 to 

1988 in the field of microprocessor based real-time computer design. From 1988 onwards, he 

has been working in the field of VLSI design at ANURAG, Hyderabad. He received DRDO 

Technology Award in 1996, National Science Day Award in 2001 and "Scientist of the Year 

Award" in 2002. He is a Fellow of IETE and a Member of IMAPS and ASI. A patent has been 

granted to him for the invention of a floating point processor device for high speed floating point arithmetic 

operations in April 2002. 




Siva Rama Krishna Prasad Kolli received B.Sc. degree from Andhra University, DMIT in 
electronics from MIT, M.Tech. in Electronics and Instrumentation from Regional Engineering 
College, Warangal and PhD from Indian Institute of Technology, Bombay. He is currently 
working as Professor at Electronics and Communication Engineering Department, National 
Institute of Technology, Warangal. His research interests include analog and mixed signal IC 
design, biomedical signal processing and image processing. 






Vol. 2, Issue 1, pp. 73-89 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 



A New Variant of Subset-Sum Cryptosystem over 

RSA 

Sonal Sharma 1, Saroj Hiranwal 2, Prashant Sharma 3
1,3 M.Tech Student, Sri Balaji College of Engineering & Technology, Jaipur, Rajasthan
2 Reader (CSE Dept.), Sri Balaji College of Engineering & Technology, Jaipur, Rajasthan



Abstract 

RSA is an algorithm for public-key cryptography that is based on the presumed difficulty of factoring large integers, the factoring problem. RSA stands for Ron Rivest, Adi Shamir and Leonard Adleman, who first publicly described it in 1978. A user of RSA creates and then publishes the product of two large prime numbers, along with an auxiliary value, as their public key. The prime factors must be kept secret. In RSA, if one can factor the modulus into its prime numbers then the private key is also revealed and hence the security of the cryptosystem is broken. The Subset-Sum cryptosystem (Knapsack cryptosystem) is also an asymmetric cryptographic technique. The Merkle-Hellman system is based on the subset sum problem (a special case of the knapsack problem): an instance of the subset sum problem is a pair (S, t), where S = {x1, x2, ..., xn} is a set of positive integers and t (the target) is a positive integer. The problem asks for a subset of S whose sum is as large as possible, but not larger than t; it is NP-complete. However, if the set of numbers (called the knapsack) is superincreasing, that is, each element of the set is greater than the sum of all the numbers before it, the problem is easy and solvable in polynomial time with a simple greedy algorithm. So in this paper we present a new algorithm (Modified Subset-Sum cryptosystem over RSA) which is secure against the mathematical, brute-force, factorization and chosen-ciphertext attacks on RSA as well as Shamir's attack. This paper also presents a comparison between the Modified Subset-Sum cryptosystem and the RSA cryptosystem in respect of security and performance.

KEYWORDS: Cryptography, Subset Sum, Public Key, Private Key, RSA, Merkle-Hellman, Superincreasing, Complexity

I. Introduction 

To solve the problem of secure key management in symmetric key cryptography, Diffie and Hellman introduced a new approach to cryptography and, in effect, challenged cryptologists to come up with a cryptographic algorithm that met the requirements for public-key systems. Public key cryptography uses a pair of related keys, one for encryption and the other for decryption. One key, called the private key, is kept secret, and the other one, known as the public key, is disclosed; this eliminates the need for the sender and the receiver to share a secret key. The only requirement is that public keys are associated with the users in a trusted (authenticated) manner through a public key infrastructure (PKI). Public key cryptosystems are the most popular, due to both confidentiality and authentication facilities [1]. The message is encrypted with the public key and can only be decrypted by using the private key. So the encrypted message cannot be decrypted by anyone who knows only the public key, and thus secure communication is possible.

In a public-key cryptosystem, the private key is always linked mathematically to the public key. 
Therefore, it is always possible to attack a public-key system by deriving the private key from the 
public key. The defense against this is to make the problem of deriving the private key from the public 
key as difficult as possible. Some public-key cryptosystems are designed such that deriving the 





Vol. 2, Issue 1, pp. 90-97 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

private key from the public key requires the attacker to factor a large number. The Rivest-Shamir-Adleman (RSA) and Subset-Sum (Knapsack) public key cryptosystems [2] are the best known examples of such systems. This paper presents a hybrid cryptographic algorithm, based on the factoring problem as well as the subset-sum problem, called the Modified Subset-Sum over RSA public key cryptosystem (SSRPKC).

1.1 Euler's Phi-Function

Euler's phi-function, φ(n), which is sometimes called Euler's totient function, gives the number of integers that are both smaller than n and relatively prime to n; that is, φ(n) counts the number of elements in that set [3].
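For a modulus that is the product of two distinct primes, which is the case used in the key-generation steps later in this paper, the totient has a simple closed form; a worked instance with small illustrative primes:

$$\varphi(n) = (p-1)(q-1) \quad \text{for } n = p \cdot q,\; p \neq q \text{ prime}; \qquad \varphi(7 \cdot 11) = 6 \cdot 10 = 60.$$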

In this paper, we compare and evaluate the RSA cryptosystem and the modified subset-sum cryptosystem by implementing and running them on a computer. We investigate the issues of complexity, efficiency and reliability by running the algorithms with different sets of values. Moreover, comparisons are made between the two algorithms given the same data as input.

The rest of the paper is organized as follows. Section 2 describes the RSA cryptosystem, which depends on factoring large integers. Section 3 introduces subset-sum cryptography, which depends on a superincreasing order and relates to an NP-complete problem. Section 4 presents our modified algorithm. Section 5 compares the two algorithms, RSA and the modified subset-sum cryptosystem. A conclusion is given in Section 6.

II. RSA Cryptosystem 

RSA is based on the principle that some mathematical operations are easier to do in one direction but the inverse is very difficult without some additional information. In the case of RSA, the idea is that it is relatively easy to multiply but much more difficult to factor: multiplication can be computed in polynomial time, whereas factoring time can grow exponentially with the size of the number. RSA consists of three steps [4]:
Step 1) Key Generation Process

1. Generate two large random primes, p and q, of approximately equal size such that their product n = p × q is of the required bit length, e.g. 1024 bits.

2. Compute n = p × q and φ = (p-1) × (q-1).

3. Choose an integer e, satisfying 1 < e < φ, such that gcd(e, φ) = 1.

4. Compute the secret exponent d, 1 < d < φ, such that e × d ≡ 1 (mod φ).

5. The public key is (n, e) and the private key is (n, d). Keep all the values d, p, q and φ secret.

6. n is known as the modulus.

7. e is known as the public exponent or encryption exponent or just the exponent.

8. d is known as the secret exponent or decryption exponent.

The public key (n, e) is published for everyone and the private key (p, q, d) must be kept secret. Then, by using these keys, encryption, decryption, digital signing and signature verification are performed.
Step 2) Encryption Process
Sender A does the following:

1. Obtains the recipient B's public key (n, e).

2. Represents the plaintext message as a positive integer m.

3. Computes the cipher text c = m^e mod n.

4. Sends the cipher text c to B.
Step 3) Decryption Process

Recipient B does the following:

1. Uses the private key (n, d) to compute m = c^d mod n.

2. Extracts the plaintext from the message representative m.
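As a concrete illustration of the three steps above, the following minimal Java sketch uses java.math.BigInteger, the same library on which the implementation in Section 5 is built; the 512-bit primes and the sample message block are illustrative values only, not the parameters used in the paper's experiments.

```java
import java.math.BigInteger;
import java.security.SecureRandom;

public class RsaSketch {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();

        // Step 1: key generation (512-bit primes here, purely for illustration)
        BigInteger p = BigInteger.probablePrime(512, rnd);
        BigInteger q = BigInteger.probablePrime(512, rnd);
        BigInteger n = p.multiply(q);
        BigInteger phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
        BigInteger e = BigInteger.valueOf(65537);          // common public exponent
        while (!phi.gcd(e).equals(BigInteger.ONE))
            e = e.add(BigInteger.valueOf(2));              // keep e odd until gcd(e, phi) = 1
        BigInteger d = e.modInverse(phi);                  // secret exponent: e*d = 1 (mod phi)

        // Step 2: encryption of a message block m < n
        BigInteger m = new BigInteger("1234567890123456789");
        BigInteger c = m.modPow(e, n);                     // c = m^e mod n

        // Step 3: decryption
        BigInteger recovered = c.modPow(d, n);             // m = c^d mod n
        System.out.println("round trip ok: " + recovered.equals(m));
    }
}
```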

2.1 Security of RSA 

The security of RSA cryptosystem is also broken by two attacks based on factorization attack and 
chosen-cipher text attacks [9]. 

A) Factorization Attack: The security of RSA is based on the idea that the modulus is so large that it is infeasible to factor it in a reasonable time. Bob selects p and q and calculates n = p × q. Although n is

91 | Vol. 2, Issue 1, pp. 90-97 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

public, p and q are secret. If Eve can factor n and obtain p and q, she can calculate φ = (p-1) × (q-1).

Eve can then calculate d ≡ e⁻¹ (mod φ), because e is public. The private exponent d is the trapdoor that Eve

can use to decrypt any encrypted message [9].

B) Chosen-Cipher text Attack: A potential attack on RSA is based on the multiplicative property of 

RSA[9]. 

III. Sub-set Sum Cryptography 

In computer science, the subset sum problem is an important problem in complexity theory and cryptography. The problem is this: given a set of integers, does the sum of some non-empty subset equal exactly zero? For example, given the set {-7, -3, -2, 5, 8}, the answer is yes because the subset {-3, -2, 5} sums to zero. The problem is NP-complete. There are two problems commonly known as the subset sum problem. The first is the problem of finding which subset of a list of integers has a given sum, which is an integer relation problem where the relation coefficients a_i are 0 or 1. The second subset sum problem is the problem of finding a set of n distinct positive real numbers with as large a collection as possible of subsets having the same sum [4].

The subset sum problem is a good introduction to the NP-complete class of problems. There are two 
reasons for this [4]- 

• It is a decision and not an optimization problem 

• It has a very simple formal definition and problem statement. 
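Although the general problem is NP-complete, the superincreasing case used by Merkle-Hellman-style schemes (and by the modified algorithm in the next section) is solvable by a simple greedy scan. A minimal Java sketch, with an illustrative superincreasing set and target that are not taken from the paper:

```java
import java.util.Arrays;

public class SuperincreasingSubsetSum {
    // Greedy decoding: scan the superincreasing set from largest to smallest,
    // taking an element whenever it still fits into the remaining target.
    static int[] solve(long[] a, long target) {
        int[] x = new int[a.length];
        long remaining = target;
        for (int i = a.length - 1; i >= 0; i--) {
            if (a[i] <= remaining) {
                x[i] = 1;
                remaining -= a[i];
            }
        }
        return remaining == 0 ? x : null;   // null means no representation exists
    }

    public static void main(String[] args) {
        long[] a = {2, 3, 7, 14, 30, 57, 120, 251};   // each element exceeds the sum of all before it
        long t = 2 + 7 + 120 + 251;                    // target built from a known selection
        System.out.println(Arrays.toString(solve(a, t)));   // prints [1, 0, 1, 0, 0, 0, 1, 1]
    }
}
```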

IV. Modified Algorithm 

In this section we introduce a new approach to public key cryptosystems. Modified Subset Sum (MSS) is an asymmetric-key cryptosystem in which two keys are required: a public key and a private key. Furthermore, unlike RSA, it is one-way: the public key is used only for encryption, and the private key is used only for decryption. Thus it cannot be used for authentication by cryptographic signing. The modified algorithm consists of three steps.
Step 1) Key Generation Process

1. Generate two large random primes, p and q, of approximately equal size such that their product n = p × q is of the required bit length, e.g. 1024 bits (using the BigInteger library functions of Java).

2. Compute n = p × q and φ = (p-1) × (q-1).

3. Choose an integer e, satisfying 1 < e < φ, such that gcd(e, φ) = 1.

4. Compute the secret exponent d, 1 < d < φ, such that e × d ≡ 1 (mod φ).

5. Choose a superincreasing set A = (a1, ..., an).

6. Choose an integer M with M > SUM_{i=1..n} (ai). M is called the modulus.

7. Choose a multiplier W such that gcd(M, W) = 1 and 1 <= W < M. This choice of W guarantees an inverse element U: U × W ≡ 1 (mod M).

8. To get the components bi of the public key B, compute bi = ai × W mod M, i = 1, ..., n.
The superincreasing property of A is concealed by the modular multiplication.

The public key is (B, n, e) and the private key is (A, M, W, n, d). Keep all the values d, p, q and φ secret.

The public key is published for everyone and the private key must be kept secret. Then, by using these keys, encryption and decryption are performed.

Step 2) Encryption of Message

Sender A does the following:

1. The length of a message to be encrypted is fixed by the parameter n prior to encryption; a possibly larger message has to be divided into n-bit groups.

2. Let p = (p1, p2, ..., pn) be the message block to be encrypted.

• The intermediate ciphertext c is obtained by computing c = b1 p1 + b2 p2 + ... + bn pn.

• Compute the cipher text C1 = c^e mod n.

• Send the cipher text C1 to B.
Step 3) Decryption of Message

Recipient B does the following:






Vol. 2, Issue 1, pp. 90-97 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

1. Uses the private key and first computes m1 = C1^d mod n.

2. Then computes c' = U × m1 mod M = W⁻¹ × c mod M.

3. Now solve (A, c'). Because A is superincreasing, (A, c') is easily solvable with the greedy method. Let X = (x1, ..., xn) be the resulting vector; then pi = xi and p = (p1, ..., pn) is the plaintext.
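Putting the three steps together, the following Java sketch round-trips one message block through the scheme as stated above. It is a toy illustration, not the authors' implementation, and the superincreasing set, knapsack modulus, primes and message bits are all small values chosen only so the arithmetic is easy to follow.

```java
import java.math.BigInteger;
import java.util.Arrays;

public class SsrpkcSketch {
    public static void main(String[] args) {
        // Private knapsack parameters (tiny illustrative values)
        long[] a = {2, 3, 7, 14, 30, 57, 120, 251};          // superincreasing set A, sum = 484
        BigInteger M = BigInteger.valueOf(491);              // knapsack modulus, M > sum(A)
        BigInteger W = BigInteger.valueOf(41);               // multiplier with gcd(W, M) = 1
        BigInteger U = W.modInverse(M);                      // U*W = 1 (mod M)

        // Public knapsack B: b_i = a_i * W mod M
        BigInteger[] b = new BigInteger[a.length];
        for (int i = 0; i < a.length; i++)
            b[i] = BigInteger.valueOf(a[i]).multiply(W).mod(M);

        // RSA part (toy primes, far too small for real use)
        BigInteger p = BigInteger.valueOf(10007), q = BigInteger.valueOf(10009);
        BigInteger n = p.multiply(q);
        BigInteger phi = p.subtract(BigInteger.ONE).multiply(q.subtract(BigInteger.ONE));
        BigInteger e = BigInteger.valueOf(65537), d = e.modInverse(phi);

        // Encryption of one message block
        int[] msg = {1, 0, 1, 0, 0, 0, 1, 1};
        BigInteger c = BigInteger.ZERO;
        for (int i = 0; i < msg.length; i++)                 // c = b1*p1 + ... + bn*pn
            if (msg[i] == 1) c = c.add(b[i]);
        BigInteger c1 = c.modPow(e, n);                      // C1 = c^e mod n

        // Decryption: undo RSA, undo the knapsack disguise, then greedy-solve (A, c')
        BigInteger m1 = c1.modPow(d, n);                     // m1 = C1^d mod n
        long cPrime = U.multiply(m1).mod(M).longValue();     // c' = U*m1 mod M
        int[] out = new int[a.length];
        for (int i = a.length - 1; i >= 0; i--)              // greedy works since A is superincreasing
            if (a[i] <= cPrime) { out[i] = 1; cPrime -= a[i]; }
        System.out.println(Arrays.toString(out));            // recovers msg: [1, 0, 1, 0, 0, 0, 1, 1]
    }
}
```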

V. Comparison of RSA and SSRPKC Cryptosystems 

5.1 Simulation Results 

For the simulation purpose, the SSRPKC cryptosystem is implemented with a user-friendly GUI. This GUI application is implemented using the Java BigInteger library functions [5]. In this application, one can either enter the prime numbers or specify the bit length of the prime numbers to generate them automatically. The BigInteger library provides operations for modular arithmetic, GCD calculation, primality testing, prime generation, bit manipulation, and a few other miscellaneous operations. The simulation of the algorithms, implemented in Java [5], was run on a 2 GHz P-IV processor with 512 MB RAM, using a 1000-character message for encryption/decryption. The PKC algorithms (RSA & SSRPKC) have some important parameters affecting their level of security and speed [6]. The complexity of decomposing the modulus into its factors is increased by increasing the modulus length. This also increases the length of the private key and hence the difficulty of detecting the key. Another parameter is the number of items in set A. As the number of items in set A increases, the size of the message which is encrypted at a time also increases; hence the security also increases, as does the difficulty of detecting the private set A from the public set B. The RSA and SSRPKC parameters are changed one parameter at a time while the others are kept fixed, to study their relative importance. The key generation, encryption and decryption times depend on the speed of the processor and the RAM. Table 1 shows the simulation results of both algorithms.

5.1.1 Changing the modulus length: 

Changing the modulus affects the other parameters of the algorithms, as shown in Table 1. It is clear here that increasing the modulus length (bits) increases the bit length of its factors and so the difficulty of factoring it into its prime factors. Moreover, the length of the secret key (d) increases at the same rate as the n-bit length increases. As a result, increasing the n-bit length provides more security. On the other hand, increasing the n-bit length increases the key generation time, encryption time and decryption time. Hence increasing the n-bit length increases the security but decreases the speed of the encryption, decryption and key generation processes, as illustrated by Figures 1 and 2.



Figure 1. Modulus size v/s RSA & SSRPKC algorithms' execution time, taking the number of items in set A as 128 and the size of the public key as 128 bits.





Vol. 2, Issue 1, pp. 90-97 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

Table 1: Effect of changing the modulus length and the size of set A on the size of the private key, key generation time, encryption time and decryption time, while the size of the public key is kept constant (128 bits)



Size of N (bit) | Size of d (bit) | Items in set A | SSRPKC key gen (ms) | SSRPKC enc (ms) | SSRPKC dec (ms) | SSRPKC total (ms) | RSA key gen (ms) | RSA enc (ms) | RSA dec (ms) | RSA total (ms)
128  | 128  | 32   | 16    | 235  | 78    | 329   | 16   | 94   | 62    | 172
128  | 128  | 64   | 15    | 94   | 47    | 156   | -    | 16   | 47    | 63
512  | 512  | 32   | 125   | 1203 | 1766  | 3094  | 109  | 563  | 1719  | 2391
512  | 512  | 64   | 63    | 344  | 875   | 1282  | 63   | 141  | 859   | 1063
512  | 512  | 128  | 78    | 172  | 453   | 703   | 47   | 78   | 422   | 547
1024 | 1024 | 32   | 688   | 5407 | 11328 | 17423 | 688  | 1719 | 12172 | 14579
1024 | 1024 | 64   | 453   | 6593 | 5735  | 12781 | 453  | 2968 | 5688  | 9109
1024 | 1024 | 128  | 562   | 516  | 2859  | 3937  | 515  | 219  | 3344  | 4078
1024 | 1024 | 512  | 12812 | 187  | 781   | 13780 | 281  | 47   | 735   | 1063
2048 | 2048 | 32   | 3735  | 9563 | 85140 | 98438 | 3719 | 3688 | 85672 | 93079
2048 | 2048 | 64   | 1563  | 3625 | 42437 | 47625 | 1563 | 1688 | 43234 | 46485
2048 | 2048 | 128  | 7125  | 6266 | 20734 | 34125 | 7078 | 3829 | 21406 | 32313
2048 | 2048 | 512  | 17797 | 797  | 5281  | 23875 | 7703 | 375  | 6172  | 14250
2048 | 2048 | 1024 | 29704 | 422  | 2797  | 32923 | 2891 | 203  | 3406  | 6500





Figure 2. Modulus size v/s key generation time, taking the size of the public key as 128 bits and the number of items in set A as 128.

5.1.2 Changing the number of items in set A: 

On the basis of the simulation results in Table 1, Figure 3 shows the effect of the number of items on the encryption and decryption times of both algorithms. The key generation time of the SSRPKC algorithm depends on the number of items in set A, and as the number of items increases the key generation time also increases; that is why, for more than 500 items in set A, the execution time of the SSRPKC algorithm also increases. The RSA algorithm's execution time does not depend on the number of items in set A.






Vol. 2, Issue 1, pp. 90-97 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 



■o 

c 


150000 


o 
o 

0) 
(/) 


100000 






J, 


50000 


a> 

E 











500 1000 1500 

Number of items in set A 



Figure 3. Number of items in set A v/s SSRPKC algorithm's execution time, taking the size of the modulus as 2048 bits and the size of the public key as 128 bits.

5.2 Complexity analysis of SSRPKC cryptosystem 

The RSA cryptosystem is a well-known asymmetric cryptosystem in which the computational complexity of both encryption and decryption is of the order of k^3, where k is the number of bits of the modulus n [7]. The computational complexity of the subset sum problem is O(k) if a superincreasing set with the greedy approach is used, where k is the number of items in set A [8]. So the computational complexity of the SSRPKC cryptosystem is of the order of O(k + k^3), i.e. O(k^3), for encryption, and of the order of k^3 for decryption. So the computational complexity of SSRPKC is equivalent to that of the RSA cryptosystem. The simulation results of both algorithms show that the execution time of SSRPKC is about 1.2 times more than that of RSA.
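Writing k for the modulus bit length (and, with the same abuse of notation as above, also for the number of knapsack items), the cost estimate used here can be summarised as:

$$T_{\mathrm{enc}}^{\mathrm{SSRPKC}}(k) = O(k) + O(k^{3}) = O(k^{3}), \qquad T_{\mathrm{dec}}^{\mathrm{SSRPKC}}(k) = O(k^{3}) + O(k) = O(k^{3}),$$

which is the same asymptotic order as RSA's O(k^3) per operation.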

5.3 Security analysis of SSRPKC cryptosystem 

Two possible approaches to attacking the SSRPKC / RSA algorithms are as follows [1, 9]:

1. Brute force: this involves trying all possible private keys.

2. Mathematical attacks: these are based on factoring the product of large primes, i.e. factoring n into its prime factors p and q and then calculating φ, which, in turn, enables determination of d = e⁻¹ mod φ.

Estimated resources needed to factor a number within one year are as follows [9].
Table 2: Resources needed to factor a number within one year



Size of number (bits) | PCs | Memory
430  | 1           | 128 MB
760  | 215,000     | 4 GB
1020 | 342 x 10^6  | 170 GB
1620 | 1.6 x 10^15 | 120 TB






Vol. 2, Issue 1, pp. 90-97 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

As the SSRPKC cryptosystem is based on a hybrid combination of the subset sum problem and the RSA cryptosystem, one has to factor the modulus into its primes as well as find the secret set A to break the SSRPKC algorithm [2, 7]. If RSA, which is based on a single modulus, is broken in time x and a subset-sum based algorithm is broken in time y, then the time required to break the SSRPKC algorithm is x*y. So the security of the SSRPKC algorithm is increased compared to the RSA algorithm, and this shows that the SSRPKC algorithm is more secure against mathematical attacks [1].

In SSRPKC a double decryption is performed and, unlike RSA, the scheme is based not only on the private key but also on the subset sum problem, so one cannot break SSRPKC by guessing the private key alone. This shows that the SSRPKC algorithm is more secure than RSA against brute force attack [1]. The RSA cryptosystem is based on integer factorization, which can be attacked by the Pollard rho method [4], the Pollard p-1 method [4] and the Fermat method [4]. Because of these methods, the security of the RSA cryptosystem can be broken by the factorization attack [3] and the chosen cipher text attack [3].
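Of the factoring methods just named, Pollard's rho method is the simplest to sketch. The Java version below, using BigInteger, factors a tiny illustrative composite (8051 = 83 x 97); it is only meant to show why small moduli fall quickly, and says nothing about the 1024-bit and larger sizes of Table 2.

```java
import java.math.BigInteger;

public class PollardRhoSketch {
    // Pollard's rho with f(x) = x^2 + 1 (mod n) and Floyd cycle detection.
    static BigInteger rho(BigInteger n) {
        BigInteger x = BigInteger.valueOf(2), y = x, d = BigInteger.ONE;
        while (d.equals(BigInteger.ONE)) {
            x = x.multiply(x).add(BigInteger.ONE).mod(n);   // tortoise: one step
            y = y.multiply(y).add(BigInteger.ONE).mod(n);   // hare: two steps
            y = y.multiply(y).add(BigInteger.ONE).mod(n);
            d = x.subtract(y).abs().gcd(n);
        }
        return d;   // a non-trivial factor, or n itself on the rare cycle failure
    }

    public static void main(String[] args) {
        BigInteger n = BigInteger.valueOf(8051);   // toy composite, 83 * 97
        BigInteger f = rho(n);
        System.out.println(n + " = " + f + " * " + n.divide(f));
    }
}
```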

As the SSRPKC cryptosystem uses dual moduli, to break it one has to factor both moduli. That is why our cryptosystem provides far better security against the factorization methods and attacks on the RSA cryptosystem.

VI. Conclusion 

This paper presents an extended version of the subset-sum problem over the RSA algorithm, called an extension of the cryptosystem through subset-sum over RSA. It relies on the mathematical fact that, given a very large number, it is practically impossible today to find the two prime numbers whose product is the given number; as the size of the number increases, the feasibility of factoring it decreases. In RSA, if one can factor the modulus into its prime numbers then one can obtain the private key too. To improve the security, the SSRPKC cryptosystem is developed. The disadvantage of the new cryptosystem is that, unlike RSA, it cannot be used for authentication, as it is based on a one-way function. Another disadvantage is the slower execution compared to RSA. But it is clear from the simulation results that it is more secure than the RSA algorithm, and our cryptosystem provides far better security against the factorization methods and attacks on the RSA cryptosystem.

References 

[1] William Stallings, "Cryptography and Network Security", ISBN 81-7758-011-6, Pearson Education, Third Edition, pages 42-62, 121-144, 253-297.

[2] Ralph C. Merkle, Martin E. Hellman. "Hiding Information and Signatures in Trapdoor Knapsacks", IEEE 

Transactions on Information Theory, vol. IT-24, 1978, pp. 525-530. 

[3] Behrouz A. Forouzan, Debdeep Mukhopadhyay, "Cryptography and Network Security", 2nd Edition, TMH.

[4] Alfred J Menezes, Paul C van Oorschot, Scott A Vanstone, Handbook of Applied Cryptography, CRC Press,

1997. 

[5] Neal R. Wagner, "The Laws of Cryptography with Java Code", Technical Report, 2003, pages 78-112.

[6] Allam Mousa , "Sensitivity of Changing the RSA Parameters on the Complexity and Performance of the 

Algorithm", ISSN 1607 - 8926, Journal of Applied Science, Asian Network for Scientific Information, pages 

60-63,2005. 

[7] RSA Laboratory (2009), "RSA algorithm time complexity", Retrieved from

http://www.rsa.com/rsalabs/node.asp?id=2215 (4 April 2011).

[8] Adi Shamir, A Polynomial Time Algorithm for Breaking the Basic Merkle-Hellman Cryptosystem. 

CRYPTO 1982, pp279-288. 

[9] CUHK, The Chinese University of Hong Kong (2009), "RSA Algorithm security and Complexity", Retrieved from

http://www.cse.cuhk.edu.hk/~phwl/mt/public/archives/old/ceg5010/rsa.pdf (26 Jan. 2011)

[10] Wenbo Mao, "Modern Cryptography: Theory and Practice", Prentice Hall.

[11] Bryan Poe, "Factoring the RSA Algorithm", Mat / CSC 494, April 27, 2005, pages 1-6.

[12] Adi Shamir, A Polynomial Time Algorithm for Breaking the Basic Merkle-Hellman Cryptosystem. 

CRYPTO 1982, pp279-288 

[13] A Method for Obtaining Digital Signatures and Public-Key Crypto Systems by R.L. Rivest, A. Shamir, and 

L. Adleman. 

[14] Menezes, A. J., Van Oorschot, P. C., and Vanstone, S. A., Handbook of Applied Cryptography, CRC Press,

1997. 

[15] B. Schneier, Applied Cryptography, New York: John Wiley, 1994.





Vol. 2, Issue 1, pp. 90-97 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

[16] http://www.certicom.com, "The Elliptic Curve Cryptosystem," September 1997, (dated 02-04-2010) 

[17] J H Moore, Protocol failures in Cryptosystems, Contemporary Cryptology, The science of Information 

Integrity, Ed. G J Simmons, 541-558, IEEE Press 1992. 

[18] Diffie, M Hellman, "New Directions in Cryptography", IEEE Transactions on Information Theory, Vol 22, 

1976. 

Author's Biography 

Sonal Sharma was born on 23 May 1984. She is M.Tech. Student in SBCET, Jaipur 
(Rajasthan). She has completed B.E. (I.T.) in 2006 from University of Rajasthan, Jaipur. 




Saroj Hiranwal was born on 20 Jan 1982. She has done B.E.(I.T.) in 2004 and MTECH(CSE) in 2006. Her 
teaching Experience is 6.5 year in the organization - Sri Balaji College of Engineering and Technology, Jaipur 
with the designation of Reader and HEAD. 



Prashant Sharma was born on 04 June 1985. He is the M.Tech. Student in SBCET, Jaipur 
(Rajasthan). He has completed B.E. (I.T.) in 2007 from University of Rajasthan, Jaipur. 







Vol. 2, Issue 1, pp. 90-97 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 



A Compact Dual Band Planar RMSA for 
WLAN/WIMAX Applications 

C. R. Byrareddy 1 , N. C. Easwar Reddy 2 , C. S. Sridhar 3 
1 Assistant Prof., Dept of E&C. Engg., Bangalore Institute of Technology, Bangalore, India
2 Prof., Dept of E&C. Engg., S.V. U. College of Engg., Tirupathi, India
3 Professor, Dept of E&C. Engg., Bangalore Institute of Technology, Bangalore, India



Abstract 

This paper presents a compact dual band planar rectangular microstrip antenna (RMSA) for WLAN (2.4 GHz, IEEE standard 802.11b/g) and WiMAX (2.6 GHz, IEEE standard 802.16e) applications. The two resonant modes of the presented RMSA are associated with the various lengths and widths of the planar strips, in which a centre strip contributes the lower resonant frequency of 2.4 GHz (2.26-2.50 GHz, with an impedance bandwidth of 240 MHz) and two lateral strips contribute the higher resonant frequency of 2.8 GHz (2.73-2.95 GHz, with an impedance bandwidth of 220 MHz). Proper adjustment of the coupling between the two lateral strips and the embedded centre strip enables dual band operation with a -10 dB return loss, a near directive radiation pattern and a good antenna gain with sufficient bandwidth. The antenna is simulated using Ansoft HFSS and fabricated on an FR4 substrate with dielectric constant 4.4 and thickness 1.6 mm, occupying an area of 65 mm x 50 mm. The simulation results are found to be in good agreement with the measured results. The proposed antenna is suitable for wireless communication applications requiring a small antenna.

KEYWORDS: Rectangular Microstrip Antenna (RMSA), Wireless Local Area Network ( WLAN), WiMAX, 
Strips, monopole dual band. 

I. Introduction 

Rapid progress in wireless communication services has led to an enormous challenge in antenna design. Patch antennas for dual and multi frequency band operation have become increasingly common, mainly because of their many advantages, such as low profile, light weight, reduced volume and compatibility with microwave integrated circuits (MIC) and monolithic microwave integrated circuits (MMIC). WLAN is one of the most important applications of wireless communication technology, taking advantage of licence-free frequency bands [ISM] for high speed connectivity between PCs, laptops, cell phones and other equipment. In the near future, WiMAX technology with different standards is going to occupy the market. Wireless data services have evolved and continue to grow using various technologies, such as 2G/3G. The impact of such diverse technologies is on the use of the frequency band: different technologies will need to occupy different frequency allocations, such as WLAN/WiMAX, which are likely to be prominent candidates to serve wireless data in the near future. Therefore there is a need to develop a dual band antenna for both WLAN and WiMAX applications, occupying the 2.4/2.6 GHz frequency bands.

Several papers on dual band antennas for the IEEE standards have been reported. [1-2] propose a printed double-T monopole antenna that can cover the 2.4/5.2 GHz WLAN bands but offers narrow bandwidth characteristics, and a planar branched monopole antenna for DCS/2.4 GHz WLAN that can provide an excellent wide frequency band with moderate gain. [3] The proposed planar monopole antenna is capable of generating a good omnidirectional monopole-like radiation pattern in all the frequency bands. [4-5] propose a printed dipole antenna with a parasitic element and an omnidirectional planar



"987 



Vol. 2, Issue 1, pp. 98-104 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

antenna for WiMAX applications, which can operate either in a wide band or in dual bands covering 3.25-3.85, 3.3-3.8 and 5.15-5.85 GHz with a return loss of -10 dB. [6] A broadband printed L-shaped antenna for wireless communication is reported with good radiation patterns and better return loss. [7] The physical design features proper geometry and dimensions for a microstrip antenna array using a λ/4 transformer for the feed line matching technique. [8] proposes a compact terminal antenna that incorporates open-end slots in the ground plane, which reduces size and operates with acceptable bandwidth. [10] The use of various feeding techniques can give dual or multiband operation.

In this paper a compact dual band antenna structure for WLAN and WiMAX is proposed. The proposed antenna is simple to design and offers effective control over the two operating bands by controlling the dimensions of three rectangular strips. The antenna can easily be fed using a 50Ω probe feed with the λ/4 transformer technique for impedance matching. The planar RMSA structure is also attractive from the packaging point of view. The advantage of the λ/4 feeding technique is that it matches the transmission line characteristic impedance to the input impedance.

II. Antenna Geometry and Design 

The geometry of the proposed antenna structure is shown in Figure 1. It is etched on a substrate of dielectric constant εr = 4.4 and thickness h = 1.6 mm, with loss tangent 0.09. The antenna has a ground plane of length Lg = 50 mm and width Wg = 65 mm. The radiating structure consists of three rectangular strips of length lp = 28.5 mm, with centre strip width wp1 = 18 mm and lateral strip width wp2 = 10 mm, separated by a slot gap of width g = 0.5 mm. The centre strip is fed by a designed 50Ω microstrip line of width 0.5 mm; the optimum feed is obtained with the λ/4 transformer method, with 3 mm width and 0.2 mm height, for good impedance matching, and it can thus be connected to an SMA connector. The resulting antenna resonates at 2.4 GHz and 2.8 GHz. From simulation and experimental studies, it is found that the dimensions of the middle rectangular strip are optimized to resonate at 2.4 GHz while the dimensions of the lateral symmetrical strips are optimized to resonate at 2.8 GHz. Thus the proposed antenna provides effective control of the two operating bands. In addition, the ground plane dimensions are also optimized to achieve the desired dual band operation, as they affect the resonant frequencies and the operating bandwidths in the two bands.
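As a rough companion to the feed description above, the following Java sketch evaluates the standard quarter-wave transformer relation Zt = sqrt(Z0 x Zload) and the corresponding physical length in the substrate. The 100-ohm load resistance and the effective permittivity of 3.3 are assumptions made only for illustration; they are not values reported in the paper.

```java
public class QuarterWaveMatch {
    public static void main(String[] args) {
        double z0 = 50.0;       // feed line characteristic impedance (ohm)
        double zLoad = 100.0;   // assumed input resistance to be matched (ohm)

        // Quarter-wave transformer impedance: Zt = sqrt(Z0 * Zload)
        double zt = Math.sqrt(z0 * zLoad);

        // Physical length of lambda/4 in the substrate at the lower band
        double freqHz = 2.4e9;  // lower operating band of the proposed antenna
        double effEps = 3.3;    // assumed effective permittivity for er = 4.4, h = 1.6 mm
        double c = 2.998e8;     // free-space speed of light (m/s)
        double quarterWaveM = c / (freqHz * Math.sqrt(effEps)) / 4.0;

        System.out.printf("Zt = %.1f ohm, quarter-wave length = %.1f mm%n",
                zt, quarterWaveM * 1e3);
    }
}
```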



[Figure 1 sketch: ground plane, radiating patch formed by a centre strip of width Wp1 and lateral strips of width Wp2, and a feed of dimensions 3 mm x 0.2 mm x g.]



Figure 1. Geometry of the proposed planar RMSA: a) side view, b) top view, c) simulated view.



"997 



Vol. 2, Issue 1, pp. 98-104 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

III. Simulation and Experimental Results 

Ansoft HFSS is used for simulation and analysis of the structure. The simulation process involves setting the optimum geometric dimensions to satisfy the desired centre frequency as well as the bandwidth requirement for a specified return loss in each band. The simulation was iterated until the desired results were obtained. The proposed antenna is fabricated using a photolithographic process, which gives good accuracy for the etched patterns. The fabricated antenna parameters are measured with an experimental characterization setup consisting of a Hewlett-Packard vector network analyzer with S-parameter test set and an anechoic chamber. Radiation patterns, E-field and H-field, S-parameters and gain were measured. The following sections describe the measured and simulated results. Measurement of return loss is most important because our main interest in this research is to produce dual band characteristics at the specified centre frequencies with sufficient impedance bandwidth.
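For reference, the -10 dB criterion used below follows the standard definitions (not specific to this paper):

    RL\,(\mathrm{dB}) = -20\log_{10}|\Gamma|, \qquad |\Gamma| = \frac{\mathrm{VSWR}-1}{\mathrm{VSWR}+1},

so a -10 dB return loss corresponds to |\Gamma| \approx 0.316 (VSWR \approx 1.9), i.e. about 10% of the incident power reflected.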

3.1 Return loss 

Figure 2 shows the simulated and experimental return loss of the proposed dual band antenna. From the simulation, the impedance bandwidth of the lower frequency band, determined by the -10 dB return loss criterion, is approximately 240 MHz (2.26-2.50 GHz), which is about 13% of bandwidth for the 2.4 GHz band. For the upper frequency band the impedance bandwidth is approximately 220 MHz (2.73-2.95 GHz), i.e. about 10% for the 2.8 GHz band. The centre frequencies of the two bands are set by adjusting the dimensions of the rectangular strips. To achieve the best results, the gap between the strips and the length of the 50 ohm microstrip feed line need to be controlled. The experimental curve shows that dual band operation is obtained with good matching at both resonances.




[Figure 2 plot: measured and simulated return loss (dB) versus frequency (GHz), 1.0-3.5 GHz.]



Figure 2. Simulated and measured return loss characteristics of the antenna structure.

3.2 Radiation Patterns 

The simulated and measured radiation patterns of the proposed antenna operating at 2.4 GHz and 2.8 GHz are shown in figures 3 and 4 respectively. It is found that the antenna has relatively stable radiation patterns over its operating bands, with a near omnidirectional pattern obtained in both bands. Because of the symmetry of the antenna structure, the radiation patterns are as good as those of a conventional monopole. As shown in the plots, asymmetrical radiation patterns are seen in the x-y and x-z planes. The measured radiation patterns are stable and quasi-omnidirectional over the entire operational band, which is highly suitable for the targeted modern wireless communication bands.












Figure 3. Simulated co-polar and cross-polar radiation patterns of the planar RMSA at 2.4 GHz and 2.8 GHz.





Figure 4. Measured radiation patterns at 2.4 GHz and 2.8 GHz.




3.3 Current Distribution Characteristics and 3-D plot 

A better understanding of the antenna behaviour can be obtained by analyzing the current distribution at the resonant frequency of 2.4 GHz, as shown in figure 5. It is evident that at 2.4 GHz the central strip acts as a quarter-wave monopole, whereas for the higher resonance the predominant effect is seen at the edges of the lateral strips.

Figure 6 shows the simulated 3D radiation pattern at 2.4 GHz. It is found that the planar antenna provides almost omnidirectional radiation coverage and can be used for WLAN applications.







Figure 5. Current distribution and E-field distribution of the planar RMSA.







Figure 6. Simulated 3D radiation pattern of the planar RMSA.

3.4 Gain characteristics 

The peak gain of the antenna was measured at each frequency point by the comparison method. Figure 7 shows the measured antenna gain versus frequency. The average gain is approximately 2.95 dBi in the 2.4 GHz band and approximately 3.8 dBi in the 2.8 GHz band, and the gain continues to increase at higher frequencies. Figure 8 shows a photograph of the fabricated planar RMSA, which was tested by measuring its parameters, particularly return loss and antenna gain, to validate the simulation results and verify the antenna design specification.






Figure 8. Photograph of the fabricated planar RMSA.



Figure 7. Measured peak antenna gain of the dual band planar RMSA at 2.4 GHz and 2.8 GHz.

IV. Conclusion 

A study of the construction and experimental verification of a compact dual band planar RMSA structure operating in the 2.4/2.8 GHz WLAN/WiMAX bands has been presented. The dual band operation is achieved by using three planar rectangular strips. Good bandwidth characteristics are observed, covering the 2.26-2.5 GHz and 2.73-2.95 GHz bands with impedance bandwidths of nearly 240 MHz in the lower band and 220 MHz in the upper band. The measured results exhibit good agreement with the simulations, except for small variations due to measurement error. From our design it is found that the lower resonant band is due to the centre strip and the higher resonant band is due to the lateral strips. Good impedance matching, gain and radiation patterns can be obtained by tuning the coupling between the rectangular strips and the gap, as well as the length of the feed line. Hence the proposed antenna is suitable for wireless local area network (WLAN) and multichannel multipoint distribution service (MMDS)/WiMAX communication applications.

Acknowledgement 

The authors are thankful to CUSAT, Cochin for providing lab facilities to fabricate and test. 

References 

[1] Y. L. Kuo and K. L. Wong, "Printed double-T monopole antenna for 2.4/5.2 GHz dual band WLAN operations", IEEE Transactions on Antennas and Propagation, vol. 51, no. 9, pp. 2187-2192, Sept. 2003.
[2] Suma, M. N., Raj, R. K., Joseph, M., Bybi, P. C., Mohanan, P., "A compact dual band planar branched monopole antenna for DCS/2.4 GHz WLAN applications", IEEE Microwave and Wireless Components Letters, vol. 16, issue 5, pp. 275-277, 2006.
[3] Das, S. K., Gupta, R. K., Kumar, G., "Dual band planar monopole antenna", IEEE Antennas and Propagation International Symposium 2006, pp. 1705-1708, July 2006.
[4] Chen, W.-S., Yu, Y.-H., "Dual band printed dipole antenna with parasitic element for WiMAX applications", Electronics Letters, vol. 44, issue 23, pp. 1338-1339, IET Journals, 2008.
[5] Ting-Ming Hsueh, Heng-Tung Hsu, Hsi-Tseng Chou, Kwo-Lun Hung, "Dual band omni-directional planar antenna for WiMAX applications", IEEE Antennas and Propagation Society International Symposium, pp. 1-4, AP-S 2008.
[6] Qinjiang Rao and Tayeb A. Denidni, "New broad band dual printed inverted L-shaped monopole antenna for tri-band wireless applications", Microwave and Optical Technology Letters, vol. 49, no. 2, pp. 278-280, Feb. 2007.
[7] Surjati, I. et al., "Dual band triangular microstrip antenna using slot feed by electromagnetic coupling", Proceeding Quality In Research, Faculty of Engineering, University of Indonesia, 2009.




[8] R. Hossa, A. Byndas and M. E. Bialkowski, "Improvement of compact terminal antenna performance by incorporating open-end slots in ground plane", IEEE Microwave and Wireless Components Letters, vol. 14, no. 6, pp. 283-285, 2004.
[9] Pozar, David M., Microwave Engineering, Second Edition, Wiley, New York, 1998.
[10] S. Rhee and G. Yun, "CPW fed slot antenna for triple-frequency band operation", Electronics Letters, vol. 42, no. 17, pp. 952-953, 2006.
[11] HFSS V-12, HFSS User Manual, Ansoft Corporation, USA.

Author Profiles 



C. R. BYRA REDDY was born in Karnataka, India in 1967. He graduated from Bangalore University with a B.E. degree in Instrumentation Technology and an M.E. degree in Electronics in 1990 and 1999 respectively. He is currently Assistant Professor in the Department of Electronics & Communication Engineering, Bangalore Institute of Technology, and a Ph.D. candidate at S.V. University College of Engineering, Tirupathi. His research interests are microwave communication, antennas and wireless communication, with a particular focus on the analysis and design of patch antennas for wireless communication. He has published 5 papers in national/international conferences and journals, and presented a paper at IconSpace2011 in Malaysia.

N. C. ESWAR REDDY was born in Andhra Pradesh, India in 1950. He received the B.E. degree from Andhra University in 1972, the M.Tech. degree from IIT Delhi in 1976 and the Ph.D. from S.V. University in 1985. He joined S.V. College of Engineering as a lecturer in 1976 and has served as reader, professor and principal in the same college. His areas of interest are microwave engineering, microprocessors, bio-signal processing and antennas. He has guided three Ph.D. candidates, published more than 32 papers in national and international journals, and attended more than 20 international conferences. He is a member of ISTE and IETE, an expert member of AICTE, and a selection committee member for the universities in and around Andhra Pradesh.



C. S. SRIDHAR was born in Bellary, Karnataka, and graduated with a B.E. degree from Andhra University in 1962, an M.Sc. (Engineering) from Madras University in 1966 and a Ph.D. from IIT Madras in 1975. He has been teaching since 1962 and has research interests in microwave antennas, signal processing architectures and VLSI design. He has published more than 35 papers in national and international journals and attended more than 60 international conferences. He is a life member of IETE and a member of IEEE.






VLSI Architecture for Low Power Variable 

Length Encoding and Decoding for Image 

Processing Applications 

Vijaya Prakash. A.M 1 & K.S. Gurumurthy 2 
1 Research Scholar, Dr. MGR University, Chennai, Faculty Department of ECE, BIT, 

Bangalore, India. 
2 Professor, Department of ECE, UVCE, Bangalore, India.



Abstract 

The image data compression has been an active research area for image processing over the last decade [1] 
and has been used in a variety of applications. This paper investigates the implementation of Low Power VLSI 
architecture for image compression, which uses Variable Length Coding method to compress JPEG signals [1]. 
The architecture is proposed for the quantized DCT output [5]. The proposed architecture consists of three 
optimized blocks, viz, Zigzag scanning, Run-length coding and Huffman coding [17]. In the proposed 
architecture, Zigzag scanner uses two RAM memories in parallel to make the scanning faster. The Run-length 
coder in the architecture, counts the number of intermediate zeros in between the successive non-zero DCT 
coefficients unlike the traditional run-length coder which counts the repeating string of coefficients to compress 
data [20]. The complexity of the Huffman coder is reduced by making use of a lookup table formed by arranging 
the [run, value] combinations in the order of decreasing probabilities with associated variable length codes 
[14]. The VLSI architecture of the design is implemented [12] using Verilog HDL with low power approaches. The proposed hardware architecture for image compression was synthesized using RTL Compiler and mapped onto 90 nm standard cells. Simulation is done using ModelSim, synthesis using RTL Compiler from CADENCE, and the back-end layout using IC Compiler. The power consumptions of the variable length encoder and decoder are limited to 0.798 mW and 0.884 mW with minimum area. The experimental results confirm that a 53% saving in the dynamic power of Huffman decoding [6] is achieved by including the lookup table approach, and a 27% power saving is achieved in the RL-Huffman encoder [8].

KEYWORDS: Variable Length Encoding (VLE), Variable Length Decoding (VLD), Joint Photographic Expert Group (JPEG), Image Compression, Low Power Design, Very Large Scale Integration (VLSI).

I. Introduction 

Image data compression refers [4] to a process in which the amount of data used to represent image is 
reduced to meet a bit rate requirement (below or at most equal to the maximum available bit rate) 
while the quality of the reconstructed image satisfies a requirement for a certain application and the 
complexity of computation involved is affordable for the application [18]. The image compression 
can improve the performance of the digital systems by reducing the cost of image [22] storage and the 
transmission time of an image on a bandwidth limited channel, without significant reduction in the 
image quality [15]. 

This paper investigates the implementation of Low Power VLSI architecture for image compression 
[8] which uses variable length coding method for image data compression, which could be then used 
for practical image coding systems to compress JPEG signals [1]. 




Variable length coding [2] that maps input source data on to code words with variable length is an 
efficient method to minimize average code length. Compression is achieved by assigning short code 
words to input symbols of high probability and long code words to those of low probability [14]. 
Variable length coding can be successfully used to relax the bit-rate requirements [21] and storage 
spaces for many multimedia compression systems. For example, a variable length coder (VLC) 
employed in MPEG-2 along with the discrete cosine transform (DCT) results [16] in very good 
compression efficiency. 

Since early studies have focused only on high throughput variable length coders [6], low-power 
variable length coders have not received much attention. This trend is rapidly changing as the target of 
multimedia systems is moving toward portable applications like laptops, mobiles and iPods etc [15]. 
These systems highly demand low-power operations, and, thus require low power functional units. 
The remainder of the paper is organized as follows: Section 2 explains the practical needs, principles and types of image compression, Section 3 explains the variable length encoding process, Section 4 describes the variable length decoding process, Section 5 presents the interpretation of results, and Section 6 concludes the paper.

II. Image Compression 

A. Practical Needs for Image Compression 

The need for image compression becomes apparent when the number of bits per image resulting from typical sampling rates and quantization methods is computed [4]. For example, the amount of storage required for typical images is: (i) a low resolution, TV quality, colour video image of 512x512 pixels/colour, 8 bits/pixel and 3 colours requires approximately 6x10^6 bits; (ii) a 24x36 mm negative photograph scanned at a 12 micrometre pitch (3000x2000 pixels/colour, 8 bits/pixel, 3 colours) contains nearly 144x10^6 bits; (iii) a 14x17 inch radiograph scanned at a 70 micrometre pitch (5000x6000 pixels, 12 bits/pixel) contains nearly 360x10^6 bits. Thus storage of even a few images could cause a problem. As another example of the need for image compression [15], consider the transmission of a low resolution 512x512x8 bits/pixel x 3-colour video image over telephone lines. Using a 9600 baud (bits/sec) modem, the transmission would take approximately 11 minutes for just a single image [22], which is unacceptable for most applications.
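These figures follow directly from multiplying out the sampling parameters; for example, assuming the 9600 bit/s line rate quoted above:

    512 \times 512 \times 8 \times 3 \approx 6.3\times 10^{6}\ \text{bits}, \qquad
    t \approx \frac{6.3\times 10^{6}\ \text{bits}}{9600\ \text{bit/s}} \approx 655\ \text{s} \approx 11\ \text{min}.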

B. Principles behind Compression 

Number of bits required to represent the information in an image can be minimized by removing the 
redundancy present in it. There are three types of redundancies. 

1. Spatial redundancy, which is due to the correlation or dependency between neighboring 
pixel values. 

2. Spectral redundancy, which is due to the correlation between different color planes or spectral 
bands. 

3. Temporal redundancy, which is present because of correlation between different frames in 
images. 

Image compression research [18] aims to reduce the number of bits required to represent an image by 
removing the spatial and spectral redundancies as much as possible [22]. 

C. Types of Image Compression 

Compression can be divided into two categories [1], Lossless and Lossy compression. In lossless 
compression schemes, the reconstructed image after compression is numerically identical to the 
original image. However lossless compression can only achieve a modest amount of compression. 
Lossless compression is preferred for archival purposes like medical imaging [22], technical 
drawings, clip art or comics. This is because lossy compression methods, especially when used at low 
bit rates [21], introduce compression artifacts. An image reconstructed following lossy compression 
contains degradation relative to the original. Often this is because the compression scheme 
completely discards redundant information. However the lossy schemes are capable of achieving 
much higher compression. Lossy methods are especially suitable for natural images such as 
photos in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to 




achieve a substantial reduction in bit rate. Lossy compression that produces imperceptible differences can be called visually lossless.

D. Image Compression using Discrete Cosine Transform 



[Figure 1 block diagram: 8x8 input image -> DCT -> Quantizer (using quantization tables) -> quantized DCT coefficients.]

Figure 1. Block diagram of the DCT process

In the DCT process the input image is divided into non-overlapping blocks of 8x8 pixels [4] and
input to the baseline encoder. The pixel values are converted from unsigned integer format to signed 
integer format, and DCT computation is performed on each block. DCT transforms the pixel 
data into a block of spatial frequencies that are called the DCT coefficients. Since the pixels in the 
8x8 neighbourhood typically have small variations in gray levels, the outputs of the DCT will result 
in most of the block energy being stored in the lower spatial frequencies [15]. On the other hand, the 
higher frequencies will have values equal or close to zero and hence, can be ignored during 
encoding [9] without significantly affecting the image quality. The selection of frequencies based on 
which frequencies are most important [15] and which ones are less important can affect the quality of 
the final image. 

The selection of quantization values is critical since it affects both the compression efficiency [4], and 
the reconstructed image quality. High frequency coefficients have small magnitude for typical video 
data, which usually does not change dramatically between neighbouring pixels. Additionally, the 
human eye is not as sensitive to high frequencies as to low frequencies [5]. It is difficult for the 
human eye to discern changes in intensity or colors that occur between successive pixels. The human 
eye tends to blur these rapid changes into an average hue and intensity. However, gradual changes 
over the 8 pixels in a block are much more discernible than rapid changes. When the DCT is used for 
compression purposes, the quantizer unit attempts to force the insignificant high frequency 
coefficients to zero while retaining the important low frequency coefficients. The 2-D DCT 
transforms an 8x8 block of spatial data samples into an 8x8 block of spatial frequency components 
[15]. These DCT coefficients are then used as input to the Variable Length Encoder which will further 
compress the image data [9]. The compressed image data can be decompressed using Variable Length 
Decoder [10] and then IDCT transforms spatial frequency components back into the spatial domain 
[15] to successfully reconstruct the image. 

III. Variable Length Encoding

Variable Length Encoding (VLE) is the final lossless stage of the video compression unit. VLE is done 
to further compress the quantized image [13]. VLE consists of the following three steps: 

• Zigzag scanning 

• Run Length Encoding (RLE), and 

• Huffman coding. 






[Figure 2 block diagram: quantized DCT input -> zigzag scanning -> run-length encoding -> Huffman encoding -> compressed output.]

Figure 2. Variable Length Encoder



A. Zigzag Scanning 



[Figure 3 block diagram: zigzag scanner with inputs zigzag_en_in, clk, rst and outputs zigzag_en_out, rdy_out.]

Figure 3. Block diagram of the Zigzag Scanner

The quantized DCT coefficients obtained after applying the Discrete Cosine Transform to an 8x8 block of pixels are fed as input to the Variable Length Encoder (VLE). These quantized DCT coefficients have non-zero low frequency components in the top left corner of the 8x8 block and higher frequency components in the remaining places [17]. The higher frequency components approximate to zero after quantization. The low frequency DCT coefficients are more important than the higher frequency ones; even if some of the higher frequency coefficients are ignored, the image can still be successfully reconstructed from the low frequency coefficients alone. The Zigzag Scanner block exploits this property [7]. In zigzag scanning, the quantized DCT coefficients are read out in a zigzag order, as shown in figure 4. By arranging the coefficients in this manner, RLE and Huffman coding can further compress the data, since the scan groups the high-frequency components, which are usually zeros, together.




[Figure 4 sketch: zigzag scan order over the 8x8 block, moving from low to high horizontal and vertical frequencies.]

Figure 4. Zigzag Scan Order

Since zigzag scanning requires all 64 DCT coefficients to be available before scanning, the serially incoming DCT coefficients must first be stored in a temporary memory, and this has to be repeated for every 8x8 block of pixels. With a single memory, at any given time either scanning is performed or the incoming DCT coefficients are stored, which slows down the scanning process. To overcome this problem and make scanning faster, we propose a new architecture for the zigzag scanner in which two RAM memories are used [17]: one RAM stores the serially incoming DCT coefficients while scanning is performed from the other.




So except for the first 64 clock cycles, i.e. until the 64 DCT coefficients of the first 8x8 block become available, the zigzag scanning and the storing of the serially incoming DCT coefficients are performed simultaneously [11]. By using two RAM memories, one DCT coefficient is scanned in every clock cycle after the first 64 clock cycles.
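A minimal behavioural sketch of this ping-pong buffering idea is given below (illustrative only, not the authors' RTL; port widths, signal names and the contents of the zigzag-order ROM are assumptions):

    // Ping-pong zigzag scanner sketch: one RAM is written with incoming
    // coefficients while the other, already full, is read in zigzag order.
    module zigzag_scan_sketch (
        input              clk,
        input              rst,
        input       [11:0] coeff_in,     // serially incoming quantized DCT coefficient
        output reg  [11:0] coeff_out,    // coefficient in zigzag order
        output reg         rdy_out
    );
        reg [11:0] ram0 [0:63];
        reg [11:0] ram1 [0:63];
        reg [5:0]  order [0:63];         // zigzag address ROM, filled elsewhere (e.g. $readmemh)
        reg [5:0]  wr_ptr, rd_ptr;
        reg        wr_sel;               // which RAM is currently being written
        reg        full;                 // set once the first 8x8 block has been stored

        always @(posedge clk) begin
            if (rst) begin
                wr_ptr <= 0; rd_ptr <= 0; wr_sel <= 0; full <= 0; rdy_out <= 0;
            end else begin
                // write path: store the incoming coefficient in the active write RAM
                if (wr_sel) ram1[wr_ptr] <= coeff_in; else ram0[wr_ptr] <= coeff_in;
                wr_ptr <= wr_ptr + 1;                 // 6-bit pointer wraps every 64 coefficients
                if (wr_ptr == 6'd63) begin
                    wr_sel <= ~wr_sel;                // switch RAMs at each block boundary
                    full   <= 1'b1;
                end
                // read path: once one RAM is full, scan the RAM not being written
                if (full) begin
                    coeff_out <= wr_sel ? ram0[order[rd_ptr]] : ram1[order[rd_ptr]];
                    rd_ptr    <= rd_ptr + 1;          // stays block-aligned with wr_ptr
                    rdy_out   <= 1'b1;
                end
            end
        end
    endmodule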



[Figure 5 schematic: zigzag scanner built from two zigzag registers (RAMs), 2:1 multiplexers, a memory holding the scanning order, switch-memory/memory-ready control logic and a 64-count counter, with inputs ZIGZAG_IN, CLOCK, RESET and outputs ZIGZAG_OUT, READY_OUT.]

Figure 5. Internal architecture of the Zigzag Scanner



B. Run Length Encoding (RLE) 



[Figure 6 block diagram: run-length encoder with inputs rle_in, rdy_in, clk, rst and outputs rle_out, rdy_out.]

Figure 6. Block diagram of the Run-Length Encoder

The quantized coefficients are read out in a zigzag order from the DC component to the highest frequency component, and RLE is used to code the resulting string of data from the zigzag scanner. The conventional run-length encoder codes the coefficients in the quantized block into a run length (number of occurrences) and a level or amplitude. For example, four successive coefficients of value 10, i.e. {10,10,10,10}, have level 10 and a run of four, so with RLE [8] only {4,10} is transmitted, reducing the amount of data. Typically, RLE [10] encodes a run of symbols into two bytes, a count and a symbol. Without RLE, 64 coefficients are needed to define an 8x8 block. To further compress the data, since many of the quantized coefficients in the 8x8 block are zero, coding can be terminated when there are no more non-zero coefficients in the zigzag sequence [9]; the "end-of-block" code terminates the coding.






















"0- 












jfr 


, Comparator 

(=07) 




INR Zeroes 

Counter 
CLR 






rle_ini=3: 

RDY IX » 


'to* 












* 




CJ V 






TH 


Load 
Count 






L J_-1Sl 




1 








Run 


Level 




h p nv ("IT TT 


DCT 






► K.U I UU1 


IS.Z} 1 


< 































Figure 7. Internal architecture of run-length encoder. 

Normally, in a typical quantized DCT matrix the number of zeros is much larger [5] than the number of repeated non-zero coefficients [4]. The proposed run-length encoder architecture exploits this property: the number of intermediate zeros between non-zero DCT coefficients is counted [10], unlike the conventional run-length encoder where the number of occurrences of repeated symbols is counted.
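A compact behavioural sketch of this zero-run counting is shown below (illustrative, not the authors' RTL; port widths are assumptions and end-of-block detection is omitted):

    // Proposed RLE idea: count zeros between successive non-zero coefficients
    // and emit a {run, value} pair for each non-zero coefficient.
    module rle_zero_run_sketch (
        input             clk,
        input             rst,
        input             rdy_in,      // a new zigzag-ordered coefficient is valid
        input      [11:0] rle_in,      // zigzag-ordered quantized DCT coefficient
        output reg [5:0]  run_out,     // number of zeros preceding value_out
        output reg [11:0] value_out,   // the non-zero coefficient itself
        output reg        rdy_out      // {run_out, value_out} is valid this cycle
    );
        reg [5:0] zero_cnt;

        always @(posedge clk) begin
            if (rst) begin
                zero_cnt <= 0; rdy_out <= 0; run_out <= 0; value_out <= 0;
            end else begin
                rdy_out <= 1'b0;
                if (rdy_in) begin
                    if (rle_in == 0) begin
                        zero_cnt <= zero_cnt + 1;   // just count the intermediate zeros
                    end else begin
                        run_out   <= zero_cnt;      // zeros seen since the last non-zero
                        value_out <= rle_in;
                        rdy_out   <= 1'b1;
                        zero_cnt  <= 0;
                    end
                end
            end
        end
    endmodule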



Conventional RLE:  31   1,0   1,1   1,0   1,2   1,1   5,0   1,1   EOB
Proposed RLE:      31   1,1   1,2   0,1   5,2   EOB

Figure 8. Comparison between conventional and proposed RLE
C. Huffman Encoding



[Figure 9 block diagram: Huffman encoder with inputs huffman_en_in, rdy_in, clk, rst and outputs huffman_en_out, rdy_out.]

Figure 9. Block diagram of the Huffman Encoder




Huffman coding is used to code values statistically according to their probability of occurrence [11]: short code words are assigned to highly probable values and long code words to less probable values. The procedure for Huffman coding involves the pairing of run/value combinations. The input run/value combinations are written out in the order of decreasing probability: the combination with the highest probability is written at the top and the least probable is written last. The two least probabilities are then paired and added, and a new probability list is formed with one entry for the previously added pair. The two least probable entries in the new list are then paired, and this process continues until the list consists of only one probability value. The values "0" and "1" are arbitrarily assigned to the two elements of each pair in each of the lists.



[Figure 10 sketch: Huffman encoder taking the {run, value} pair and producing the output bit stream via the code lookup table.]

Figure 10. Internal architecture of the Huffman encoder

In the proposed architecture, Huffman encoding is done using a lookup table [3]. The lookup table is formed by arranging the different run/value combinations in the order of their probabilities of occurrence together with the corresponding variable length Huffman codes [6]. When the output of the run-length encoder, in the form of a run/value combination, is fed to the Huffman encoder, the received run/value combination is searched in the lookup table; when it is found, its corresponding variable length Huffman code [14] is sent to the output. This approach not only simplifies the design but also results in less power consumption [6]: with a lookup table, only the part of the encoder corresponding to the current run/value combination is active, and the other parts of the encoder consume no dynamic power. Turning off the inactive components of the circuit in the Huffman encoder thus results in less power consumption.
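The idea can be sketched as a simple combinational lookup (the code words below are placeholders, not the actual table used by the authors):

    // Lookup-table Huffman encoder sketch: the {run, value} pair indexes a
    // case statement that returns a variable-length code and its bit length.
    module huffman_lut_sketch (
        input       [5:0]  run,
        input       [11:0] value,
        output reg  [15:0] code,     // code word, right-aligned
        output reg  [4:0]  code_len  // number of valid bits in 'code'
    );
        always @(*) begin
            case ({run, value})
                {6'd0, 12'd1}:  begin code = 16'b00;     code_len = 5'd2; end
                {6'd1, 12'd1}:  begin code = 16'b010;    code_len = 5'd3; end
                {6'd1, 12'd2}:  begin code = 16'b0110;   code_len = 5'd4; end
                {6'd5, 12'd2}:  begin code = 16'b01110;  code_len = 5'd5; end
                default:        begin code = 16'b11111;  code_len = 5'd5; end // e.g. EOB/escape
            endcase
        end
    endmodule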

IV. Variable Length Decoding 



The variable length decoder is the first block on the decoder side. It decodes the variable length 
encoder output to yield the quantized DCT coefficients [10]. The variable length decoder consists of 
three major blocks, namely, 

1. Huffman Decoding. 

2. Run- length Decoding. 

3. Zigzag Inverse Scanning. 



[Figure 11 block diagram: compressed data -> Huffman decoding -> run-length decoding -> zigzag inverse scanning -> quantized DCT coefficients.]

Figure 11. Variable Length Decoder




A. Huffman Decoding 



[Figure 12 block diagram: Huffman decoder with inputs huffman_de_in, rdy_in, clk, rst and outputs huffman_de_out, rdy_out.]

Figure 12. Block diagram of the Huffman decoder

The Huffman decoder forms the front end of the variable length decoder. The internal architecture of the Huffman decoder is the same as that of the Huffman encoder, and the same VLC Huffman coding table used in the encoder is also used in the decoder [17]. The incoming encoded data is taken and a search is done for the corresponding run/value combination in the VLC table. Once the corresponding run/value combination [14] is found, it is sent as output and the Huffman decoder starts decoding the next input.

Using the same VLC Huffman coding table in both the Huffman encoder and the Huffman decoder reduces the complexity of the Huffman decoder. It not only reduces the complexity [7] but also reduces the dynamic power in the Huffman decoder, since only part of the circuit is active at a time.
B. FIFO 



[Figure 13 block diagram: FIFO with data input Data_In, write/read strobes FInN/FOutN, data output F_Data and status flag F_EmptyN.]

Figure 13. Block diagram of FIFO

The First In First Out (FIFO) buffer also forms part of the decoder; it is placed between the Huffman decoder and the run-length decoder to match their operating speeds [11]. The Huffman decoder sends a decoded output to the run-length decoder in the form of a run/value combination, which the run-length decoder takes as input and starts decoding [12]. Since the run in the run/value combination represents the number of zeros between consecutive non-zero coefficients, a zero '0' is sent to the output for the next 'run' clock cycles, and until then the run-length decoder cannot accept another run/value combination. The Huffman decoder, however, decodes one input to one run/value combination in every clock cycle, so it cannot be connected directly to the run-length decoder; otherwise the run-length decoder could not decode correctly. To match the speed between the Huffman decoder and the run-length decoder, the FIFO is used. The output of the Huffman decoder is stored in the FIFO, and the run-length decoder takes one decoded Huffman output from the FIFO when it has finished decoding its present input. After the run-length decoder finishes decoding the present input, it sends a signal to the FIFO to feed it a new input; this signal drives the FOutN pin, which is the read-out pin of the FIFO. The FInN pin is used to write to the FIFO; the Huffman decoder generates this signal whenever it has a new input to write to the FIFO. Thus the FIFO acts as a synchronizing device between the Huffman decoder [9] and the run-length decoder.
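A minimal synchronous FIFO sketch along these lines, reusing the signal names from the text (depth and data width are assumptions), is:

    // Synchronising FIFO between the Huffman decoder (writer) and the
    // run-length decoder (reader). Depth 16 and WIDTH are illustrative.
    module sync_fifo_sketch #(parameter WIDTH = 18) (
        input              clk,
        input              rst,
        input              FInN,       // active-low write strobe from the Huffman decoder
        input  [WIDTH-1:0] Data_In,
        input              FOutN,      // active-low read strobe from the run-length decoder
        output [WIDTH-1:0] F_Data,
        output             F_EmptyN    // high when at least one entry is stored
    );
        reg [WIDTH-1:0] mem [0:15];
        reg [4:0] wr_ptr, rd_ptr;      // extra MSB distinguishes full from empty

        wire empty = (wr_ptr == rd_ptr);
        wire full  = (wr_ptr[3:0] == rd_ptr[3:0]) && (wr_ptr[4] != rd_ptr[4]);

        assign F_EmptyN = ~empty;
        assign F_Data   = mem[rd_ptr[3:0]];   // head entry held stable until FOutN advances it

        always @(posedge clk) begin
            if (rst) begin
                wr_ptr <= 5'd0;
                rd_ptr <= 5'd0;
            end else begin
                if (!FInN && !full) begin           // write when strobed and not full
                    mem[wr_ptr[3:0]] <= Data_In;
                    wr_ptr <= wr_ptr + 5'd1;
                end
                if (!FOutN && !empty)               // advance read pointer when strobed
                    rd_ptr <= rd_ptr + 5'd1;
            end
        end
    endmodule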




C. Run-Length Decoder 



[Figure 14 block diagram: run-length decoder with inputs rld_in, rdy_in, clk, rst and outputs rld_out, rdy_out.]

Figure 14. Block diagram of the Run-Length Decoder

The run-length decoder forms the middle part of the variable length decoder [10]. It takes the decoded output from the Huffman decoder through the FIFO. When the Huffman decoder decodes one input and stores the decoded output in the FIFO, the FIFO becomes non-empty (at least one element is stored) and asserts the signal F_EmptyN, which is used as the rdy_in signal of the run-length decoder. So when the Huffman decoder decodes one input and stores it in the FIFO [7], a ready signal is generated for the run-length decoder to initiate the decoding process.

The run-length decoder takes the input in the form of a run/value combination and separates the run and value parts. The run represents the number of zeros to output before sending out the non-zero level 'value' of the run/value combination. For example, if {5,2} is input to the run-length decoder, it sends 5 zeros before transmitting the non-zero level '2' to the output. Once the run-length decoder sends out a non-zero level, it has finished decoding the present run/value combination and is ready for the next one. It therefore generates the rdy_out signal to the FIFO to indicate that it has finished decoding the present input and is ready for the next run/value combination. This rdy_out is connected to the FOutN pin, the read-out pin of the FIFO [16]. Upon receiving this signal the FIFO sends out a new run/value combination to the run-length decoder, initiating the run-length decoding process for the new combination.
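A behavioural sketch of this behaviour is given below (illustrative only; widths are assumptions, and the {run, value} input is assumed to be held stable by the FIFO until rdy_out is asserted):

    // Run-length decoder sketch: for each {run, value} pair it outputs
    // 'run' zeros followed by the non-zero value, then requests the next pair.
    module rld_sketch (
        input             clk,
        input             rst,
        input             rdy_in,      // F_EmptyN from the FIFO: a pair is available
        input      [5:0]  run_in,
        input      [11:0] value_in,
        output reg [11:0] rld_out,
        output reg        out_valid,
        output reg        rdy_out      // pulse: done with current pair, read next (FOutN)
    );
        reg [5:0] zeros_left;
        reg       busy;

        always @(posedge clk) begin
            if (rst) begin
                busy <= 0; zeros_left <= 0; out_valid <= 0; rdy_out <= 0; rld_out <= 0;
            end else begin
                out_valid <= 1'b0;
                rdy_out   <= 1'b0;
                if (!busy && rdy_in) begin            // latch a new {run, value} pair
                    zeros_left <= run_in;
                    busy       <= 1'b1;
                end else if (busy) begin
                    out_valid <= 1'b1;
                    if (zeros_left != 0) begin
                        rld_out    <= 0;              // emit one zero per cycle
                        zeros_left <= zeros_left - 1;
                    end else begin
                        rld_out <= value_in;          // emit the non-zero value last
                        busy    <= 0;
                        rdy_out <= 1'b1;              // ask the FIFO for the next pair
                    end
                end
            end
        end
    endmodule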



D. Zigzag Inverse Scanner 



[Figure 15 block diagram: zigzag inverse scanner with inputs zigzag_de_in, rdy_in, clk, rst and output zigzag_de_out.]

Figure 15. Block diagram of the Zigzag Inverse Scanner

The zigzag inverse scanner forms the last stage of the variable length decoder [17]. Its working and architecture are similar to those of the zigzag scanner, except that the scanning order is different. The zigzag inverse scanner receives its input from the run-length decoder and stores it in one of the two RAMs [13] until all 64 coefficients have been received. It then starts inverse scanning to recover the original order of the DCT coefficients, while the incoming DCT coefficients of the next block are stored in the other RAM [2]. Once scanning from one RAM is finished, scanning continues from the other RAM while the incoming coefficients are stored in the first RAM [5]. This process is repeated until all the DCT coefficients have been scanned. There is a delay of 64 clock cycles before the output appears; after that, one element is scanned in every clock cycle.




V. Interpretation of Results 
1. Simulation Results 

A. Zigzag Scanning 

The simulation of zigzag scanning is done using the following test sequence. 



Index:  0  1  2  3  4  5  6  7  8  9  10  11  ...  62  63
Value: 31  0  1  0  0  0  0  0  1  2   0   0  ....



The figure 16 shows the simulation waveform of the zigzag encoding block for the above test 
sequence. 

B. Run Length Encoding 

The simulation of the run-length encoding block is done using the output sequence obtained in zigzag 
scanning process, which appears at the input of the run-length encoder as below. 



[Flattened data omitted: the zigzag-scanned coefficient sequence (DC value 31 followed by interleaved zeros and non-zero values) as it appears at the input of the run-length encoder.]



The figure 17 shows the simulation waveform of the run-length encoding block for the above test 

input sequence. 

C. Huffman Encoding 



The output of the run-length encoder is used as the test sequence to the Huffman encoder; it appears as below. Figure 18 shows the simulation waveform of the Huffman encoding block for this test input sequence.

31   {1,1}   {1,2}   {0,1}   {5,2}   EOB




Figure 16. Simulation waveform of Zigzag Scanning




Figure 17. Simulation waveform of Run-length Encoder



























Figure 18. Simulation waveform of Huffman Encoder
D. Huffman Decoding 

The compressed data from the variable length encoder is fed to the Huffman decoding block. The 
output obtained from the Huffman encoding block is used as a test bench to the Huffman decoding 
block. The fig 19 shows the simulation waveform for the Huffman decoding block. 

E. Run Length Decoding 

After Huffman decoder decodes the compressed bitstream, the decoded input is fed to run-length 
decoder block. The fig 20 shows the simulation waveform of run-length decoder. 

F. Zigzag Inverse Scanner 

The output of the run-length decoder is given as input to the zigzag inverse scanner, which will output 
the quantized DCT coefficients. The fig 21 shows the simulation waveform of Zigzag Inverse 
Scanner. 

2. Synthesis Results 

Once all the behavioural verifications (simulations) are done, synthesis of the blocks is performed. The Cadence RTL Compiler is used for synthesis and the design is implemented on 90 nm standard cells. The layout is generated using a VLSI back-end EDA tool. After synthesis, area and power analyses are performed for each block used in the design. The layout of the Run Length Encoder is shown in figure 22.












Figure 19. Simulation waveform of Huffman Decoder





































Figure 20. Simulation waveform of Run-length Decoder




Figure 21. Simulation waveform of Zigzag Inverse Scanner






Figure 22. Layout of the Variable Length Encoder

The power and area characteristics of each of the blocks in the design are tabulated in Table 1 and are also represented graphically in figure 23.




Table 1: Power & Area Characteristics

Design                     Power (in mW)    Area (in mm^2)
Zigzag Scanner             0.7319           0.3349
Run Length Encoder         0.0208           0.0103
Huffman Encoder            0.0451           0.0171
Zigzag Inverse Scanner     0.7285           0.3359
Run Length Decoder         0.1110           0.0071
Huffman Decoder            0.0451           0.0171
FIFO                       0.2744           0.1287




[Figure 23 bar chart: power (mW) and area (mm^2) of the Zigzag Scanner, Run Length Encoder, Huffman Encoder, Zigzag Inverse Scanner, Run Length Decoder, Huffman Decoder and FIFO.]



Figure 23. Representation of Power & Area Characteristics 

3. Power Comparisons 

The power comparison of the proposed Architecture is as shown below. 
3.1 Power comparison of Huffman Decoder 

Table 2: Power Comparison for Huffman Decoders (table size 100)

Huffman decoder type                                                Power (in uW)
Power Analysis of the Huffman Decoding Tree, by Jason McNeely [6]        95
Proposed Architecture                                                    45



3.2 Power Comparison of RL-Huffman Encoder Combination

Table 3: Power Comparison for RL-Huffman Encoders

RL-Huffman encoding type                                                               Power (in uW)
RL-Huffman Encoding for Test Compression and Power Reduction in Scan Applications [8]       90
Proposed Architecture                                                                        65.9









Figure 24. Power Comparison for Huffman decoders 






Figure 25. Power Comparison for RL-Huffman Encoders

3.3 Percentage of Power Saving 

The percentage power savings of the proposed design are calculated from the figures in Tables 2 and 3, as shown below, and tabulated in Table 4.
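The savings follow directly from the measured powers:

    \text{saving} = \frac{P_{\text{ref}} - P_{\text{proposed}}}{P_{\text{ref}}}\times 100\%, \qquad
    \frac{95-45}{95}\approx 52.6\%, \qquad \frac{90-65.9}{90}\approx 26.8\%.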

Table 4: Percentage of Power Savings

Encoding type         Comparison with                                                                        Percentage of power saving
Huffman decoder       Power Analysis of the Huffman Decoding Tree [6]                                        52.63%
RL-Huffman encoder    RL-Huffman Encoding for Test Compression and Power Reduction in Scan Applications [8]  26.77%



VI. Conclusion 

In this paper we described a low power architecture for variable length encoding and variable length decoding for image processing applications. The design and modelling of all the blocks in the design is done using synthesizable Verilog HDL with a low power approach. The proposed architecture is synthesized using RTL Compiler and mapped onto 90 nm standard cells. The simulation of all the blocks in the design was done using ModelSim, and a detailed power and area analysis was done using RTL Compiler from CADENCE. The power consumptions of the variable length encoder and decoder are limited to 0.798 mW and 0.884 mW with minimum area. A 53% power saving is achieved in the dynamic power of Huffman decoding [6] by including the lookup table approach, and a 27% power saving is achieved in the RL-Huffman encoder [8].




References 

[1] G. Wallace, "The JPEG still-image compression standard", Communications of the ACM, vol. 34, pp. 30-44, Apr. 1991.
[2] Jason McNeely and Magdi Bayoumi, "Low Power Look-Up Tables for Huffman Decoding", The Centre for Advanced Computer Studies, University of Louisiana, Lafayette.
[3] Reza Hashemian, "Direct Huffman Coding and Decoding using the Table of Code-Lengths", Northern Illinois University, DeKalb, Illinois 60115, USA.
[4] Da An, Xin Tong, Bingqiang Zhu and Yun He, "A Novel Fast DCT Coefficient Scan Architecture", State Key Laboratory on Microwave and Digital Communications, Tsinghua National Laboratory for Information Science and Technology, Department of Electronic Engineering, Tsinghua University, Beijing 100084, China.
[5] Vijaya Prakash A.M. and K.S. Gurumurthy, "A Novel VLSI Architecture for Image Compression using DCT and Quantization", IJCSNS, vol. 10, no. 9, September 2010.
[6] Jason McNeely, Yasser Ismail, Magdy A. Bayoumi and Peiyi Zhao, "Power Analysis of the Huffman Decoding Tree", The Center for Advanced Computer Studies, University of Louisiana at Lafayette, IEEE International Conference on Image Processing (ICIP), California, USA, October 2008.
[7] D. A. Huffman, "A method for the construction of minimum-redundancy codes", Proc. IRE, vol. 40, pp. 1098-1101, Sept. 1952.
[8] Mehrdad Nourani and Mohammed H. Tehranipour, "RL-Huffman Encoding for Test Compression and Power Reduction in Scan Applications", Center for Integrated Circuits and Systems, The University of Texas at Dallas, ACM Transactions on Design Automation of Electronic Systems, vol. 10, no. 1, Jan. 2005.
[9] En-hui Yang and Longji Wang, "Joint Optimization of Run-Length Coding, Huffman Coding, and Quantization Table with Complete Baseline JPEG Decoder Compatibility", IEEE.
[10] Sung-Won Lee and In-Cheol Park, "A Low-Power Variable Length Decoder for MPEG-2 Based on Successive Decoding of Short Codewords", IEEE.
[11] "A Fast Parallel Huffman Decoder for FPGA Implementation", ACTA TECHNICA NAPOCENSIS, Electronics and Telecommunications, vol. 49, no. 1, 2008.
[12] Dr. Sri Krishnan, "JPEG Architecture and Implementation Issues", Department of Electrical and Computer Engineering, Ryerson University.
[13] Bao Ergude, Li Weisheng, Fan Dongrui and Ma Xiaoyu, "A Study and Implementation of the Huffman Algorithm Based on Condensed Huffman Table", School of Software, Beijing Jiaotong University; Key Laboratory of Computer System and Architecture, Institute of Computer Technology, Chinese Academy of Sciences, 2008.
[14] Sung-Wen Wang, Shang-Chih Chuang, Chih-Chieh Hsiao, Yi-Shin Tung and Ja-Ling Wu, "An Efficient Memory Construction Scheme for an Arbitrary Side Growing Huffman Table", CMLab, Dept. CSIE NTU, Setabox Corporation, Graduate Institute of Networking and Multimedia, NTU, November 2008.
[15] Vijaya Prakash A.M. and K.S. Gurumurthy, "A Novel VLSI Architecture for Image Compression Model using Low Power DCT", WASET, vol. 72, December 2010, Singapore.
[16] Jia-Yu Lin, Ying Liu and Ke-Chu Yi, "Balance of 0, 1 Bits for Huffman and Reversible Variable-Length Coding", March 2004.
[17] Pablo Montero, Javier Taibo (Videalab, University of A Coruna, Spain), Victor Gulias (MADS Group, University of A Coruna, Spain) and Samuel Rivas (LambdaStream S.L., A Coruna, Spain), "Parallel Zigzag Scanning and Huffman Coding for a GPU-Based MPEG-2 Encoder".
[18] Kamrul Hasan Talukder and Koichi Harada, "Discrete Wavelet Transform for Image Compression and A Model of Parallel Image Compression Scheme for Formal Verification", Proceedings of the World Congress on Engineering 2007, Vol. I.
[19] Yongli Zhu and Zhengya Xu, "Adaptive Context Based Coding for Lossless Color Image Compression", School of Computer Science and Technology, North China Electric Power University, IMACS Conference on Computational Engineering in Systems Applications (CESA), 2006, Beijing, China.
[20] Sunil Bhushan and Shipra Sharma, "An Efficient and Selective Image Compression Scheme using Huffman and Adaptive Interpolation", 24th International Conference Image and Vision Computing New Zealand (IVCNZ 2009).
[21] Wenna Li and Zhaohua Cui, "Low Bit Rate Image Coding Based on Wavelet Transform and Color Correlative Coding", 2010 International Conference on Computer Design and Applications (ICCDA 2010).
[22] "Medical Image Coding based on Wavelet Transform and Distributed Arithmetic Coding", by Li Wenna,




Gao Yang, Yi Yufeng and Gao Liqun, IEEE 2011 Chinese Control and Decision Conference (CCDC).
Authors 

Vijaya Prakash A.M. obtained his B.E. from UVCE, Bangalore, of Bangalore University in the year 1992, his M.E. from SDMCET Dharwad of Karnataka University in the year 1997, and is presently pursuing a Ph.D. from Dr. M.G.R. University, Chennai. He has been actively guiding PG and UG student projects in the areas of VLSI design and image processing. He has around 8 technical paper publications in international journals and international conferences. Currently he is working as Associate Professor in the Electronics and Communication Engineering Department, Bangalore Institute of Technology, Bangalore-04. He has presented a paper in Singapore. His research interests are low power VLSI, image processing, retiming and verification of VLSI design, and synthesis and optimization of digital circuits. He is a member of IMAPS and ISTE.




K.S. Gurumurthy obtained his B.E. degree from M.C.E. Hassan of Mysore University in the year 1973, his M.E. degree from the University of Roorkee (now IIT Roorkee) in 1982, and his Ph.D. degree in 1990 from IISc Bangalore. He joined UVCE in 1982 and has since been teaching electronics-related subjects. Presently he is a professor in the DOS in E & CE, UVCE, Bangalore University, Bangalore-1. He is a University gold medal winner from the University of Roorkee and a recipient of the Khosla award for the best technical paper published in 1982. He has successfully guided 4 Ph.D. and 2 M.Sc.(Engg.) (by research) candidates and a number of UG and PG projects. He has around 75 technical paper publications in journals and international conferences, and has presented papers in Japan, France, Malaysia and the USA. His interests are low power VLSI, multi-valued logic circuits and deep submicron devices. He is a member of IEEE and ISTE.






Verification Analysis of AHB-LITE Protocol with 

Coverage 


Richa Sinha , Akhilesh Kumar and Archana Kumari Sinha 
Department of E&C Engineering, NIT Jamshedpur, Jharkhand, India 
Department of Physics, S.G.G.S. College, Patna City, Bihar, India



Abstract 

SoC design faces a gap between production capabilities and time-to-market pressures. The design space grows with improvements in production capabilities, and so does the time needed to design a system that utilizes those capabilities; on the other hand, shorter product life cycles are forcing an aggressive reduction of the time-to-market. Fast simulation capabilities are required for coping with the immense design space that has to be explored, and they are especially needed during the early stages of the design. This need has pushed the development of transaction level models, which are abstract models that execute dramatically faster than synthesizable models. The pressure for fast-executing models extends especially to the frequently used and reused communication libraries. The present paper describes the system level modelling of the Advanced High-performance Bus Lite (AHB-Lite), a subset of AHB which is part of the Advanced Microcontroller Bus Architecture (AMBA). The work covers an AHB-Lite slave model exercised with different test cases and describes their simulation speed; the accuracy built on the rich semantic support of the standard language SystemVerilog, running on the Riviera simulator, is highlighted.

KEYWORDS: AMBA (Advanced Microcontroller Bus Architecture), AHB-Lite (Advanced High performance Bus-Lite), SystemVerilog, SoC (System on Chip), Verification Intellectual Property (VIP).

I. Introduction 

The bus protocol used by the CPU is an important aspect of co-verification since it is the main communication path between the CPU, memory, and other custom hardware. The design of embedded systems in general, and of a SoC in particular, is done under functional and environmental constraints. Since the designed system will run in a well-specified operating environment, the strict functional requirements can be concretely defined; the environmental restrictions, on the other hand, are more diverse, e.g. minimizing cost, footprint, or power consumption. Due to the flexibility of SoC design, ARM processors use different bus protocols depending on when the core was designed. Achieving the set goals involves analyzing a multi-dimensional design space whose degrees of freedom stem from the processing element types and characteristics, their allocation, the mapping of functional elements to the processing elements, their interconnection with buses and their scheduling. The enormous complexity of these protocols results from tackling high-performance requirements: protocol control can be distributed, and there may be non-atomicity or speculation.

For AHB-Lite systems based around the Cortex-M™ processors, ARM delivers the DMA-230 "micro" DMA controller [13]. ARM delivers DMA controllers both for high-end, high-performance AXI systems based on the Cortex-A™ and Cortex-R™ families and for cost-efficient AHB systems built around Cortex-M™ and ARM9 processors.
The CoreLink Interconnect family includes the following products for AMBA protocols:

• Network Interconnect (NIC-301) for AMBA 3 systems, including support for AXI, AHB and APB

• Advanced Quality of Service (QoS-301) option for NIC-301




The third generation of AMBA is targeted at high performance, high clock frequency system designs and includes features which make it very suitable for high speed sub-micrometre interconnect. In the present paper some discussion is given of the AMBA family and a short introduction to the SystemVerilog language used for the VIP, and the AHB-Lite protocol is briefly described. Further, verification intellectual property (VIP) of a slave of the AHB-Lite protocol with different test cases is shown.

II. AMBA Protocols 

Figure 1 shows the performance of the different protocols and the time of their introduction [9].






[Figure 1 timeline: successive AMBA protocol generations introduced around 1995, 1999 and 2003.]

Figure 1. Protocols of AMBA [9]

• APB (Advanced Peripheral Bus) is mainly used for ancillary or general purpose register based peripherals such as timers, interrupt controllers, UARTs, I/O ports, etc. It is connected to the system bus via a bridge, which helps reduce system power consumption. It is also easy to interface to, with little logic involved and few corner cases to validate.

• AHB (Advanced High-performance Bus) is for high performance, high clock frequency system modules and is suitable for medium complexity and performance connectivity solutions. It supports multiple masters.

• AHB-Lite is the subset of the full AHB specification intended for use where only a single bus master is used; it provides high-bandwidth operation.

III. SYSTEMVERILOG 

SystemVerilog is a hardware description and verification language based on Verilog. Although it has some features to assist with design, the thrust of the language is the verification of electronic designs. The bulk of the verification functionality is based on the OpenVera language donated by Synopsys [12]. SystemVerilog has become IEEE standard P1800-2005. SystemVerilog is an extension of Verilog-2001; all features of that language are available in SystemVerilog, which also draws on Verilog HDL, VHDL, C and C++.

IV. AHB-LITE Protocol System 

AMBA AHB-Lite protocol addresses the requirements of high-performance synthesizable designs. It 
is a bus interface that supports a single bus master and provides high-bandwidth operation. 
AHB-Lite implements the features required for high-performance, high clock frequency systems 
including: [1] 

• burst transfers 

• single-clock edge operation 

• non-tristate implementation 




• Wide data bus configurations, 64, 128, 256, 512, and 1024 bits. 
The most common AHB-Lite slaves are internal memory devices, external memory interfaces, and 
high bandwidth peripherals. Although low-bandwidth peripherals can be included as AHB-Lite 
slaves, for system performance reasons they typically reside on the AMBA Advanced Peripheral Bus 
(APB). Bridging between this higher level of bus and APB is done using an AHB-Lite slave, known as
an APB bridge.




(Block diagram: the master drives HADDR[31:0] and HWDATA[31:0] to Slave 1, Slave 2 and Slave 3; a decoder generates HSEL_1, HSEL_2 and HSEL_3 and the select input of the multiplexor, which returns the chosen slave's HRDATA[31:0] to the master.)



Figure 2. AHB-Lite block diagram 

Figure 2 shows a single-master AHB-Lite system design with one AHB-Lite master and three AHB-
Lite slaves. The bus interconnect logic consists of one address decoder and a slave-to-master
multiplexor. The decoder monitors the address from the master so that the appropriate slave is
selected, and the multiplexor routes the corresponding slave output data back to the master; a small
behavioural sketch of this decode-and-route step is given after the list below. The main
component types of an AHB-Lite system are:

• Master 

• Slave 

• Decoder 

• Multiplexor 
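As a rough illustration of the decode-and-route behaviour described above, the following Python sketch models the address decoder and the read-data multiplexor of the three-slave system in Figure 2. It is only a behavioural sketch: the address-map boundaries and the function names are assumptions for the example and are not taken from the AHB-Lite specification or from the authors' SystemVerilog VIP.

# Behavioural sketch of the Figure 2 interconnect: the decoder turns HADDR
# into a one-hot HSEL, and the multiplexor returns the selected slave's
# HRDATA to the single master. Address ranges below are illustrative only.
ADDRESS_MAP = [
    (1, 0x0000_0000, 0x3FFF_FFFF),   # slave 1
    (2, 0x4000_0000, 0x7FFF_FFFF),   # slave 2
    (3, 0x8000_0000, 0xFFFF_FFFF),   # slave 3
]

def decode(haddr):
    """Return the HSEL index selected by a 32-bit address."""
    for hsel, start, end in ADDRESS_MAP:
        if start <= haddr <= end:
            return hsel
    raise ValueError("address not mapped")

def read_mux(hsel, hrdata_by_slave):
    """Route the selected slave's read data back to the master."""
    return hrdata_by_slave[hsel]

hsel = decode(0x4000_0010)           # an access in slave 2's range
print(hsel, hex(read_mux(hsel, {1: 0x0, 2: 0xCAFE_F00D, 3: 0x0})))

In hardware the multiplexor select is a registered copy of the decoded HSEL, so that the data returned during the data phase corresponds to the slave addressed in the preceding address phase.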



4.1 Operations of AHB-Lite 

The master starts a transfer by driving the address and control signals. These signals provide 
information about the address, direction, width of the transfer, and indicate if the transfer forms part 
of a burst. Transfers can be: [11]

Table 1. Transfer type values 



Cycle Type       Description                                                 HTRANS[1:0]
IDLE             No bus activity                                             00
BUSY             Master inserting wait states                                01
NONSEQUENTIAL    Transfer with an address not related to the previous one    10
SEQUENTIAL       Transfer with an address related to the previous one        11



The write data bus moves data from the master to a slave, and the read data bus moves data from a 
slave to the master. Every transfer consists of: [2] 

• Address phase: one address and control cycle

• Data phase: one or more cycles for the data.




A slave cannot request that the address phase be extended, and therefore all slaves must be capable of
sampling the address during this time. However, a slave can request that the master extend the data
phase by using HREADY. This signal, when LOW, causes wait states to be inserted into the transfer
and gives the slave extra time to provide or sample data.
The slave uses HRESP to indicate the success or failure of a transfer. 



Table 2. Response type values

Description               HRESP[1:0]
Completed successfully    00
Error occurred            01
Master should retry       10
Perform split protocol    11
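To make the HREADY/HRESP handshake concrete, the short Python sketch below models a slave that stretches the data phase with wait states before signalling its response. It is a minimal illustration of the rules described above, not the authors' SystemVerilog VIP: the number of wait states is an arbitrary assumption, and the real protocol's two-cycle ERROR response is simplified to a single cycle here.

# Sketch of an AHB-Lite data phase: the slave drives HREADY LOW to insert
# wait states and, on the final cycle, drives HREADY HIGH together with the
# response code from Table 2 (00 = completed successfully, 01 = error).
OKAY, ERROR = 0b00, 0b01

def data_phase(wait_states, transfer_ok=True):
    """Yield (HREADY, HRESP) for each clock cycle of one data phase."""
    for _ in range(wait_states):
        yield 0, OKAY                            # HREADY LOW extends the phase
    yield 1, OKAY if transfer_ok else ERROR      # completion cycle

for cycle, (hready, hresp) in enumerate(data_phase(wait_states=2), start=1):
    print(f"cycle {cycle}: HREADY={hready} HRESP={hresp:02b}")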



V. Differences from the Full AHB Specification

The AHB-Lite specification differs from the full AHB specification in the following ways [2]:

• Only one master. There is only one source of address, control, and write data, so no Master-to-Slave
multiplexor is required.

• No arbiter. None of the signals associated with the arbiter are used. 

• Master has no HBUSREQ output. If such an output exists on a master, it is left 
unconnected. 

• Master has no HGRANT input. If such an input exists on a master, it is tied HIGH. 

• Slaves must not produce either a Split or Retry response. 

• The AHB-Lite lock signal is the same as HMASTLOCK and it has the same timing as the
address bus and other control signals. If a master has an HLOCK output, it can be retimed to
generate HMASTLOCK.

• The AHB-Lite lock signal must remain stable throughout a burst of transfers, in the same way
that other control signals must remain constant throughout a burst.

VI. Compatibility 

Table 3 shows how masters and slaves designed for use in either full AHB or AHB-Lite can be used
interchangeably in different systems.

Table 3. Component compatibility

Component                      Full AHB system                    AHB-Lite system
Full AHB master                ✓                                  ✓
AHB-Lite master                Use standard AHB master wrapper    ✓
AHB slave (no Split/Retry)     ✓                                  ✓
AHB slave with Split/Retry     ✓                                  Use standard AHB master wrapper



VII. Simulation Results of the AHB-Lite Protocol Design

Figure 4 shows a single write and read operation taking place on the AHB-Lite bus. In the
simulated result, the data is written through the signal Hw_data at the address Haddr when the Hwrite
signal is active high. The same data is then read back by the system on the signal Hr_data at the same
address when the Hwrite signal is low.
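The write-then-read check just described can be captured by a scoreboard-style model of the slave. The Python sketch below only illustrates the intent of the test case (write a word, read it back at the same address, compare); the memory model and all names and values are assumptions and do not correspond to the authors' SystemVerilog testbench.

# Scoreboard-style check of the single write/read test: drive a write
# (Hwrite high) and then a read (Hwrite low) at the same Haddr, and compare
# Hr_data against the value that was driven on Hw_data.
class SimpleAhbLiteSlave:
    """Zero-wait-state memory model addressed by word."""
    def __init__(self):
        self.mem = {}

    def access(self, haddr, hwrite, hw_data=None):
        if hwrite:                        # write transfer
            self.mem[haddr] = hw_data
            return None
        return self.mem.get(haddr, 0)     # read transfer returns Hr_data

slave = SimpleAhbLiteSlave()
haddr, hw_data = 0x0000_0040, 0xA5A5_5A5A     # illustrative stimulus only
slave.access(haddr, hwrite=1, hw_data=hw_data)
hr_data = slave.access(haddr, hwrite=0)
assert hr_data == hw_data, "read-back mismatch"
print(f"read back {hr_data:#010x} from address {haddr:#010x}")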











Figure 4. Single write and read operation 



(Waveform showing the signals Hclk, Hreset_n, Hselx, Hready, Htrans, Hprot, Hburst, Hsize, Hw_data, Hr_data, Hready_out, p_state, n_state and count.)



Figure 5. Read operation with unwritten location

In Figure 5, a read operation from an unwritten location takes place, i.e. a randomized read. It
shows that the address is applied but Hw_data is 00000000 because this read takes
place after the reset.













Figure 6. write_inc_8 operation 



Figure 6 shows an incrementing burst of eight (inc_8) write transfers on Haddr. Figure 7 shows
incrementing bursts of four (inc_4) for both write and read.






Figure 7. Write and read with inc_4 




VIII. Coverage Analysis 



(Functional coverage report for the covergroup $unit.ahb_slv_trans::ahb_cov, with the coverpoint addr_cp (bins b1-b4) covered and the coverpoint data_cp binned automatically across the 32-bit data range.)



Figure 8. Coverage Analysis 

The coverage report gives the details of the functional coverage. A complete analysis was
done for the AHB-Lite VIP and the coverage report was generated as shown in Figure 8. It is found that
the coverage is 100%.

IX. Conclusion 

This paper has given a general description of the AHB-Lite protocol. The high-performance AHB-Lite
protocol represents a significant advance in the capabilities of the ARM AMBA™ on-chip interconnect
strategy by providing a solution that reduces latencies and increases bus bandwidth. AHB-Lite is fully
compatible with the current AHB specification. AHB-Lite increases the choice of architectures
available to the AMBA bus-based designer, and is supported by a comprehensive range of products
from ARM.

References 

[1] ARM, "AMBA 3 AHB-Lite", available at http://www.arm.com/.

[2] ARM, "AMBA Specification (Rev 2.0)", available at http://www.arm.com.

[3] ARM, "AMBA AXI Protocol Specification", available at http://www.arm.com.

[4] Samir Palnitkar, "Verilog HDL: A Guide to Digital Design and Synthesis", Second Edition.

[5] Chris Spear, "SystemVerilog for Verification", New York: Springer, 2006.

[6] http://www.testbench.co.in

[7] http://www.doulos.com/knowhow/sysverilog/ovm/tutorial_0

[8] http://www.inno-logic.com/resourcesVMM.html

[9] Akhilesh Kumar and Richa Sinha, "Design and Verification Analysis of APB3 Protocol with Coverage", IJAET, Vol. 1, Issue 5, pp. 310-317, Nov 2011.

[10] http://en.wikipedia.org/wiki/Advanced_Microcontroller_Bus_Architecture

[11] http://books.google.co.in/books

[12] http://www.asicguru.com

[13] http://www.arm.com

[14] Bergeron, Janick, "Writing Testbenches: Functional Verification of HDL Models", s.l.: Springer, 2003.




Authors 

Richa Sinha received B.E. Degree from RajaRamBapu Institute of Technology Shivaji 
University, Kolhapur, Maharashtra, India in 2007. Currently she is pursuing M. Tech 
project work under the guidance of Prof. Akhilesh Kumar in the Department of Electronics 
& Communication Engg., N.I.T., Jamshedpur. Her field of interest is ASIC Design &
Verification.

Akhilesh Kumar received B.Tech degree from Bhagalpur University, Bihar, India in 1986 
and M.Tech degree from Ranchi, Bihar, India in 1993. He has been working in teaching and 
research profession since 1989. He is now working as H.O.D. in Department of Electronics 
and Communication Engineering at N.I.T. Jamshedpur, Jharkhand, India. His field of
research interest is analog and digital circuit design in VLSI.

A. K. Sinha is presently Associate Professor and Head of the Department of Physics at 
S.G.G.S College, Patna Saheb, Bihar, India. She received her M.Sc. (Physics) and Ph.D. degrees
from Magadh University, Bodh Gaya, Bihar in August 1981 and June 2003
respectively. Her fields of interest are Material Science, Semi-Conductor Devices, Electronic
Spectra, Structure of Polyatomic Molecules, VLSI and its related fields. 











Impact of Voltage Regulators in Unbalanced Radial Distribution Systems using Particle Swarm Optimization

Puthireddy Umapathi Reddy¹*, Sirigiri Sivanagaraju², Prabandhamkam Sangameswararaju³

¹Department of Electrical and Electronics Engineering, Sree Vidyanikethan Engineering College, Tirupati, India.
²Department of Electrical and Electronics Engineering, Jawaharlal Nehru Technological University College of Engineering Kakinada, Kakinada, India.
³Department of Electrical and Electronics Engineering, Sri Venkateswara University College of Engineering, Tirupati, India.



Abstract 

In rural power systems, the Automatic Voltage Regulators (AVRs) help to reduce energy loss and to improve the 
power quality of electric utilities, compensating the voltage drops through distribution lines. This paper 
presents selection of optimal location and selection of tap setting for voltage regulators in Unbalanced Radial 
Distribution Systems (URDS). PSO is used for selecting the voltage regulator tap position in an unbalanced 
radial distribution system. An algorithm makes the initial selection, installation and tap position setting of the 
voltage regulators to provide a good voltage profile and to minimize power loss along the distribution network. 
The effectiveness of the proposed method is illustrated on a test system of 25 bus unbalanced radial distribution 
systems. 

KEYWORDS: Unbalanced radial distribution systems, Voltage regulator placement, Loss minimization,
Particle swarm optimization.

I. Introduction 

This paper describes a new approach for modelling of automatic voltage regulator in the 
forward/backward sweep-based algorithms for unbalanced radial distribution systems [1], [2]. A 
voltage regulator is a device that keeps a predetermined voltage in a distribution network despite
the load variations within its rated power [3], [4]. Since it is the utilities' responsibility to keep the
customer voltage within specified tolerances, voltage regulation is an important subject in electrical
distribution engineering [5]. However, most equipment and appliances operate satisfactorily over 
some 'reasonable' range of voltages; hence, certain tolerances are allowed at the customers' end. 
Thus, it is common practice among utilities to stay within preferred voltage levels and ranges of 
variations for satisfactory operation of apparatus as set by various standards [6]. In distribution 
systems operation, shunt capacitor banks and feeder regulators are necessary for providing acceptable 
voltage profiles to all end-use customers and reducing power losses on large distribution systems [9]. 
A voltage regulator is equipped with controls and accessories for its tap to be adjusted automatically 
under load conditions. Moreover, it can be controlled by the installation of devices such as fixed and 
controlled capacitors banks, transformers with On Load Tap Changers (OLTCs), and Automatic 
Voltage Regulators (AVRs) [11], [12]. Loss reduction and improvement of voltage profile have been 
also studied by using OLTCs [13]. One of the most important devices to be utilized for the voltage 




regulation is the AVR, which can be operated in manual or automatic mode. In the manual mode, the
output voltage can be manually raised or lowered on the regulator's control board, and the regulator can be
modelled as a constant-ratio transformer in power flow algorithms [14]. In the automatic mode, the
regulator control mechanism adjusts the taps to ensure that the monitored voltage is within a
certain range [16].

Optimal power flow analysis is used to determine the optimal tap position and the ON/OFF state of 
the capacitor banks. The same problem is solved by Vu et al. [7] using the loss equation as the 
objective function and voltage inequalities as constraints through the use of an artificial neural 
network. Safigianni and Salis [10] proposed the number and location of AVRs by using a sequential 
algorithm. In addition to this, the objective function is defined by using the AVR's investment and 
maintenance costs and also the cost of the total energy losses. Chiou et al. [15] initially attempted to 
solve the problem of voltage regulator by changing the tap positions at the substation and later solved 
the capacitor problem. J. Mendoza et al. [17] developed a method for optimal location of AVRs in 
radial distribution networks using simple genetic algorithms. However, only a few
publications have treated the complex problem of the optimal location of AVRs in distribution
systems, despite the benefits of including AVR devices. In [18], automatic voltage regulators
(AVRs) are included in the sweep-based load flow methods and tested on two distribution test
systems; that method deals with the placement and tap positions of voltage
regulators for power loss reduction and voltage profile improvement. Daniela Proto, Pietro
Varilone et al. [19] discussed voltage regulator and capacitor placement in three-phase
distribution systems with non-linear and unbalanced loads. Multiobjective location of automatic
voltage regulators in a radial distribution network using a micro genetic algorithm is given in [20].
Suitable methods for optimal distribution voltage control and loss minimization are proposed in [21],
[22]. Integrated volt/VAr control in distribution systems is illustrated in [23], [24]. Voltage
regulators/controllers for mitigating the impact of distributed generation on voltage levels in radial
distribution systems are proposed in [26], [27] to improve the voltage profile.

This paper explains the mathematical model, the algorithm for finding the tap settings of a voltage
regulator, the implementation of PSO, and the results and discussion. The branch that has the highest voltage
drop is picked as the best location for voltage regulator placement. PSO is used to select the
tap positions of the voltage regulators so that the voltages of the unbalanced radial distribution
system are maintained within their limits while minimizing an objective function consisting of the
power loss.

II. MATHEMATICAL FORMULATION 

In this paper, in order to maintain the voltage profile and to reduce the power losses, voltage
regulators are installed in the distribution system. The optimization problem has been split into
two sub-problems: locating the AVRs on the network and selecting the tap positions of the AVRs.

2.1 Optimal location of Automatic Voltage Regulators (AVR) 

The optimal location of the voltage regulator (AVR) is defined as a function of two objectives, one
representing power loss reduction and the other representing the voltage deviations; both are
essential to secure the power supply. It is difficult to formulate the problem in terms of the cost incidence
of these objectives on system operation because, even though the cost incidence of power losses
is clear, it is not so for keeping the bus voltages close to the rated value.

The objective function to be minimized is 

Minimize f = Σ (j = 1 to nb) P_loss,j^abc                                          (1)

where P_loss,j^abc is the active power loss in the j-th branch (over phases a, b and c) after voltage regulator placement,
and 'nb' is the number of branches in the system.




2.2 Tap Position Selection 

In general, the voltage at bus 'q' after voltage regulator placement can be calculated as

V_q^ph(new) = V_q^ph ± Tap^ph × V_rated                                            (2)

The tap position is found by comparing the voltage obtained before VR installation with the
lower and upper voltage limits:

'+' for boosting of voltage
'−' for bucking of voltage

The bus voltages are computed by load flow analysis for every change in the tap setting of the
voltage regulators, until all bus voltages are within the specified limits.

III. Algorithm for Finding the Tap Settings of a Regulator 

Step 1: Read the system and regulator data.

Step 2: Calculate the current of the branch in which the regulator is inserted from the backward sweep.

Step 3: Find the CT ratio for the three phases as

CT_ratio^ph = CT_P^ph / CT_S^ph, where CT_S^ph = 5 A                               (3)

Step 4: Convert the compensator R and X settings from volts to ohms as

(R + jX)_ohms^ph = (R_setting + jX_setting)_volts^ph / CT_S^ph                     (4)

Step 5: Calculate the current in the compensator:

I_comp^ph = (current in the branch)^ph / CT_ratio^ph                               (5)

Step 6: Calculate the input voltage to the compensator as

V_reg^ph = (voltage at the sending end of the branch)^ph / PT_ratio^ph             (6)

Step 7: Calculate the voltage drop in the compensator circuit:

V_drop^ph = (R + jX)_ohms^ph × I_comp^ph                                           (7)

Step 8: Calculate the voltage across the voltage relays in the three phases:

V_R^ph = V_reg^ph − V_drop^ph                                                      (8)

Step 9: Find the tap position of the regulator:

Tap^ph = [(lower limit of the voltage)^ph − V_R^ph] / (change in voltage per step of the regulator)^ph   (9)

Step 10: Compute the voltage output of the regulator:

V_ro^ph = (voltage at the sending end of the branch)^ph ± Tap^ph × 0.00625         (10)

'+' for raise
'−' for lower
Step 11: Stop 
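As a rough numerical illustration of Steps 5-10, the Python sketch below computes the compensator voltage and the resulting tap number for one phase. All input values (branch current, sending-end voltage, CT/PT ratios and compensator settings) are invented example numbers, not data from the paper; the step size corresponds to the 0.00625 p.u. step of equation (10) on a 120 V relay base.

import math

# Illustrative single-phase tap computation following Steps 5-10.
PT_ratio = 60.0                    # potential transformer ratio (assumed)
CT_ratio = 100.0                   # CT primary / secondary ratio (assumed)
Z_comp_ohms = complex(3.0, 9.0)    # compensator (R + jX) in ohms, eq. (4)
I_branch = complex(250.0, -50.0)   # branch current from the backward sweep (A)
V_send = complex(7200.0, 0.0)      # sending-end line-to-neutral voltage (V)
V_lower_limit = 119.0              # lower voltage limit on the 120 V base
step_volts = 0.00625 * 120.0       # voltage change per regulator step

I_comp = I_branch / CT_ratio                   # eq. (5)
V_reg = V_send / PT_ratio                      # eq. (6)
V_drop = Z_comp_ohms * I_comp                  # eq. (7)
V_relay = V_reg - V_drop                       # eq. (8)
tap = math.ceil((V_lower_limit - abs(V_relay)) / step_volts)   # eq. (9)
print(f"relay voltage = {abs(V_relay):.2f} V, tap position = {tap}")

In practice the computed tap would be limited to the regulator's range (typically ±16 steps) before being applied in equation (10).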

IV. Implementation of PSO 

In this section, the optimal voltage regulator tap setting at the candidate node of the unbalanced radial
distribution system is selected using PSO.




4.1 Initialization of PSO Parameters 

The control parameters, such as the lower and upper bounds of the node voltage and the tap settings of the voltage
regulators, are selected as the initialization parameters. An initial swarm (array) of
particles with random positions and velocities is then generated.

4.2 Evaluation of Fitness Function 

The fitness function should be capable of reflecting the objective and directing the search towards 
optimal solution. Since the PSO proceeds in the direction of evolving best-fit particles and the fitness 
value is the only information available to the PSO, the performance of the algorithm is highly 
sensitive to the fitness values. For each particle or swarm, the voltage regulators are placed at the 
sensitive nodes, the load flow is run to calculate the losses and the net saving using Eqn. (1), and this net
saving becomes the fitness function of the PSO (as the saving is maximized).

4.3 Optimal Solution 

The optimal solution consists of the best position and the corresponding fitness value for the target problem.
The information in the best position includes the optimal location and number of voltage regulators, and
the corresponding tap setting values that maximize the total saving of the system.
Accordingly, the optimal location and number of voltage regulators, with the tap setting at each node, can
be determined.

This modification can be represented by the concept of velocity (modified value for the current 
positions). Velocity of each particle can be modified by the following equation 

V_i^(k+1) = W·V_i^k + C1·rand1 × (Pbest_i − X_i^k) + C2·rand2 × (Gbest − X_i^k)    (11)

where,
V_i^k        : velocity of particle i at iteration k,
V_i^(k+1)    : modified velocity of particle i at iteration k+1,
W            : inertia weight,
C1, C2       : acceleration constants,
rand1, rand2 : two random numbers,
X_i^k        : current position of particle i at iteration k,
Pbest_i      : Pbest of particle i,
Gbest        : Gbest of the group.

In equation (11), the term C1·rand1 × (Pbest_i − X_i^k) is called the particle memory influence and
the term C2·rand2 × (Gbest − X_i^k) is called the swarm influence.
rand1 and rand2 are two random numbers drawn from a uniform distribution in the range 0.0 to 1.0.
W is the inertia weight, which controls the effect of the previous velocity vector on the new one. Suitable
selection of the inertia weight W provides a balance between global and local exploration, thus requiring
fewer iterations on average to find the optimal solution. A larger inertia weight W facilitates global
exploration, while a smaller inertia weight W tends to facilitate local exploration and fine-tuning.
The following inertia weight is usually used in equation (11):

W = W_max − ((W_max − W_min) / iter_max) × iter                                    (12)

where,
W_max    : initial value of the inertia weight,
W_min    : final value of the inertia weight,
iter_max : maximum iteration number,
iter     : current iteration number.

Accordingly, the optimal types and sizes of voltage regulators to be placed at each compensation node 

can be determined. 
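The velocity and position updates of equations (11) and (12) can be summarised in a few lines of Python. The sketch below is only an illustration of the update rule, using the parameter values quoted in Section V (20 particles, 100 iterations, inertia weight from 0.9 to 0.4, acceleration constants of 4); the fitness function is a placeholder, since in the paper the fitness comes from running the three-phase load flow and evaluating equation (1), and the tap-setting bounds are assumed values.

import random

def fitness(tap):                  # placeholder objective (assumed); lower is better
    return (tap - 7.0) ** 2

N, K_max = 20, 100                 # particles and iterations (Section V)
W_max, W_min, C1, C2 = 0.9, 0.4, 4.0, 4.0
tap_lo, tap_hi = -16.0, 16.0       # assumed tap-setting bounds

x = [random.uniform(tap_lo, tap_hi) for _ in range(N)]   # positions
v = [0.0] * N                                            # velocities
pbest = list(x)
gbest = min(pbest, key=fitness)

for it in range(K_max):
    W = W_max - (W_max - W_min) * it / K_max             # eq. (12)
    for i in range(N):
        r1, r2 = random.random(), random.random()
        v[i] = W * v[i] + C1 * r1 * (pbest[i] - x[i]) + C2 * r2 * (gbest - x[i])   # eq. (11)
        x[i] = min(max(x[i] + v[i], tap_lo), tap_hi)      # move, then clamp to bounds
        if fitness(x[i]) < fitness(pbest[i]):
            pbest[i] = x[i]
    gbest = min(pbest, key=fitness)

print(f"best tap setting found: {round(gbest)}")

Each continuous position would be rounded to the nearest integer tap before the load flow is run; the clamping above plays the role of the boundary check in Step 10 of the algorithm in Section 4.4.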




4.4 Algorithm for Optimal Location using PLI and Tap setting of VR using PSO 

The detailed algorithm is to determine optimal location along with tap setting of voltage regulator is 

given below. 

Step 1: Read system data such as line data and load data of distribution system. 

Step 2: Initialize the PSO parameters such as the number of agents (M), number of particles (N),
number of iterations (K_max), initial value of the inertia weight (W_max), final value of the inertia
weight (W_min) and acceleration constants (C1 & C2).
Step 3: Initialize the parameters as explained in Section 4.1.
Step 4: Obtain the optimal location of the VR by using the PLI (Power Loss Index) as input.
Step 5: Initialize the swarm by assigning a random position and velocity in the problem hyperspace
to each particle, where each particle is a candidate tap setting of the VR.
Step 6: Run the load flow and compute the fitness value of each particle using equation (1).
Step 7: Compare the present fitness value of the i-th particle with its historical best fitness value. If the
present value is better than Pbest, update Pbest; else retain Pbest as it is.
Step 8: Find the Gbest value from the obtained Pbest values.
Step 9: Update the particle velocities and positions using eqns. (11) & (12).
Step 10: Apply boundary conditions to the particles 

Step 11: Execute steps 6-10 in a loop for the maximum number of iterations (K_max).
Step 12: Stop the execution and display the Gbest values as the final result for optimal tap setting of 

voltage regulator. 

V. Results and Discussion 

The performance of the proposed method is evaluated on a 25-bus URDS test system, finding the
placement and tap settings of the voltage regulator. For the positioning of the
voltage regulators, the upper and lower bounds of voltage are taken as ±5% of the base value. Proper
allocation of the VR gives minimum losses in the URDS and improves the performance of the system. The real
and reactive power losses of the given test system are controlled by controlling the voltage regulator size and
location. The PSO parameter values for voltage regulator placement are: number of particles (N) = 20,
number of iterations (K_max) = 100, initial value of the inertia weight (W_max) = 0.9, final value of the
inertia weight (W_min) = 0.4, and acceleration constants (C1 & C2) = 4. The proposed method is illustrated
on a test system consisting of a 25-bus URDS.

5.1 Case Study 

Power loss indices for the 25-bus URDS are shown in Figure 1. From Figure 1 it can be concluded
that the branch with a power loss index above 0.6, at the minimum voltage point, is where the voltage
regulator should be located. The proposed algorithm is tested on the 25-bus URDS shown in Figure 2.
The line and load data of this system are given in [18]. The tap settings of the regulator are obtained
with the PSO algorithm. The single line diagram of the 25-bus URDS after voltage regulator placement
is shown in Figure 2.




Figure 1. Power loss indices for 25 bus URDS









Figure 2. Single line diagram of 25 bus URDS after voltage regulator placement.













(Voltage profiles for phases A, B and C plotted against node number, comparing the base case, the GA method and the PSO method.)



Figure 3. Voltage profile



(Real power loss for each phase plotted against branch number, comparing the base case, the GA method and the PSO method.)



Figure 4. Real power loss 






(Reactive power loss for each phase plotted against branch number.)



Figure 5. Reactive power losses
Table 1. Summary of test results of 25 bus URDS with voltage regulator placement



(values given per phase A / B / C)

Description                                        Without regulators            With regulators,               With regulators,
                                                                                 existing method [18]           proposed method
Min. voltage (p.u.)                                0.9284 / 0.9284 / 0.9366      0.9995 / 1.0000 / 1.0000       1.0000 / 1.0000 / 1.0000
Max. voltage regulation (p.u.)                     0.0726 / 0.0716 / 0.0634      0.0005 / 0.0000 / 0.0000       0.0000 / 0.0000 / 0.0000
Improvement in max. voltage regulation (%)         -                             99.31 / 100 / 100              100 / 100 / 100
Total active power demand (kW)                     1126.12 / 1138.74 / 1125.16   1114.21 / 1124.65 / 1117.73    1112.16 / 1124.29 / 1114.95
Total active power loss (kW)                       52.81 / 55.44 / 41.86         40.91 / 41.36 / 34.42          39.16 / 40.99 / 31.65
Improvement in active power loss reduction (%)     -                             22.55 / 25.40 / 17.77          25.84 / 26.06 / 24.38
Total reactive power demand (kVAr)                 851.32 / 854.30 / 855.69      837.17 / 839.98 / 844.49       835.10 / 839.97 / 841.41
Total reactive power loss (kVAr)                   58.32 / 53.29 / 55.69         45.17 / 38.98 / 44.49          43.10 / 38.97 / 41.44
Improvement in reactive power loss reduction (%)   -                             22.54 / 24.85 / 20.11          26.09 / 26.88 / 25.59
Total feeder demand (kVA)                          1411.09 / 1423.57 / 1413.58   1393.67 / 1403.72 / 1400.84    1391.03 / 1403.42 / 1396.84
Released feeder capacity (kVA)                     -                             17.42 / 19.85 / 12.74          20.06 / 20.16 / 16.74
Total system power loss (kW): best / worst / avg   -                             116.69 / 150.12 / 117.60       111.80 / 116.52 / 112.01
Execution time (s)                                 -                             73.73                          66.20



From Figure 2 it can be concluded that the first branch has a larger voltage drop than the others; therefore, the voltage
regulator should be placed in this branch. It boosts the voltage of the whole network and, in turn, the power losses
are minimized.

The voltage profile values, the active and reactive power losses and the summary of test results of the 25-bus URDS for
voltage regulator placement are given in Figure 3, Figure 4 and Table 1 respectively. From Table 1, it is
observed that the minimum voltages in phases A, B and C are improved from 0.9284, 0.9284 and
0.9366 p.u. (without regulators) to 1.0, 1.0 and 1.0 p.u. (with regulators) respectively, and the active
power loss in phases A, B and C is reduced from 52.82, 55.44 and 41.86 kW to 39.16, 40.99 and
31.65 kW respectively. Hence, there is an improvement in the minimum voltage and a reduction in




active power loss after voltage regulator placement compared with before placement. The total active
power loss versus generation number for the 25-bus URDS is shown in Figure 6.




Figure 6. Total active power loss vs. generation number of 25 bus URDS

VI. Conclusions 

This paper presents a simple method to determine the optimal allocation and tap settings of voltage
regulators in unbalanced radial distribution systems, through voltage drop analysis and the PSO
algorithm respectively. The effectiveness of the PSO has been demonstrated and tested. The proposed
PSO-based methodology was applied to a 25-bus URDS. The obtained solution succeeded in
reducing the total active power losses by 25.43% in the 25-bus URDS. From the test results, it can be said that
the proposed model is valid and reliable, and that the performance of the algorithm is not significantly
affected by the inclusion of the regulator model. The power loss per phase of an unbalanced
distribution system can be reduced by proper placement of voltage regulators. In addition to power loss
reduction, the voltage profile is also improved by the proposed method. The time of execution is
reduced from 73.7 to 66.20 seconds for the same system configuration.

References 

[1] R.R. Shouts, M.S. Chen, and L. Schwobel, "Simplified feeder modelling for load flow calculations", 

IEEE Transactions Power Systems, vol.2, pp. 168-174, 1987. 
[2] T.-H. Chen, M.-S. Chen, KJ. Hwang, P. Kotas, and E. A. Chebli, " Distribution system power flow 

analysis - A rigid approach", IEEE Transactions on Power Delivery, vol. 6, pp. 1146-1152, July 1991. 
[3] D. Rajicic, R. Ackovski, and R. Taleski, "Voltage correction power flow", IEEE Transactions on 

Power Delivery, vol. 9, pp. 1056-1062, Apr. 1994. 
[4] S.K. Chang, G. Marks, and K. Kato, "Optimal real time voltage control", IEEE Transactions Power 

Systems, vol. 5, no. 3, pp. 750-758, Aug. 1990. 
[5] C. J. Bridenbaugh, D. A. DiMascio, and R. D'Aquila, "Voltage control improvement through capacitor 

and transformer tap optimization", IEEE Transactions Power Systems, vol. 7, no. 1, pp. 222- 226, 

Feb. 1992. 
[6] C. S. Cheng and D. Shirmohammadi, "A Three Phase Power Flow Method for Real Time Distribution 

System Analysis", IEEE Transactions on Power Systems, vol. 10, no. 2 pp 671- 679, May 1995. 
[7] H. Vu, P. Pruvot, C. Launay, and Y. Harmand, "An improved voltage control on large-scale power 

systems", IEEE Transactions Power Systems, vol. 11, no. 3, pp. 1295-1303, Aug. 1996. 




[8] Z. Gu and D. T. Rizy, "Neural network for combined control of capacitor banks and voltage 

regulators in distribution systems", IEEE Transactions Power Delivery, vol. 11, no. 4, pp. 1921-1928, 

Oct. 1996. 
[9] M. M. A Salama, N. Manojlovic, V. H. Quintana, and A. Y. Chikhani, "Real-Time Optimal Reactive 

Power Control for Distribution Networks", International Journal of Electrical Power & Energy 

Systems, vol. 18, no. 3, pp. 185-193, 1996. 
[10] A. Safigianni and G. Salis, "Optimal voltage regulator placement in radial distribution network," 

IEEE Trans, on Power Systems, vol. 15, no. 2, pp. 879-886, May 2000. 
[11] M. A. S. Masoum, A. Jafarian, M. Ladjevardi, E. F. Fuchs, and W. N Grady, "Fuzzy approach for 

optimal Placement and sizing of capacitor banks in the presence of harmonic", IEEE Transactions 

Power Delivery, vol. 16, no. 2, pp. 822-829, Apr. 2004. 
[12] B. Alencar de Souza, H. do Nascimento Alves, and H. A. Ferreira, "Micro genetic algorithms and 

fuzzy logic applied to the optimal placement of capacitor banks in distribution networks," IEEE 

Transactions Power Systems, vol. 19, no. 2, pp. 942-947, May 2004. 
[13] B. Milosevic and M. Begovic, "Capacitor placement for conservative voltage reduction on 

distribution feeders," IEEE Transactions Power Delivery, vol. 19, no. 3, pp. 1360-1367, July 2004. 
[14] M. A. S. Masoum, M. Ladjevardi, A. Jafarian, and E. Fuchs, "Optimal placement, replacement and 

sizing of voltage regulators in distorted distribution networks by genetic algorithms", IEEE 

Transactions Power Delivery, vol. 19, no. 4, pp. 1794-1801, Oct. 2004. 
[15] J. Chiou, C. Chang, and C. Su, "Ant direction hybrid differential evolution for solving large capacitor 

placement problems", IEEE Transactions Power Systems, vol. 19, no. 4, pp. 1794-1800, Nov. 2004. 
[16] A. Augugliaro, L. Dusonchet, S. Favazza, and E. Riva, "Voltage regulation and power losses 

minimization in automated distribution networks by an evolutionary multiobjective approach," 

IEEE Trans. Power Syst., vol. 19, no. 3, pp. 1516-11527, Aug. 2004. 
[17] J. Mendoza et, al "optimal location of voltage regulators in radial distribution networks using genetic 

algorithms," in Proceedings 15 th power systems computation conference, Bellgium, Augest 2005. 
[18] J.B.V. Subramnyam, "Optimal capacitor placement in unbalanced radial distribution networks 

Journal of Theoretical and Applied Information Technology vol:6,N0:l,ppl06-115. 2009. 
[19] Daniela Proto, Pietro Varilone, " Voltage Regulators and Capacitor Placement in Three-phase 

Distribution Systems with Non-linear and Unbalanced Loads" International Journal of Emerging 

Electric Power Systems, Vol. 7, No.4, Nov. 2010. 
[20] J.E. Mendoza, D.A. Morales, R.A.Lopez, E.A.Lopez, "Multiobjective Location of Automatic Voltage 

regulators in a radial Distribution Network Using aMicro Genetic Algorithm" IEEE Transactions 

Power Systems, vol. 22, no. 1, pp. 404-412, Feb. 2007. 
[21] T.Senjyu, Y.Miyazato, A. Yona, N.Urasaki, T.Funabashi," Optimal Distribution Voltage Control and 

Coordination With Distributed Generation" IEEE Transactions Power Delivery, vol. 23, no. 2, pp. 

1236-1242, April 2008. 
[22] H.A Attia, " Optimal voltage profile control and losses minimization of radial distribution feeder" 

Power System Conference, (MEPCON 2008), pp 453-458,March 2008. 
[23] P.V.V.RamaRao, S.Sivanagaraju, "Voltage Regulator Placement In Radial distribution Network Using 

Plant Growth Simulation Algorithm" International Journal of Engineering, Science and Technology, 

Vol. 2, No. 6, pp. 207-217, 2010. 
[24] V. Borozan, M.E.Baran,D. Novosel," Integrated volt/VAr control in distribution systems" IEEE Power 

Engineering Society Winter Meeting, vol. 3, pp. 1485-1490, Feb 2010. 
[25] B.A. De Souza, A.M.F de Almeida, " Multi objective Optimization and Fuzzy Logic Applied to 

Planning of the Volt/ Var Problem in Distributions Systems" IEEE Transactions Power Systems, vol. 

25, no. 3, pp. 1274-1281, Aug. 2010. 
[26] Srikanth Apparaju, Sri Chandan K "impact of Distribution Generation on voltage Levels in Radial 

Distribution Systems" International Journal of Engineering Research and Applications Vol. 1, Issue 2, 

pp. 277-281, 2010.
[27] Jianzhong Tong; Souder, D.W. Pilong, C. Mingye Zhang, Qinglai Guo, Hongbin Sun, Boming Zhang," 

Voltage control practices and tools used for system voltage control of PJM" IEEE power and Energy 

Society General Meeting, pp. 1-5, July 2011. 

Authors 



P. Umapathi Reddy: He received his B.E. from Andhra University and M.Tech. (Electrical
Power Systems) from Jawaharlal Nehru Technological University, Anantapur, India in 1998
and 2004 respectively, and is now pursuing the Ph.D. degree. Currently he is with the Department of
Electrical and Electronics Engineering, Sree Vidyanikethan Engineering College, Tirupati,
India. His research interests include power distribution systems and power system operation







and control. He is Life Member of Indian Society for Technical Education. 

S. Sivanaga Raju: He received his B.E. from Andhra University, his M.Tech. degree in 2000 from
IIT Kharagpur, and his Ph.D. from Jawaharlal Nehru Technological University, Anantapur,
India in 2004. He is presently working as Associate Professor in the J.N.T.U. College of
Engineering Kakinada (Autonomous), Kakinada, Andhra Pradesh, India. He received two national
awards (the Pandit Madan Mohan Malaviya Memorial Prize and the best paper prize award) from the
Institute of Engineers (India) for the year 2003-04. He is a referee for IEEE journals. He has
around 75 national and international journal papers to his credit. His research interests include power
distribution automation and power system operation and control.

P. Sangameswara Raju: He is presently working as Professor in the S.V.U. College of Engineering,
Tirupati. He obtained his diploma and B.Tech. in Electrical Engineering, M.Tech. in power system
operation and control, and Ph.D. from S.V. University, Tirupati. His areas of interest are power
system operation, planning, the application of fuzzy logic to power systems, and the application of
non-linear controllers to power systems.










Study on Performance of Chemically Stabilized 

Expansive Soil 

P. VenkaraMuthyalu, K. Ramu and G.V.R. Prasada Raju 
Department of Civil Engg., University College of Engineering, JNTUK, Kakinada, India 



ABSTRACT 

Expansive soils, such as black cotton soils, are basically susceptible to detrimental volumetric changes with
changes in moisture. This behaviour of the soil is attributed to the presence of the mineral montmorillonite, which has
an expanding lattice. Understanding the behaviour of expansive soil and adopting appropriate control
measures has been a great task for geotechnical engineers. Extensive research is going on to find
solutions for black cotton soils, and many methods are available to control the expansive nature of
these soils. Treating the expansive soil with electrolytes is one technique to improve the behaviour of
expansive ground. Hence, in the present work, experimentation is carried out to investigate the influence of
electrolytes, i.e. potassium chloride, calcium chloride and ferric chloride, on the properties of expansive soil.

KEYWORDS: Expansive soil, Calcium Chloride, Potassium Chloride, Ferric Chloride

I. Introduction 

Expansive soil is one among the problematic soils that has a high potential for shrinking or swelling 
due to change of moisture content. Expansive soils can be found on almost all the continents on the 
Earth. Destructive results caused by this type of soils have been reported in many countries. In India, 
large tracts are covered by expansive soils known as black cotton soils. The major area of their 
occurrence is the south Vindhyachal range covering almost the entire Deccan Plateau. These soils 
cover an area of about 200,000 square miles and thus form about 20% of the total area of India. The 
primary problem that arises with regard to expansive soils is that deformations are significantly 
greater than the elastic deformations and they cannot be predicted by the classical elastic or plastic 
theory. Movement is usually in an uneven pattern and of such a magnitude to cause extensive damage 
to the structures resting on them. 

Proper remedial measures are to be adopted to modify the soil or to reduce its detrimental effects if
expansive soils are identified in a project. The remedial measures can differ between the planning and
design stages and the post-construction stage. Many stabilization techniques are in practice for
improving expansive soils, in which the characteristics of the soils are altered or the problematic
soils are removed and replaced; these can be used alone or in conjunction with specific design
alternatives. Additives such as lime, cement, calcium chloride, rice husk, fly ash etc. are also used to
alter the characteristics of expansive soils. The characteristics that are of concern to design
engineers are permeability, compressibility and durability. The effect of the additives and the
optimum amount of additive to be used depend mainly on the mineralogical composition of
the soils. The paper focuses on the various stabilization techniques that are in practice for
improving the expansive soil to reduce its swelling potential, and on the limitations of each method of
stabilization.

Modification of BC soil by chemical admixture is a common method for stabilizing the swell-shrink 
tendency of expansive soil [5]. Advantages of chemical stabilization are that they reduce the swell- 
shrink tendency of expansive soils and also render the soils less plastic. Among the chemical 




stabilization methods for expansive soils, lime stabilization is mostly adopted for improving the swell- 
shrink characteristics of expansive soils. The reaction between lime and clay in the presence of water 
can be divided into two distinct processes [20]. Calcium chloride has been used in place of lime, as
calcium chloride is more easily made into a calcium-charged supernatant than lime [40]. Electrolytes
like potassium chloride, calcium chloride and ferric chloride can be effectively used in
place of the conventionally used lime, because of their ready dissolvability in water and their supply of
adequate cations for ready cation exchange ([55], [56], [42]).

Calcium chloride is known to be more easily made into a calcium-charged supernatant than lime and
helps in ready cation exchange reactions [44]. CaCl2 might be effective in soils with expanding
lattice clays [33]. Stabilization of in-situ soil using a KOH solution has been carried out, and it was revealed that
the properties of black cotton soils in place can be altered by treating them with an aqueous solution of
KOH [27]. Laboratory tests reveal that the swelling characteristics of expansive soils can be
improved by flooding a given site with a properly chosen electrolyte solution, more so using
chlorides of divalent or multivalent cations [19]. The influence of CaCl2 and KOH on the strength and
consolidation characteristics of black cotton soil was studied [55], and an increase in strength
and a reduction in settlement and swelling were found. A 5% FeCl3 solution was used to treat the caustic-soda-contaminated
ground of an industrial building in Bangalore [55]. In this work an attempt is made to study the effect of
electrolytes like KCl, CaCl2 and FeCl3 on the properties of expansive soil.

The bibliography on soil stabilization with calcium chloride documents its wide use in highways [58].
It has been stated that CaCl2 enjoyed wide use as a dust palliative and for frost control of
subgrade soil [30], [18], [53].

When lime stabilization is intended to modify an in-situ expansive soil bed, it is commonly applied in
the form of lime piles ([24], [6], [23], [7], [1], [10], [65], [18], [51]) or lime slurry pressure injection
(LSPI) ([66], [63], [36], [58], [26], [9], [3], [59]).

Numerous investigators ([20], [34], [64], [43], [15], [41], [35], [45], [29], [37], [45], [4], [22], [2],
[31], [39], [32]) have studied the influence of lime, cement, lime-cement, lime-flyash, lime-rice-husk-ash
and cement-flyash mixes on soil properties, mostly focusing on the strength aspects to
study their suitability for road bases and sub-bases. As lime and cement are binding materials, the
strength of soil-additive mixtures increases provided the soil is reactive with them. However, for
large-scale field use, problems with soil pulverization and with mixing the additives into the soil have been
reported by several investigators ([20], [58], [9], [5], [44]).

It is an established fact that, whenever a new material or technique is introduced in pavement
construction, it becomes necessary to examine its validity by constructing a test track, where
the loading, traffic and other likely field conditions are simulated. Several test track studies
([38], [49], [54], [50], [12], [25], [8], [14], [17], [52]) have been carried out in many countries to
characterize pavement materials and to assess the effectiveness of remedial techniques developed
to deal with problematic conditions like freeze-thaw, expansive soils and other soft ground
problems.

Recent studies ([60], [28]) indicated that CaCl2 could be an effective alternative to conventionally used
lime because of its ready dissolvability in water and its supply of adequate calcium ions for exchange
reactions. [13] studied the use of KCl to modify heavy clay in the laboratory and revealed that, from an
engineering point of view, the use of KCl as a stabilizer appears potentially promising in locations
where it is readily and cheaply available. In the present work, the efficiency of potassium chloride
(KCl), calcium chloride (CaCl2) and ferric chloride (FeCl3) as stabilizing agents was extensively
studied in the laboratory for improving the properties of expansive soil.

The experiences of various researchers with field as well as laboratory chemical stabilization have
been presented briefly above. The laboratory experimental methodology is presented in the following
section.

II. Experimental Study 

2.1. Soil 

The black cotton soil was collected from Morampalem, a village nearer to Amalapuram of East 
Godavari District in Andhra Pradesh in India. The physical properties of the soil are given in Table 1. 




Table 1: Physical Properties of Expansive Soil

Property                                                          Value
Grain size distribution: Sand (%) / Silt (%) / Clay (%)           2 / 22 / 76
Atterberg limits: Liquid limit (%)                                85
                  Plastic limit (%)                               39
                  Plasticity index                                46
                  Shrinkage limit (%)                             12
Classification                                                    CH
Specific gravity                                                  2.68
Free swell index (%)                                              140
Compaction: Maximum dry density (g/cc)                            1.42
            Optimum moisture content (%)                          26.89
Soaked CBR of sample prepared at MDD and OMC (%)                  2
Permeability of sample prepared at OMC and MDD (cm/sec)           1.89 × 10⁻⁷
Shear strength of sample prepared at OMC and MDD:
            Cohesion, C (kg/cm²)                                  0.56
            Angle of internal friction, φ (degrees)               2



2.2. Chemicals 

Three chemicals of commercial grade, KCl, CaCl2 and FeCl3, are used in this study. The quantity of
chemical added to the expansive soil was varied from 0 to 1.5% by dry weight of soil.

2.3. Test Program

Electrolytes like KCl, CaCl2 and FeCl3 are mixed in different proportions with the expansive soil, and the
physical properties like liquid limit, plastic limit, shrinkage limit and DFS (differential free swell) of the
stabilized expansive soil are determined to study the influence of the electrolytes on the physical properties
of the expansive soil. The stabilized expansive soil with different percentages of electrolytes is then tested
for engineering properties, like permeability, compaction, unconfined compressive strength and shear
strength, to study the influence of the electrolytes on the expansive soil.

In this section the details of laboratory experimentation were presented. Analysis and discussion of 
test results will be presented in the next section. 



III. Results and Discussion 

3.1. Effect of Additives on Atterberg's Limits 

The variation of the liquid limit with different percentages of chemicals added to the expansive
soil is presented in Fig. 1. It is observed that the decrease in liquid limit is significant up to 1%
of chemical added to the expansive clay for all the chemicals; beyond 1% there is only a nominal decrease.
The maximum decrease in liquid limit of the stabilized expansive clay is observed with FeCl3,
compared with the other two chemicals, KCl and CaCl2. A nominal increase in the plastic limit of the
stabilized expansive clay is observed with increasing percentage of chemical (Fig. 2).
Fig. 3 shows the variation of the plasticity index with the addition of chemicals to the expansive clay. The
increase in the plastic limit and the decrease in the liquid limit cause a net reduction in the plasticity
index. It is observed that the reductions in plasticity index are 26%, 41% and 48% respectively for 1%
of KCl, CaCl2 and FeCl3 added to the expansive clay. The reduction in plasticity index with
chemical treatment could be attributed to the depressed double layer thickness due to cation exchange
by potassium, calcium and ferric ions.

The variation of the shrinkage limit with the percentage of chemical added to the expansive soil is
presented in Fig. 4. The shrinkage limit increases with increasing percentage of chemical added to the
expansive soil. With 1.5% chemical addition, the shrinkage limit of the stabilized




expansive clay is increased from 12% to 15.1%, 15.4% and 16% respectively for KCl, CaCl2 and
FeCl3.

3.2. Effect of Additives on DFS

The variation of the DFS of the stabilized expansive clay with the addition of different percentages of chemicals
is shown in Fig. 5. It is observed that the DFS decreases with increasing percentage of chemical
added to the expansive soil. A significant decrease in DFS is recorded in the stabilized expansive clay
with the addition of 1% of chemical. The reductions in the DFS of the stabilized expansive clay with the
addition of 1% chemical are 40%, 43% and 47% for KCl, CaCl2 and FeCl3 respectively, compared
with the untreated expansive clay. The reduction in DFS values is supported by the fact that the double
layer thickness is suppressed by cation exchange with potassium, calcium and ferric ions and by the
increased electrolyte concentration.








Fig.1: Variation of liquid limit with addition of percentage Chemical






Fig.2: Variation of Plastic limit with addition of percentage Chemical 




3.3. Effect of Additives on CBR 

Fig. 6 shows the variation of the CBR of the stabilized expansive clay with the addition of different percentages
of chemicals. It can be seen that the CBR increases with increasing percentage of chemical
added to the expansive soil. A significant increase in CBR is recorded in the stabilized expansive clay with the
addition of chemical up to 1%; beyond this percentage the increase in CBR is marginal. The increases
in CBR of the stabilized expansive clay with the addition of 1% chemical are 80%, 99% and 116% for
KCl, CaCl2 and FeCl3 respectively, compared with the untreated expansive clay. The increase in strength
with the addition of chemicals may be attributed to the cation exchange of KCl, CaCl2 and FeCl3 between
the mineral layers and to the formation of silicate gel. The reduced improvement in CBR beyond
1% of KCl, CaCl2 and FeCl3 may be due to the absorption of more moisture at higher
chemical contents.



Fig. 3: Variation of plasticity index with addition of percentage of chemical (plasticity index, %, plotted against % chemical for Potassium Chloride, Calcium Chloride and Ferric Chloride).



Fig. 4: Variation of shrinkage limit with addition of percentage of chemical (shrinkage limit, %, plotted against % of chemical for Potassium Chloride, Calcium Chloride and Ferric Chloride).






Fig. 5: Variation of DFS with addition of percentage of chemical (DFS plotted against % chemical for Potassium Chloride, Calcium Chloride and Ferric Chloride).



Fig. 6: Variation of CBR of the stabilized expansive bed with percentage of chemical (CBR, %, plotted against % of chemical for Potassium Chloride, Calcium Chloride and Ferric Chloride).

3.4. Effect of Additives on Shear Strength Properties 

The unconfined compressive strengths of the remoulded samples prepared at MDD and optimum
moisture content with the addition of 0.5%, 1% and 1.5% of the chemicals KCl, CaCl2 and FeCl3 to the
expansive soil are presented in Table 2. The prepared samples are tested after 1 day, 7 days and 14
days. As expected, the unconfined compressive strength increases with time, possibly due to continuing chemical
reaction. It is observed that the unconfined compressive strength of the stabilized expansive soil
increases with increase in the percentage of chemical added to the soil. The unconfined compressive
strength of the stabilized expansive clay is increased by 133%, 171% and 230% when treated with 1%
of KCl, CaCl2 and FeCl3 respectively. The increase in strength with the addition of
chemicals may be attributed to the cation exchange of KCl, CaCl2 and FeCl3 between mineral layers and




due to the formation of silicate gel. The reduction in strength beyond 1% of each of KCl, CaCl2 and FeCl3
may be due to the absorption of more moisture at higher chemical contents.

The undrained shear strength parameters of the remoulded samples prepared at MDD and optimum
moisture content with the addition of 0.5%, 1% and 1.5% of the chemicals KCl, CaCl2 and FeCl3 to the
expansive soil are presented in Table 3. The prepared samples are tested after 1 day, 7 days and 14
days. A significant change in undrained cohesion and a marginal change in the angle of internal friction are
observed with the addition of chemicals to the expansive clay. The increase in the shear strength
parameters with the addition of chemicals may be attributed to cation exchange. The
shear strength parameters increase up to 1% addition of each of the three chemicals; beyond
this percentage a considerable decrease is observed, which may be due to the absorption of more
moisture at higher chemical contents.
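
As a quick arithmetic cross-check (our own calculation from the 14-day values in Table 2 below and the untreated strength of 92 kPa, not an additional measurement): (215 - 92)/92 = 1.34, (250 - 92)/92 = 1.72 and (304 - 92)/92 = 2.30, which reproduce, to within rounding, the 133%, 171% and 230% increases quoted above for 1% KCl, CaCl2 and FeCl3 respectively.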



Table 2: Variation of unconfined compressive strength of stabilized expansive clay

Chemical added      % Chemical     Unconfined Compressive Strength (kPa)
to the soil         added          1 day      7 days     14 days
Without chemical    --             92         --         --
KCl                 0.5            130        175        188
                    1.0            170        185        215
                    1.5            125        160        180
CaCl2               0.5            135        200        215
                    1.0            175        215        250
                    1.5            128        184        207
FeCl3               0.5            140        245        256
                    1.0            181        270        304
                    1.5            132        223        248



Table 3: Variation of shear strength parameters with the addition of chemicals to the expansive clay
(Cu = undrained cohesion in kg/cm2; phi = angle of internal friction in degrees)

Chemical added      % Chemical   1 day                    7 days                   14 days
to the soil         added        Cu        phi            Cu        phi            Cu        phi
Without chemical    --           0.56      2              --        --             --        --
KCl                 0.5          0.61      ?              1.11      5              1.28      7
                    1.0          0.72      5              1.23      4              1.32      4
                    1.5          0.65      6              1.15      4              1.26      4
CaCl2               0.5          0.70      ?              1.21      5              1.30      4
                    1.0          0.78      6              1.32      5              1.38      3
                    1.5          0.77      6              1.27      4              1.34      3
FeCl3               0.5          0.89      6              1.26      4              1.33      3
                    1.0          0.96      4              1.35      3              1.46      3
                    1.5          0.93      3              1.30      4              1.38      3




In this section the results of various tests carried out in the laboratory are discussed. Conclusions will 
be discussed in the next section. 

IV. Conclusions 

The following conclusions can be drawn from the laboratory study carried out in this investigation.
The liquid limit values are decreased by 57%, 63% and 70% respectively for 1% of KCl, CaCl2 and FeCl3
added to the expansive clay, while only a marginal increase in the plastic limit is observed, so a decrease
in plasticity index is recorded with the addition of each chemical. The shrinkage limit increases with
chemical addition; with 1.5% chemical the shrinkage limit of the stabilized expansive clay is increased
from 12% to 15.1%, 15.4% and 16% respectively for KCl, CaCl2 and FeCl3.
The DFS values are decreased by 40%, 43% and 47% for 1% of KCl, CaCl2 and FeCl3 treatments
respectively. The CBR values are increased by 80%, 103% and 116% respectively for 1% of KCl, CaCl2
and FeCl3 treatment. A significant change in undrained cohesion and a marginal change in the angle of
internal friction are observed with the addition of chemicals to the expansive clay. The UCS values are
increased by 133%, 171% and 230% respectively for 1% of KCl, CaCl2 and FeCl3 treatments for a curing
period of 14 days.


Authors Biographies 

P. Venkata Muthyalu is a post-graduate student in the Department of Civil Engineering, University
College of Engineering, JNTUK, Kakinada, India.

K. Ramu is working as an Associate Professor in the Department of Civil Engineering, JNTU
College of Engineering, Kakinada, India. He has guided 15 M.Tech projects and has 20
publications.

G.V.R. Prasada Raju is a Professor of Civil Engineering and Director of Academic Planning,
JNTUK, Kakinada, India. He is guiding 6 PhD scholars and 4 have been awarded the PhD. He has
guided 60 M.Tech projects and has 97 publications.






Designing an Automated System for Plant Leaf 

Recognition 

Jyotismita Chaki 1 and Ranjan Parekh 2 
School of Education Technology, Jadavpur University, Kolkata, India 



Abstract 

This paper proposes an automated system for recognizing plant species based on leaf images. Plant leaf images
corresponding to three plant types are analyzed using three different shape modelling techniques, the first two
based on the Moments-Invariant (M-I) model and the Centroid-Radii (C-R) model and the third based on a
proposed technique of Binary-Superposition (B-S). For the M-I model the first four central normalized moments
have been considered. For the C-R model an edge detector has been used to identify the boundary of the leaf
shape and 36 radii at 10 degree angular separation have been used to build the shape vector. The proposed
approach consists of comparing binary versions of the leaf images through superposition and using the sum of
non-zero pixel values of the resultant as the feature for discrimination. The data set for experimentation consists of 180
images divided into training and testing sets and comparison between them is done using Manhattan, Euclidean
and intersection norms. Accuracies obtained using the proposed technique are seen to be an improvement over
the M-I and C-R based techniques, and comparable to the best figures reported in extant literature.

Keywords: Plant recognition, Moment Invariants, Centroid Radii, Binary Superposition, Computer Vision

I. Introduction 

It is well known that plants play a crucial role in preserving earth's ecology and environment by 
maintaining a healthy atmosphere and providing sustenance and shelter to innumerable insect and 
animal species. Plants are also important for their medicinal properties, as alternative energy sources 
like bio-fuel and for meeting our various domestic requirements like timber, clothing, food and 
cosmetics. Building a plant database for quick and efficient classification and recognition of various 
flora diversities is an important step towards their conservation and preservation. This is more 
important as many types of plants are now at the brink of extinction. In recent times computer vision 
methodologies and pattern recognition techniques have been successfully applied towards automated 
systems of plant cataloguing. From this perspective the current paper proposes the design of a system 
which uses shape recognition techniques to recognize and catalogue plants based on the shape of their 
leaves, extracted from digital images. The organization of the paper is as follows : section 2 discusses 
an overview of related works, section 3 outlines the proposed approach with discussions on feature 
computation and classification schemes, section 4 provides details of the dataset and experimental 
results obtained, and section 5 brings up the overall conclusion and scopes for future research. 

II. Previous Works 

Many methodologies have been proposed to analyze plant leaves in an automated fashion. A large 
percentage of such works utilize shape recognition techniques to model and represent the contour 
shapes of leaves, however additionally, color and texture of leaves have also been taken into 
consideration to improve recognition accuracies. One of the earliest works [1] employs geometrical 
parameters like area, perimeter, maximum length, maximum width, elongation to differentiate 
between four types of rice grains, with accuracies around 95%. Use of statistical discriminant analysis 



149 | 



Vol. 2, Issue 1, pp. 149-158 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

along with color based clustering and neural networks have been used in [2] for classification of a 
flowered plant and a cactus plant. In [3] the authors use the Curvature Scale Space (CSS) technique 
and k-NN classifiers to classify chrysanthemum leaves. Both color and geometrical features have 
been used in [4] to detect weeds in crop fields employing k-NN classifiers. In [5] the authors propose 
a hierarchical technique of representing leaf shapes by first their polygonal approximations and then 
introducing more and more local details in subsequent steps. Fuzzy logic decision making has been 
utilized in [6] to detect weeds in an agricultural field. In [7] the authors propose a two step approach 
of using a shape characterization function called centroid-contour distance curve and the object 
eccentricity for leaf image retrieval. The centroid-contour distance (CCD) curve and eccentricity 
along with an angle code histogram (ACH) have been used in [8] for plant recognition. The 
effectiveness of using fractal dimensions in describing leaf shapes has been explored in [9]. In 
contrast to contour-based methods, region-based shape recognition techniques have been used in [10] 
for leaf image classification. Wang et al. [11] describes a method based on fuzzy integral for leaf 
image retrieval. In [12] the authors used an improved CSS method and applied it to leaf classification 
with self intersection. Elliptic Fourier harmonic functions have been used to recognize leaf shapes in 
[13] along with principal component analysis for selecting the best Fourier coefficients. In [14] the 
authors propose a leaf image retrieval scheme based on leaf venation, represented using points 
selected by the curvature scale scope corner detection method on the venation image and categorized 
by calculating the density of feature points using non-parametric density estimation. In [15] the
authors propose a new classification method based on a hypersphere classifier built on digital
morphological features. In [16] 12 leaf features are extracted and orthogonalized into 5 principal
variables which consist of the input vector to a neural network (NN), trained by 1800 leaves to 
classify 32 kinds of plants. NNs have also been used in [17] to classify plants based on parameters 
like size, radius, perimeter, solidity and eccentricity of the leaf shape. In [18] shape representation is 
done using a new contour descriptor based on the curvature of the leaf contour. Wavelet and fractal 
based features have been used in [19] to model the uneven shapes of leaves. Texture features along 
with shape identifiers have been used in [20] to improve recognition accuracies. Other techniques like 
Zernike moments and Polar Fourier Transform have been proposed in [21] for modeling leaf 
structures. In [22] authors propose a thresholding method and H-maxima transformation based 
method to extract the leaf veins for vein pattern classification. In [23] authors propose an approach for 
combining global shape descriptors with local curvature-based features, for classifying leaf shapes of
nearly 50 tree species. Finally in [24] a combination of all image features viz. color, texture and 
shape, have been used for leaf image retrieval. 

III. Shape Recognition Techniques 

In this section we review two existing methods of shape recognition which have been used for plant
classification, namely Moments-Invariant (M-I) and Centroid-Radii (C-R), and compare them with our
proposed technique with respect to their recognition accuracies.

3.1. Moments Invariant (M-I) Approach: An Overview 

M. K. Hu [25] proposes 7 moment features that can be used to describe shapes which are invariant to
rotation, translation and scaling. For a digital image, the moment of a pixel P(x, y) at location
(x, y) is defined as the product of the pixel value with its coordinate distances, i.e. m = x.y.P(x, y). The
moment of the entire image is the summation of the moments of all its pixels. More generally the
moment of order (p, q) of an image I(x, y) is given by

m_{pq} = \sum_{x}\sum_{y} [x^p y^q I(x, y)]        (1)

Based on the values of p and q the following are defined: 






m_{00} = \sum_x \sum_y [x^0 y^0 I(x,y)] = \sum_x \sum_y [I(x,y)]
m_{10} = \sum_x \sum_y [x^1 y^0 I(x,y)] = \sum_x \sum_y [x I(x,y)]
m_{01} = \sum_x \sum_y [x^0 y^1 I(x,y)] = \sum_x \sum_y [y I(x,y)]
m_{11} = \sum_x \sum_y [x^1 y^1 I(x,y)] = \sum_x \sum_y [xy I(x,y)]
m_{20} = \sum_x \sum_y [x^2 y^0 I(x,y)] = \sum_x \sum_y [x^2 I(x,y)]
m_{02} = \sum_x \sum_y [x^0 y^2 I(x,y)] = \sum_x \sum_y [y^2 I(x,y)]
m_{21} = \sum_x \sum_y [x^2 y^1 I(x,y)] = \sum_x \sum_y [x^2 y I(x,y)]
m_{12} = \sum_x \sum_y [x^1 y^2 I(x,y)] = \sum_x \sum_y [x y^2 I(x,y)]
m_{30} = \sum_x \sum_y [x^3 y^0 I(x,y)] = \sum_x \sum_y [x^3 I(x,y)]
m_{03} = \sum_x \sum_y [x^0 y^3 I(x,y)] = \sum_x \sum_y [y^3 I(x,y)]        (2)



The first four Hu invariant moments which are invariant to rotation are defined as follows: 

\varphi_1 = m_{20} + m_{02}
\varphi_2 = (m_{20} - m_{02})^2 + (2 m_{11})^2
\varphi_3 = (m_{30} - 3 m_{12})^2 + (3 m_{21} - m_{03})^2        (3)
\varphi_4 = (m_{30} + m_{12})^2 + (m_{21} + m_{03})^2

To make the moments invariant to translation the image is shifted such that its centroid coincides with
the origin of the coordinate system. The centroid of the image in terms of the moments is given by:

x_c = m_{10}/m_{00}, \qquad y_c = m_{01}/m_{00}        (4)

Then the central moments are defined as follows:

\mu_{pq} = \sum_x \sum_y [(x - x_c)^p (y - y_c)^q I(x, y)]        (5)

To compute the Hu moments using central moments, the m_{pq} terms in equation (3) are replaced by the
corresponding \mu_{pq} terms. It can be verified that \mu_{00} = m_{00} and \mu_{10} = 0 = \mu_{01}.

To make the moments invariant to scaling, the moments are normalized by dividing by a power of \mu_{00}.
The normalized central moments are defined as:

\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\omega}}, \quad where \quad \omega = 1 + \frac{p+q}{2}        (6)
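
For concreteness, the feature computation of equations (1)-(6) can be sketched in a few lines of Python (a minimal illustrative sketch assuming NumPy and a binary image array; the function names are hypothetical and this is not the authors' implementation):

import numpy as np

def raw_moment(img, p, q):
    # m_pq = sum over x, y of x^p * y^q * I(x, y), as in equation (1)
    y, x = np.indices(img.shape)      # y = row index, x = column index
    return np.sum((x ** p) * (y ** q) * img)

def mi_features(img):
    # First four invariant features, following equations (3)-(6)
    m00 = raw_moment(img, 0, 0)
    xc = raw_moment(img, 1, 0) / m00  # centroid, equation (4)
    yc = raw_moment(img, 0, 1) / m00
    y, x = np.indices(img.shape)
    def mu(p, q):                     # central moment, equation (5)
        return np.sum(((x - xc) ** p) * ((y - yc) ** q) * img)
    def eta(p, q):                    # normalized central moment, equation (6)
        return mu(p, q) / (m00 ** (1 + (p + q) / 2.0))
    M1 = eta(2, 0) + eta(0, 2)
    M2 = (eta(2, 0) - eta(0, 2)) ** 2 + (2 * eta(1, 1)) ** 2
    M3 = (eta(3, 0) - 3 * eta(1, 2)) ** 2 + (3 * eta(2, 1) - eta(0, 3)) ** 2
    M4 = (eta(3, 0) + eta(1, 2)) ** 2 + (eta(2, 1) + eta(0, 3)) ** 2
    return np.array([M1, M2, M3, M4])

A test image can then be assigned to the class whose training images have the closest feature values in the Manhattan (L1) sense, as done in section 4.2.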
3.2. Centroid-Radii (C-R) Approach: An Overview 






In [26] K. L. Tan et al. proposes the centroid-radii model for estimating shapes of objects in images. 
A shape is represented by an area of black on a white background. Each pixel is represented by its 
color (black or white) and its x-y co-ordinates on the canvas. The boundary of a shape consists of a 
series of boundary points. A boundary point is a black pixel with at least one white pixel as its 


neighbor. Let (x_i, y_i), i = 1, ..., n represent the shape having n boundary points. The centroid is
located at the position C(X_c, Y_c), whose coordinates are, respectively, the average of the x and y co-ordinates
of all black pixels:

X_c = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad Y_c = \frac{1}{n}\sum_{i=1}^{n} y_i        (7)

A radius is a straight line joining the centroid to a boundary point. In the centroid-radii model, lengths
of a shape's radii from its centroid to the boundary are captured at regular intervals as the shape's
descriptor using the Euclidean distance. More formally, let \theta be the measure of the angle (in degrees)
between radii (Figure 1). Then, the number of angular intervals is given by k = 360/\theta. The length L_i of
the i-th radius, formed by joining the centroid C(X_c, Y_c) to the i-th sample point s_i(x_i, y_i), is given
by:

L_i = \sqrt{(X_c - x_i)^2 + (Y_c - y_i)^2}        (8)

All radii lengths are normalized by dividing with the longest radius length from the set of radii lengths
extracted. Let the individual radii lengths be L_1, L_2, L_3, ..., L_k, where k is the total number of radii drawn at
an angular separation of \theta. If the maximum radius length is L_max then the normalized radii lengths are
given by:

\ell_i = \frac{L_i}{L_{max}}, \quad i = 1, ..., k        (9)

Furthermore, without loss of generality, suppose that the intervals are taken clockwise starting from
the x-axis direction (0°). Then, the shape descriptor can be represented as a vector consisting of an
ordered sequence of normalized radii lengths:

(\ell_1, \ell_2, ..., \ell_k)        (10)

With a sufficient number of radii, dissimilar shapes can be differentiated from each other.



Figure 1. Centroid-radii approach 

3.3. Proposed Approach: Binary Superposition (B-S) 

The proposed approach is conceptually simpler than either of the above two techniques but provides 
comparatively better recognition accuracies. The leaf images are converted to binary images by 
thresholding with an appropriate value. Two binary shape images S_1 and S_2 are superimposed on each
other and a resultant R is computed using a logical AND operation between them:

R = S_1 \cap S_2        (11)


For the binary resultant image, all the non-zero pixel values are summed up. This sum is used as the 
feature value for discrimination. A large value of the sum would indicate high similarity between the 
images while a low sum value indicates low similarity. A test image is compared to all the training 
samples of each class and the mean resultant for each class is computed. The test image is classified to 
the class for which the mean resultant is maximum. Figure 2 illustrates that when two images of 
different classes are superimposed then the resultant image contains less non-zero pixels than for 
images belonging to the same class. 
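
A minimal sketch of this superposition-and-sum classification is given below (illustrative Python only, assuming the images are equal-sized binary NumPy arrays; the function name classify_bs is hypothetical and this is not the authors' code):

import numpy as np

def classify_bs(test_img, training_sets):
    # training_sets: dict mapping class label -> list of binary images (0/1 arrays).
    # Returns the label whose training samples give the largest mean superposition sum.
    best_label, best_score = None, -1.0
    for label, samples in training_sets.items():
        # R = test AND sample (equation 11); the feature is the count of non-zero pixels in R
        sums = [np.count_nonzero(np.logical_and(test_img, s)) for s in samples]
        score = float(np.mean(sums))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

Because the comparison is a pixel-wise AND followed by a count, it involves only binary values, which is the low-complexity property highlighted in the conclusions.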





(Panels: an image of Class C superimposed with an image of Class B, and an image of Class C superimposed with another image of Class C, each with its resultant superposition image.)



Figure 2. Resultant images after binary superposition 

IV. Experimentations and Results 

4.1. Dataset 

Experimentations are performed by using 180 leaf images from the Plantscan database [27]. The 
dataset is divided into 3 classes: A (Arbutus unedo), B (Betula pendula Roth), C (Pittosporum_tobira) 
each consisting of 60 images. Each image is 350 by 350 pixels in dimensions and in JPEG format. A 
total of 120 images are used as the Training set (T) and the remaining 120 images as the Testing set 
(S). The legends used in this work are as follows: AT, BT, CT denote the training sets while AS, BS,
CS denote the testing sets, corresponding to the three classes. Sample images of each class are shown
below in Figure 3.






Figure 3. Samples of leaf images belonging to the three classes 




4.2. M-I based computations 

The first four moments M1 to M4 of each image of the training and testing sets are computed as per
equation (6). Feature values are first considered individually and the corresponding results are tabulated
below. Comparisons between training and testing sets are done using Manhattan distances (L1).
Results are summarized below in Table 1. The last column depicts the overall percentage accuracy
value.

Table 1: Percentage accuracy using the M-I approach

Feature     A       B       C       Overall
M1          96      100     47      81
M2          60      37      90      62
M3          53      100     37      63
M4          77      40      70      62



Best results of 81% are obtained using M1. Corresponding plots depicting the variation of the M1
feature values for the three classes over the training and testing datasets are shown below in Figure 4.




Figure 4. Variation of M1 for Training and Testing set images (upper panel: training set; lower panel: testing set).

4.3. C-R based computations 

Each image is converted to binary form and the Canny edge detector is used to identify its contour. Its
centroid is computed from the average of its edge pixels. Corresponding to each edge pixel, the angle
it subtends at the centroid is calculated and stored in an array along with its x- and y- coordinate
values. From the array, the coordinates of the 36 edge pixels which join the centroid at 10 degree
intervals from 0 to 359 degrees are identified. The lengths of the radii joining these 36 points to the
centroid are calculated using the Euclidean distance and the radii lengths are normalized to the range
[0, 1]. For each leaf image, 36 such normalized lengths are stored in an ordered sequence as per
equation (10). Figure 5 shows a visual representation of a leaf image, the edge detected version, the
location of the centroid and edge pixels, and the normalized radii vector.
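
A minimal Python sketch of this radii-extraction step is shown below (illustrative only; it assumes an edge map has already been produced, e.g. by a Canny detector, and the function name cr_vector is hypothetical):

import numpy as np

def cr_vector(edge_img, num_radii=36):
    # edge_img: binary edge map (non-zero on the contour).
    # Returns the normalized centroid-radii descriptor of equations (7)-(10).
    ys, xs = np.nonzero(edge_img)                 # coordinates of edge pixels
    xc, yc = xs.mean(), ys.mean()                 # centroid, equation (7)
    angles = np.degrees(np.arctan2(ys - yc, xs - xc)) % 360
    radii = np.zeros(num_radii)
    step = 360.0 / num_radii                      # 10 degrees for 36 radii
    for i in range(num_radii):
        # pick the edge pixel whose angle is closest to the i-th sampling direction
        idx = np.argmin(np.abs(angles - i * step))
        radii[i] = np.hypot(xc - xs[idx], yc - ys[idx])   # equation (8)
    return radii / radii.max()                    # normalization, equation (9)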






Figure 5. Interface for Centroid-Radii computation (panels: original image, edge detected image, edge pixels and centroid, normalized radii vector).



The average of the 36 radii lengths for each image of each class, for both the training and testing sets,
is plotted in Figure 6 to depict the overall feature range and variation for each class.



Figure 6. Variation of mean radii length for Training and Testing set images (panels: training set, testing set).

Classes are discriminated using the Euclidean distance (L2) metric between the 36-element C-R vectors of
training and testing samples. Results are summarized below in Table 2.

Table 2: Percentage accuracy using the C-R approach

Feature     A       B       C       Overall
C-R         97      100     97      98




4.4. B-S based computations 

Each image is converted to binary form by using a 50% threshold. A binarized test image is
multiplied with each of the binarized training samples of each class as per equation (11) and the sum
of the '1's in the resultant image is used as the feature value for discrimination. Figure 7 shows the
variation of feature values for the three classes. The upper plot is obtained by binary superposition of
Class-A testing images with all training samples, the middle plot by binary superposition
of Class-B testing images with all training samples, and the lower plot by binary
superposition of Class-C testing images with all training samples.



Figure 7. Variation of feature value for Testing set images using the B-S approach (each panel plots the superposition sums of the testing images of one class against the AT, BT and CT training sets).

Classes are discriminated by determining the maximum values of the resultant B-S matrices computed 
by superimposing the training and testing samples. Results are summarized below in Table 3. 

Table 3: Percentage accuracy using the B-S approach

Feature     A       B       C       Overall
B-S         100     100     97      99



V. Analysis 

Automated discrimination between three leaf shapes was done using a variety of approaches to find
the optimum results. The study reveals that the M-I approach provides its best results with the M1 feature.
Accuracies based on the C-R method using a 36-element radii vector are better than those of the
individual M-I features. The proposed approach of binary superposition improves upon the results
provided by both the M-I and C-R approaches. Best results obtained using the different methods are
summarized below in Table 4.

Table 4: Summary of accuracy results

Method      A       B       C       Overall
M-I         96      100     47      81
C-R         97      100     97      98
B-S         100     100     97      99




To put the above results in perspective with the state of the art, the best result reported in [8] is a
recall rate of 60% for discrimination of chrysanthemum leaves from a database of 1400 color images.
Accuracy of classification for 10 leaf categories over 600 images is reported to be 82.33% in [10].
The overall classification accuracy reported in [11] for 4 categories of leaf images obtained during three
weeks of germination is around 90%. Accuracy reported in [13] for classification of 32 leaf types
from a collection of 1800 images is around 90%. An overall classification accuracy of 80% is reported in [14]
for identifying two types of leaf shapes from images taken using different frequency bands of the
spectrum. Best accuracies reported in [17] are around 93% using Polar Fourier Transforms. Results
reported in [20] are in the region of 80% for classifying 50 species. Accuracies of around 97% have
been reported in [21] for a database of 500 images. It can therefore be said that the accuracies reported
in the current paper are comparable to the best results reported in extant literature. It may however be
noted that in many of the above cases color and geometrical parameters have also been combined with
shape-based features to improve results, while the current work is based solely on shape
characteristics.

VI. Conclusions and Future Scopes 

This paper proposes an automated system for plant identification using shape features of their leaves. 
Two shape modelling approaches are discussed: one technique based on the M-I model and the other on the
C-R model, and these are compared with a proposed approach based on binary superposition. The
feature plots and recognition accuracies for each of the approaches are studied and reported. Such 
automated classification systems can prove extremely useful for quick and efficient classification of 
plant species. The accuracy of the current proposed approach is comparable to those reported in 
contemporary works. A salient feature of the proposed approach is the low-complexity data modelling 
scheme used whereby the computations only involve binary values. 

Future work would involve research along two directions: (1) combining other shape based techniques 
like Hough transform and Fourier descriptors, and (2) combining color and texture features along with 
shape features for improving recognition accuracies. 

References 

[1] N. Sakai, S. Yonekawa, & A. Matsuzaki (1996), "Two-dimensional image analysis of the shape of rice
and its applications to separating varieties", Journal of Food Engineering, vol. 27, pp. 397-407.

[2] A. J. M. Timmermans, & A. A. Hulzebosch (1996), "Computer vision system for on-line sorting of pot

plants using an artificial neural network classifier", Computers and Electronics in Agriculture, vol. 15, 
pp. 41-55. 

[3] S. Abbasi, F. Mokhtarian, & J. Kittler (1997), "Reliable classification of chrysanthemum leaves 

through curvature scale space", Lecture Notes in Computer Science, vol. 1252, pp. 284-295. 

[4] A. J. Perez, F. Lopez, J. V. Benlloch, & S. Christensen (2000), "Color and shape analysis techniques 

for weed detection in cereal fields", Computers and Electronics in Agriculture, vol. 25, pp. 197-212. 

[5] C. Im, H. Nishida, & T. L. Kunii (1998), "A hierarchical method of recognizing plant species by leaf 

shapes", IAPR Workshop on Machine Vision Applications, pp. 158-161. 

[6] C-C Yang, S. O. Prasher, J- A Landry, J. Perret, and H. S. Ramaswamy (2000), "Recognition of weeds 

with image processing & their use with fuzzy logic for precision farming", Canadian Agricultural 
Engineering, vol. 42, no. 4, pp. 195-200.

[7] Z. Wang, Z. Chi, D. Feng, & Q. Wang (2000), "Leaf image retrieval with shape feature", International 

Conference on Advances in Visual Information Systems (ACVIS), pp. 477-487. 

[8] Z. Wang, Z. Chi, & D. Feng (2003), "Shape based leaf image retrieval", IEEE Proceedings on Vision, 

Image and Signal Processing (VISP), vol. 150, no. 1, pp. 34-43.

[9] J. J. Camarero, S. Siso, & E.G-Pelegrin (2003), "Fractal dimension does not adequately describe the 

complexity of leaf margin in seedlings of Quercus species", Anales del Jardín Botánico de Madrid,
vol. 60, no. 1, pp. 63-71. 

[10] C-L Lee, & S-Y Chen (2003), "Classification of leaf images", 16th IPPR Conference on Computer

Vision, Graphics and Image Processing (CVGIP), pp. 355-362. 

[11] Z. Wang, Z. Chi, & D. Feng (2002), "Fuzzy integral for leaf image retrieval", IEEE Int. Conf. on Fuzzy
Systems, pp. 372-377.

[12] F. Mokhtarian, & S. Abbasi (2004), "Matching shapes with self-intersection: application to leaf 

classification", IEEE Trans, on Image Processing, vol. 13, pp. 653-661. 




[13] J. C. Neto, G. E. Meyer, D. D. Jones, & A. K. Samal (2006), "Plant species identification using elliptic 

Fourier leaf shape analysis", Computers and Electronics in Agriculture, vol. 50, pp. 121-134. 

[14] J-K Park, E-J Hwang, & Y. Nam (2006), "A venation-based leaf image classification scheme",

Alliance of Information and Referral Systems, pp. 416-428. 

[15] J. Du, X. Wang, & G. Zhang (2007), "Leaf shape based plant species recognition", Applied 

Mathematics and Computation, vol. 185, pp. 883-893. 

[16] S. G. Wu, F. S. Bao, E. Y. Xu, Y-X Wang, Y-F Chang, & Q-L Xiang (2007), "A leaf recognition 

algorithm for plant classification using probabilistic neural network", The Computing Research 
Repository (CoRR), vol.1, pp. 11-16. 

[17] J. Pan, & Y. He (2008), "Recognition of plants by leaves digital image and neural network",

International Conference on Computer Science and Software Engineering, vol. 4, pp. 906-910.

[18] C. Caballero, & M. C. Aranda (2010), "Plant species identification using leaf image retrieval", ACM 

Int. Conf on Image and Video Retrieval (CIVR), pp. 327-334. 

[19] Q-P Wang, J-X Du, & C-M Zhai (2010), "Recognition of leaf image based on ring projection wavelet 

fractal feature", International Journal of Innovative Computing, Information and Control, pp. 240-246. 

[20] T. Beghin, J. S. Cope, P. Remagnino, & S. Barman (2010), "Shape and texture based plant leaf 

classification", International Conference on Advanced Concepts for Intelligent Vision Systems 
(ACVIS), pp. 345-353. 

[21] A. Kadir, L.E. Nugroho, A. Susanto, & P.I. Santosa (2011), "A comparative experiment of several 

shape methods in recognizing plants", International Journal of Computer Science & Information 
Technology (IJCSIT), vol. 3, no. 3, pp. 256-263 

[22] N. Valliammal, & S. N. Geethalakshmi (2011), "Hybrid image segmentation algorithm for leaf 

recognition and characterization", International Conference on Process Automation, Control and 
Computing (PACC), pp. 1-6. 

[23] G. Cerutti, L. Tougne, J. Mille, A. Vacavant, & D. Coquin (2011), "Guiding active contours for tree 

leaf segmentation and identification", Cross-Language Evaluation Forum (CLEF), Amsterdam, 
Netherlands. 

[24] B. S. Bama, S. M. Valli, S. Raju, & V. A. Kumar (2011), "Content based leaf image retrieval using

shape, color and texture features", Indian Journal of Computer Science and Engineering, vol. 2, no. 2, 
pp. 202-211. 

[25] M-K Hu (1962), "Visual pattern recognition by moment invariants", IRE Transactions on Information 

Theory, pp. 179-187. 

[26] K-L Tan, B. C. Ooi, & L. F. Thiang (2003), "Retrieving similar shapes effectively and efficiently", 

Multimedia Tools and Applications, vol. 19, pp. 111-134 

[27] Index of /PL@ntNet/plantscan_v2 (http://imediaftp.inria.fr:50012/PL@ntNet/plantscan_v2)



Authors 



Jyotismita Chaki is a Masters (M.Tech.) research scholar at the School of 
Education Technology, Jadavpur University, Kolkata, India. Her research interests 
include image processing and pattern recognition. 



Ranjan Parekh is a faculty at the School of Education Technology, Jadavpur 
University, Kolkata, India. He is involved with teaching subjects related to 
multimedia technologies at the post-graduate level. His research interests include 
multimedia databases, pattern recognition, medical imaging and computer vision. 
He is the author of the book "Principles of Multimedia" published by McGraw- 
Hill, 2006. 







Fuzzy Control of Squirrel Cage Induction 
Machine Wind Generation System 

B. Ravichandra Rao and R. Amala Lolly 
Department of EEE Engineering, GNITS, Hyderabad, India 



Abstract 

Artificial intelligence techniques, such as fuzzy logic, neural network and genetic algorithm are recently 
showing a lot of promise in the application of power electronic systems. This paper describes the control
strategy development and design of a fuzzy logic based variable speed wind generation system. In this work
cage type induction generator and double-sided PWM converters are used. The fuzzy logic based control of the 
system helps to optimize the efficiency and enhance the performance. The generation system uses three fuzzy 
logic controllers. The first fuzzy controller tracks the generator speed with the wind velocity to extract maximum 
power. The second fuzzy logic controller programs machine flux for light load efficiency improvement. The third 
fuzzy logic controller provides robust speed control against wind vortex and turbine oscillatory torque. The 
complete control system has been developed, analyzed, and simulated in Matlab. 

Keywords: Induction Generator, Fuzzy Logic Controller and Wind Generation System.

I. Introduction 

GRID-connected wind electricity generation is showing the highest rate of growth of any form of 

electricity generation, achieving global annual growth rates in the order of 20 - 25%. Wind power is 

increasingly being viewed as a mainstream electricity supply technology. Its attraction as an 

electricity supply source has fostered ambitious targets for wind power in many countries around the 

world. 

Wind power penetration levels have increased in electricity supply systems in a few countries in 

recent years; so have concerns about how to incorporate this significant amount of intermittent, 

uncontrolled and non-dispatchable generation without disrupting the finely-tuned balance that 

network systems demand. 

Grid integration issues are a challenge to the expansion of wind power in some countries. Measures
such as aggregation of wind turbines, load and wind forecasting, and simulation studies are expected
to facilitate larger grid penetration of wind power. In this project, simulation studies are carried out on a
grid-connected wind electric generator (WEG) employing a Squirrel Cage Induction Generator (SCIG) [2].

Fuzzy logic is a powerful and versatile tool for representing imprecise, ambiguous and vague
information. It also helps to model difficult, even intractable, problems. The system uses three
fuzzy controllers: fuzzy programming of generator speed, fuzzy programming of generator
flux, and fuzzy control of the generator speed loop.

II. Wind - Generation System Description 

2.1 Converter System 

The AC/DC/AC converter is divided into two components: the rotor-side converter (C rotor ) and the 
grid-side converter (C grid ). C rotor and C grid are Voltage-Sourced Converters that use forced-commutated 




power electronic devices (IGBTs) to synthesize an AC voltage from a DC voltage source. A capacitor 
connected on the DC side acts as the DC voltage source. A coupling inductor L is used to connect 
C grid to the grid. The three-phase rotor winding is connected to C rotor by slip rings and brushes and the 
three-phase stator winding is directly connected to the grid. The power captured by the wind turbine is 
converted into electrical power by the induction generator and it is transmitted to the grid by the stator 
and the rotor windings. The control system generates the pitch angle command and the voltage 
command signals V r and V gc for C rotor and C grid respectively in order to control the power of the wind 
turbine, the DC bus voltage and the reactive power or the voltage at the grid terminals. 



Fig. 1: The Wind Turbine Doubly-Fed Induction Generator (blocks: wind turbine, drive train, induction generator with stator connection, three-phase grid, pitch angle control).

Lastly the generation system feeds power to a utility grid. Some of its salient features are as follows: 

• Line side power factor is unity with no harmonic current injection. 

• The cage type induction machine is extremely rugged, reliable, economical, and universally 
popular. 

• Machine current is sinusoidal and no harmonic copper loss. 

• Rectifier can generate programmable excitation for the machine. 

• Continuous power generation from zero to highest turbine speed is possible. 

• Power can flow in either direction permitting the generator to run as a motor for start-up. 

• Autonomous operation is possible either with the help of start up capacitor or dc link battery. 

• Extremely fast transient is also possible. 

The mechanical power and the stator electric power output are computed as follows:

P_m = T_m w_r        (1)
P_s = T_em w_s        (2)

For a lossless generator the mechanical equation is:

J \frac{dw_r}{dt} = T_m - T_em        (3)

In steady-state at fixed speed for a lossless generator, T_m = T_em and P_m = P_s + P_r.
It follows that:

P_r = P_m - P_s = T_m w_r - T_em w_s = -T_m \frac{w_s - w_r}{w_s} w_s = -s T_m w_s = -s P_s        (4)

where s is defined as the slip of the generator:

s = (w_s - w_r) / w_s        (5)







Fig. 2: The Power Flow diagram (stator and rotor power flow between the induction generator and the three-phase grid).



Generally the absolute value of slip is much lower than 1 and, consequently, P r is only a fraction of P s . 
Since T m is positive for power generation and since co s is positive and constant for a constant 
frequency grid voltage, the sign of P r is a function of the slip sign. P r is positive for negative slip 
(speed greater than synchronous speed) and it is negative for positive slip (speed lower than 
synchronous speed). For super-synchronous speed operation, P r is transmitted to DC bus capacitor and 
tends to raise the DC voltage. The design and performance evaluation of a variable speed wind
generation system are discussed in [3]. For sub-synchronous speed operation, P r is taken out of the DC bus capacitor
and tends to decrease the DC voltage. C grid is used to generate or absorb the power P gc in order to keep 
the DC voltage constant. In steady-state for a lossless AC/DC/AC converter P gc is equal to P r and the 
speed of the wind turbine is determined by the power P r absorbed or generated by C rotor . 
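
As a simple numerical illustration of equation (4) (our own example, not taken from the paper): if the machine runs 20% above synchronous speed the slip is s = -0.2, so P_r = -s P_s = 0.2 P_s, i.e. roughly one fifth of the stator power flows out of the rotor circuit into the DC bus; at 20% below synchronous speed (s = +0.2) the same fraction is instead drawn from the DC bus.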

2.2 Indirect Vector Control: 

Figure 3 explains the fundamental principle of indirect vector control with the help of a
phasor diagram. The d^s-q^s axes are fixed on the stator, but the d^r-q^r axes, which are fixed on
the rotor, are moving at speed w_r as shown. Synchronously rotating axes d^e-q^e [4] are rotating
ahead of the d^r-q^r axes by the positive slip angle \theta_sl corresponding to the slip frequency w_sl.
Since the rotor pole is directed along the d^e axis and

w_e = w_r + w_sl        (6)

we can write

\theta_e = \int w_e \, dt = \int (w_r + w_sl) \, dt = \theta_r + \theta_sl        (7)

The rotor pole position is not absolute, but is slipping with respect to the rotor at frequency
w_sl. The phasor diagram suggests that, for decoupling control, the stator flux component of current
i_ds should be aligned on the d^e axis and the torque component of current i_qs should be on the
q^e axis, as shown.

For decoupling control, we can now derive the control equations of indirect vector
control with the help of the d^e-q^e equivalent circuits. The rotor circuit equations can be written as

\frac{d\psi_dr}{dt} + R_r i_dr - (w_e - w_r) \psi_qr = 0        (8)

\frac{d\psi_qr}{dt} + R_r i_qr + (w_e - w_r) \psi_dr = 0        (9)

The rotor flux linkage expressions can be given as

\psi_dr = L_r i_dr + L_m i_ds        (10)
\psi_qr = L_r i_qr + L_m i_qs        (11)

From the above equations, we can write i_dr and i_qr as

i_dr = \frac{1}{L_r} \psi_dr - \frac{L_m}{L_r} i_ds, \qquad i_qr = \frac{1}{L_r} \psi_qr - \frac{L_m}{L_r} i_qs        (12)




Fig 3: phasor diagram of indirect vector control 

The rotor currents in the above equations, which are inaccessible, can be eliminated with the help of
the expressions for i_dr and i_qr:

\frac{d\psi_dr}{dt} + \frac{R_r}{L_r} \psi_dr - \frac{L_m}{L_r} R_r i_ds - w_sl \psi_qr = 0        (13)

\frac{d\psi_qr}{dt} + \frac{R_r}{L_r} \psi_qr - \frac{L_m}{L_r} R_r i_qs + w_sl \psi_dr = 0        (14)

where w_sl = w_e - w_r has been substituted. For decoupling control, it is desirable that

\psi_qr = 0, that is, \frac{d\psi_qr}{dt} = 0        (15)

so that the total rotor flux \psi_r is directed along the d^e axis. Substituting this condition in the
above equations gives the slip frequency relation

w_sl = \frac{L_m R_r}{\psi_r L_r} i_qs        (16)

The frequency signal can be estimated as follows:

cos \theta_e = \psi_ds^s / \psi_s \quad and \quad sin \theta_e = \psi_qs^s / \psi_s        (17)
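
A small Python sketch of the unit-vector estimation in equation (17) is shown below (illustrative only; psi_ds_s and psi_qs_s denote the stationary-frame stator flux components and psi_s is taken as the flux magnitude, naming choices made here for clarity):

import math

def unit_vectors(psi_ds_s, psi_qs_s):
    # cos(theta_e) and sin(theta_e) from the flux components, as in equation (17)
    psi_s = math.hypot(psi_ds_s, psi_qs_s)
    return psi_ds_s / psi_s, psi_qs_s / psi_s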

III. Power Circuit and Control Strategy 

The turbine at the left (a vertical type) is coupled to the cage-type induction generator through a
speed-up gear ratio. The variable frequency, variable voltage power generated by the machine is
rectified to dc by a PWM voltage-fed rectifier that also supplies the excitation current (lagging) to the
machine.







Fig. 3: Control Block Circuit (machine-side synchronous current control and vector rotator, SPWM modulation signals, line-side synchronous current control with decoupler and vector rotator, 220 V, 50 Hz grid).

The dc link power is inverted to 230 V, 50 Hz ac through a PWM inverter and fed to the utility grid.
The line current is sinusoidal at unity power factor, as indicated. The generator speed w_r is controlled
by indirect vector control, with a torque control loop for stiffness and synchronous current control in
the inner loops. The output power P_o is controlled to regulate the dc link voltage V_d as shown in Fig. 3.
Because an increase in P_o causes a decrease in V_d, the voltage loop error polarity has been inverted.
The insertion of the line filter inductance L_s creates some coupling effect, which is eliminated by a
decoupler in the synchronous current control loops. The power can be controlled to flow easily in
either direction. The vertical turbine is started with a motoring torque. As the speed develops, the
machine goes into generating mode, and the system is shut down by regenerative braking.
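
The dc-link voltage loop described above can be pictured with a minimal proportional-integral sketch (our own illustrative Python with hypothetical names and gains, not the authors' controller; note the inverted error polarity, since a rise in output power pulls V_d down):

def power_command(v_d_ref, v_d_meas, state, kp=0.5, ki=50.0, dt=1e-4):
    # One step of the voltage-loop PI regulator that sets the output power command.
    # Error polarity is inverted: V_d above its reference asks for more exported power.
    error = v_d_meas - v_d_ref
    state['integral'] = state.get('integral', 0.0) + error * dt
    return kp * error + ki * state['integral']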

3.1 Generator Speed Tracking Control (FLC-1) 

With an increment (or decrement) of speed, the corresponding increment (or decrement) of output
power P_o is estimated. If ΔP_o is positive with the last Δw_r positive, indicated in the figure in
per-unit value by LΔw_r(pu), the search is continued in the same direction.



[Figure: FLC-1 block diagram with scale-factor computation (KPO, KWR) feeding fuzzy inference and defuzzification to produce Δw_r]

Fig 4: Block Diagram of FLC-1




If, on the other hand, +Δw_r causes -ΔP_o, the direction of search is reversed. The variables ΔP_o, Δw_r and LΔw_r are described by membership functions and a rule table. In the implementation of fuzzy control, the input variables are fuzzified, the valid control rules are evaluated and combined, and finally the output is defuzzified to convert it to a crisp value. The wind vortex and torque ripple can cause the search to be trapped in a minimum which is not global, so the output Δw_r is added with some amount of LΔw_r in order to give momentum to the search and to avoid such local minima. The controller operates on a per-unit basis so that the response is insensitive to system variables and the algorithm is universal to any system. The membership functions of the fuzzy logic controllers are explained in [4]. The scale factors KPO and KWR, as shown in Fig. 4, are generated as functions of generator speed so that the control becomes somewhat insensitive to speed variation. The scale factor expressions are given, respectively, as

KPO = a_1 w_r
KWR = a_2 w_r

where a_1 and a_2 are constant coefficients derived from simulation studies. These coefficients convert the speed and power into per-unit values. The advantages of fuzzy control are obvious: it provides an adaptive step size in the search that leads to fast convergence, and the controller can accept inaccurate and noisy signals. The FLC-1 [5] operation does not need any wind velocity information, and its real-time search is insensitive to system parameter variation.
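To make the search logic concrete, the following Python sketch is a simplified, crisp approximation of one FLC-1 step. The real controller uses fuzzy membership functions and a rule table [4]; the coefficients a1, a2 and the momentum factor here are illustrative assumptions, not values from the paper.

    def flc1_step(delta_po, last_delta_wr, w_r, a1=1.0, a2=1.0, momentum=0.1):
        # Scale factors generated as functions of generator speed (KPO = a1*w_r, KWR = a2*w_r).
        kpo, kwr = a1 * w_r, a2 * w_r
        dpo_pu = delta_po / kpo          # per-unit power increment
        lwr_pu = last_delta_wr / kwr     # last per-unit speed increment

        # Continue in the same direction if the last speed change increased the power,
        # otherwise reverse the direction of search.
        direction = 1.0 if (dpo_pu >= 0) == (lwr_pu >= 0) else -1.0
        step_pu = direction * min(abs(dpo_pu), 0.1)      # adaptive, bounded step size

        # Add a fraction of the last step as momentum so wind vortex and torque ripple
        # do not trap the search in a local (non-global) maximum.
        return step_pu * kwr + momentum * last_delta_wr

    # Example call with arbitrary illustrative numbers
    print(flc1_step(delta_po=50.0, last_delta_wr=2.0, w_r=150.0))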

3.2 Generator Flux Programming Control (FLC-2) 

Since most of the time the generator is running at light load, the machine rotor flux can be reduced from the rated value to reduce the core loss and thereby increase the machine-converter system efficiency. The principle of online-search-based flux programming control by the second fuzzy controller FLC-2 is explained in Fig. 5. Reducing the flux causes the torque current i_qs to be increased by the speed loop for the same developed torque. As the flux is decreased, the machine iron loss decreases with an attendant increase of copper loss. However, the total system (converter and machine) loss decreases, resulting in an increase of the total generated power P_o.



[Figure: FLC-2 block diagram with scaling-factor computation (KP, KIDS), sampled output power P_o(k) and its increment ΔP_o(k), and fuzzy inference and defuzzification producing Δi_ds]

Fig 5: Block diagram of FLC-2

The principle of fuzzy controller FLC-2 is somewhat similar to that of FLC-1. The system output power P_o(k) is sampled and compared with the previous value to determine the increment ΔP_o. In addition, the last excitation current decrement (LΔi_ds) is reviewed. On these bases, the decrement step of i_ds is generated from fuzzy rules through fuzzy inference and defuzzification, as indicated. It is necessary to process the inputs of FLC-2 in per-unit values. Therefore, the adjustable gains KP and KIDS convert the actual variables into per-unit variables with the following expressions:

KP = a w_r + b
KIDS = c_1 w_r - c_2 i_qs + c_3




where a, b, c_1, c_2 and c_3 are derived from simulation studies. The current i_qs is proportional to the generator torque, and Δw_r is zero because the fuzzy controller FLC-2 is exercised only at steady-state conditions. The FLC-2 controller operation starts when FLC-1 has completed its search at the rated flux condition. If the wind velocity changes during or at the end of FLC-2 operation, its operation is abandoned, the rated flux is re-established, and FLC-1 control is activated.
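A crisp sketch of one FLC-2 flux-programming step is shown below, analogous to the FLC-1 sketch above. The fuzzy inference and defuzzification stages are replaced by a bounded step, and the coefficients a, b, c1, c2, c3 stand in for the simulation-derived values that the paper does not list.

    import math

    def flc2_step(p_o, p_o_prev, last_delta_ids, w_r, i_qs,
                  a=1.0, b=0.0, c1=1.0, c2=0.5, c3=0.0, max_step_pu=0.05):
        kp = a * w_r + b                    # KP   = a*w_r + b
        kids = c1 * w_r - c2 * i_qs + c3    # KIDS = c1*w_r - c2*i_qs + c3

        dpo_pu = (p_o - p_o_prev) / kp      # per-unit increment of generated power
        step_pu = min(abs(dpo_pu), max_step_pu)

        # Keep stepping i_ds in the last direction while the total power keeps rising;
        # reverse the direction when the last step reduced the output power.
        sign = math.copysign(1.0, dpo_pu * last_delta_ids)
        return sign * step_pu * kids

    # Example: the last small decrement of i_ds increased the power, so decrement again
    print(flc2_step(p_o=3400.0, p_o_prev=3380.0, last_delta_ids=-0.2, w_r=150.0, i_qs=5.0))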

3.3 Closed-Loop Generator Speed Control (FLC-3) 

The speed loop control is provided by fuzzy controller FLC-3, as indicated in Fig. 6. As mentioned
before, it basically provides robust speed control against wind vortex and turbine oscillatory torques. 
The disturbance torque on the machine shaft is inversely modulated with the developed torque to 
attenuate the modulation of output power and prevent any possible mechanical resonance effect. In 
addition, the speed control loop provides a deadbeat type response when an increment of speed is 
commanded by FLC-1. 




Fig 6: Block Diagram of FLC - 3 

The speed loop error (Ew_r) and error change (ΔEw_r) signals are converted to per-unit signals, processed through fuzzy control, and then summed to produce the generator torque component of current. Note that, while fuzzy controllers FLC-1 and FLC-2 operate in sequence at steady (or small-turbulence) wind velocity, FLC-3 is always active during system operation.
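The incremental form of this speed loop can be sketched as below; a crisp gain combination stands in for the fuzzy processing of Ew_r and ΔEw_r, and the gains ke, kde and kout are illustrative assumptions.

    def flc3_step(speed_ref, speed, prev_error, i_qs_prev, ke=1.0, kde=1.0, kout=0.1):
        e = speed_ref - speed           # speed loop error Ew_r
        de = e - prev_error             # error change dEw_r
        du = ke * e + kde * de          # per-unit control increment (the fuzzy stage in the paper)
        i_qs = i_qs_prev + kout * du    # summed into the torque component of current
        return i_qs, e

    # Example: track a speed command of 160 rad/s from 150 rad/s
    print(flc3_step(speed_ref=160.0, speed=150.0, prev_error=8.0, i_qs_prev=5.0))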

IV. Simulation Results 

The wind generation system is simulated to validate all the control strategies and to evaluate the performance of the system.

The machine and turbine parameters are given as:

Machine parameters: 3-phase, 7 hp, 230/450 V, 7.6 A, 4 poles, 1500 rpm, R_1 = 0.25 Ω, R_r = 0.335 Ω

Turbine parameters: 3.5 kW, tower = 99.95 m, 11.1-22.2 rpm, η_gear = 5.2, A = 0.015, B = 0.03, C = 0.015

[Figure 7 panels: generator speed vs. time, output power vs. time, flux current vs. time, wind velocity vs. time]

Fig 7. Simulation Results

Simulation of the wind generation system is performed in MATLAB and the results are presented in Fig. 7. Generator speed, output power, flux current and wind velocity are plotted with respect to time.

V. Conclusion 

The fuzzy logic based variable-speed cage machine wind generation system has been analyzed. The system performance has been studied with MATLAB simulation to validate all the theoretical concepts. There are three fuzzy logic controllers in the generation system:

• The first fuzzy controller FLC-1 searches on line the optimum generator speed so that the 
aerodynamic efficiency of the wind turbine is maximum. 

• The second fuzzy controller FLC-2 programs the machine flux by an on line search so as to 
optimize the machine converter efficiency. 

• The third fuzzy controller FLC-3 performs robust speed control against turbine oscillatory 
torque and wind vortex. 

The main conclusions of this paper are: 

• The system was found to be parameter insensitive with fuzzy controllers. 

• The system shows a fast-convergence with fuzzy controllers. 

• The system can accept noisy and inaccurate signals. 

• The fuzzy algorithms used in the system are universal and can be applied retroactively in any 
other system. 

• The performance of the system was found to be excellent with all the fuzzy logic controllers. 

References 

[1] K. Kaur, S. Chowdhury, S. P. Chowdhury, K. B. Mohanty, A. Domijan, "Fuzzy logic based control of variable speed induction machine wind generation system", IEEE Transactions.
[2] Simoes, M. G., Bose, B. K., Spiegel, R. J., "Fuzzy logic based intelligent control of a variable speed cage wind generation system", IEEE Transactions on Power Electronics, Vol. 12, No. 1, January 1997, pp. 87-95.
[3] Simoes, M. G., Bose, B. K., Spiegel, R. J., "Design and performance evaluation of a fuzzy logic based variable speed wind generation system", IEEE Transactions on Industry Applications, Vol. 33, No. 4, July/August 1997, pp. 956-965.
[4] Bose, B. K., Modern Power Electronics and A.C. Drives, pp. 368-378.
[5] Zhao, Jin, Bose, B. K., "Evaluation of membership functions for fuzzy logic controlled induction motor drive", IEEE Transactions on Power Electronics, 2002, pp. 229-234.
[6] C. C. Lee, "Fuzzy logic in control systems: fuzzy logic controller, Part I", IEEE Transactions on Systems, Man and Cybernetics, 20(2), pp. 404-418, 1990.
[7] Bose, B. K., Modern Power Electronics and A.C. Drives, 2002.




[8] Sousa, G. C. D., Bose, B. K., Cleland, J. G., "Fuzzy logic based on-line efficiency optimization control of an indirect vector-controlled induction motor drive", IEEE Transactions on Industrial Electronics, Vol. 42, No. 2, April 1995.
[9] Bhadra, S. N., Kastha, D., Banerjee, S., Wind Electrical Systems, New Delhi: Oxford University Press, 2005.



Authors 

B. Ravichandra Rao has received B.Tech from Sri Krishnadevaraya University, Anantapur in 2002 and 
M.E from Pune University,Pune in 2004 and pursuing Ph.D in electrical engineering from 
S.V.University, Tirupathi. He is presently working as Assistant Professor of EEE Department, 
G.Narayanamma Institute of Technology and Science, Hyderabad, India. 






Amala Lolly.R received B.Tech from Jawaharlal Nehru Technological University Kakinada in the year 
2009 and pursuing M.Tech in G.Narayanamma Institute of Technology and Science (for women), 
Hyderabad, India. 







An Advanced Wireless Sensor Network for
Landslide Detection

Romen Kumar. M 1 & Hemalatha 2 

Research Scholar, Department of Computer Science 

2 Head, Department of Software System 

Karpagam University, Coimbatore-21, India 



Abstract 

The power of wireless sensor network technology has provided the capability of developing large-scale systems for real-time monitoring. This paper describes the evolution and generation of a wireless sensor network system for landslide detection in a north-eastern state of India, a region known for its heavy rainfall, steep slopes and frequent landslides. The deployment of sensors at different places, the retrieval and collection of data from the geological sensors, the design and deployment of the data collection and data aggregation algorithms needed for the network, the network requirements of the deployed landslide detection system, and the data analysis system are discussed in this paper.

KEYWORDS: Wireless Sensor Networks, Distributed Aggregation Algorithms, Heterogeneous Networks,
Landslide. 

I. Introduction 

The real-time monitoring of environmental disasters is one of the most important necessities of the world. Different types of technologies have been developed for this purpose. The wireless sensor network (WSN) is one of the major technologies that can be used for real-time monitoring. WSN has the capability of large-scale deployment, low maintenance, scalability, adaptability to different scenarios, etc. WSN has its own restrictions, such as low memory, power and bandwidth, but its capability to be deployed in the environment and its low maintenance requirement make it one of the best-suited technologies for real-time monitoring.

This paper discusses the design and development of a landslide detection system using a WSN in a north-eastern state of India. The deployment site has historically experienced several landslides, with the latest one occurring in the year 2008. The remainder of the paper is organized as follows. Section II describes the methods for landslide prediction and related work in WSN systems. In Section III, we describe the landslide phenomenon, and Section IV describes the sensors needed for monitoring rainfall-induced landslides. Section V details the enhanced sensor column design used in this system. Section VI describes the wireless sensor architecture used for the landslide scenario, and Section VII details the different WSN algorithms implemented in the landslide detection network. The wireless sensor testbed is described in detail in Section VIII. Field deployment, its design concerns and experience are described in Section IX. Finally, we conclude in Section X, where future work is also discussed.

II. Related works 

The evolution of WSN has encouraged the development of real-time monitoring for critical and emergency applications. Wireless sensor technology has generated enthusiasm in computer scientists to learn and understand other domain areas, which helps to propose or develop real-time deployments. The major areas of focus are environment monitoring, detection and prediction. A Drought Forecast and Alert System (DFAS) has been proposed and developed in [1, 10]. This system uses mobile communication to alert the users, whereas the deployed system uses real-time data collection and transmission through the wireless sensor nodes, Wi-Fi, a satellite network and also the internet. The real-time streaming of data through broadband connectivity provides connectivity to a wider audience.

An experimental soil monitoring network using a WSN is presented in reference [3], which explores real-time measurements at temporal and spatial granularities that were previously impossible. That paper also discusses the reception of real-time measurements at temporal and spatial granularity. Research has shown that, other than geotechnical sensor deployment and monitoring, techniques such as remote sensing, automated terrestrial surveys, and GPS technology can also be used by themselves or in combination with other technologies to provide information about land deformation [54]. The cited work describes a state-of-the-art system that combines multiple sensor types to provide measurements for deformation monitoring.

Reference [2] addresses the topic of slip surface localization in wireless sensor networks, which can be used for landslide prediction. A durable wireless sensor node has been developed [13] which can be employed in expandable WSNs for remote monitoring of soil conditions in areas conducive to slope stability failures. This study incorporates both theoretical and practical knowledge from diverse domains such as landslides and geomechanics, wireless sensor, Wi-Fi and satellite networks, power-saving solutions and electronic interface design, which paved the way for the design, development and deployment of a real-time landslide detection system using a WSN [52].

III. Landslide 

Landslide is a general term used to describe the down-slope movement of soil, rock and organic materials under the influence of gravity. It can be triggered by gradual processes such as weathering or by external mechanisms including:

• Undercutting of a slope by stream erosion, wave action, glaciers, or human activity such as road building.

• Continuous rainfall, rapid snowmelt, or sharp fluctuations in ground-water levels.

• Shocks and vibration caused by earthquakes or construction activity.

Once the landslide is triggered, material is transported by various mechanisms, including sliding, flowing and falling.

The types of landslides vary with respect to:

• Rate of movement: this ranges from a very slow creep (millimetres/year) to extremely rapid (meters/second).

• Type of material: landslides are composed of bedrock, unconsolidated sediment and organic debris.

• Nature of movement: the moving remains can slide, slump, flow or fall.

Landslides constitute a major natural hazard in India that accounts for considerable loss of life and damage to communication routes, human settlements, agricultural fields and forest lands. The Indian continent, with diverse physiographic, seismotectonic and climatological conditions, is subjected to varying degrees of landslide hazard; the Himalayas, including the north-eastern mountain ranges, are the worst affected, followed by a section of the Western Ghats and the Vindhyas [2, 3]. In India, landslides mainly happen due to heavy rainfall, so this study concentrates on rainfall-induced landslides. Earthquakes can also cause landslides; however, in India this is primarily confined to the Himalayan region. High rainfall intensity accelerates the sliding and slumping in the existing hazard zones [44].

IV. Sensor Need for Monitoring Rainfall Induced Landslides 

Under heavy rainfall conditions, rain infiltration on the slope causes instability: a reduction in the factor of safety, a pressure response, changes in water table height, a reduction in the shear strength which holds the soil or rock, an increase in soil weight and a reduction in the angle of repose. When the rainfall intensity is larger than the saturated hydraulic conductivity of the slope, runoff occurs.




There are three distinct physical events that occur during a landslide:

• The preliminary slope failure 

• The subsequent transport and 

• The final evidence of the slide materials. 

The initial slope failure can occur due to the increase in pressure and soil moisture content under heavy rainfall, which necessitates the inclusion of geophysical sensors for detecting the changes in pressure and moisture content within the warning system developed for landslide detection. So the system discussed in this paper also includes geophysical sensors, such as a pressure transducer and a dielectric moisture sensor, for capturing these measurements [5, 8].

After the slope failure, the subsequent transport of the materials generates slope gradient changes, vibration, etc., which have to be measured and monitored for the effective issue of warnings. So the warning system includes a strain gauge and a tilt meter that can be used for measuring slope gradient changes. Along with them, a geophone is used for analyzing the vibration [35, 42].

V. Enhanced Sensor Column Design 

Commercially available wireless sensor nodes do not have built-in sensors to measure pressure, moisture content, vibration, earth movement, etc. This constraint has led us to implement data acquisition boards to connect the external sensors to the wireless sensor nodes [60]. The geological sensors were placed inside a sensor column and connected to the wireless sensor node via a data acquisition board, as shown in Figure 1. The sensor column design discussed here is an enhanced version: it uses a heterogeneous structure rather than a homogeneous one, and it differs with respect to the terrain conditions and the geological and hydrological parameters of the deployment site [6]. Also, in this sensor column design, not all the geological sensors (such as the geophone and the dielectric moisture sensor) are placed inside the column, but they are connected to the same wireless sensor node. The sensor column design also includes tilt meters, which can be used for validating the deformation measurements captured using the strain gauges.



[Figure: sensor column with geological sensors connected to a MicaZ mote and DAQ board]

Figure 1. Enhanced Sensor Column Design

VI. Wireless Sensor Network Architecture 

The WSN at the deployment site follows a two-layer hierarchy: the lower-layer wireless sensor nodes sample and collect the heterogeneous data from the sensor columns, and the data packets are transmitted to the upper layer. The upper layer aggregates the data and forwards it to the sink node (gateway) kept at the deployment site [28].




Figure 2. Regionalized WSN Architecture for Landslide 

The geological and hydrological properties of the whole landslide area differ from location to location, so the area can be divided into regions, each having unique properties. Our deployment area is divided into three regions, namely the crown region, the middle region and the toe region, as shown in Figure 2, and numerous lower-level nodes attached to homogeneous sensor columns are deployed in these regions [7, 8].

VII. Wireless Sensor Network Algorithm 

The WSN uses four algorithms, implementing clustering, distributed consensus among the data, energy-efficient data aggregation and time synchronization, which contribute to the development of an efficient landslide detection system [25, 47].

Real-time monitoring networks are constrained by energy consumption, due to the remote location of the deployment site and the non-availability of constant power. Considering this factor, the WSN at the deployment site implemented a totally innovative concept of distributed detection, estimation and consensus to arrive at reliable decisions, more accurate than that of each single sensor and capable of achieving globally optimal decisions, as discussed in research papers [9, 22]. In the landslide scenario, the implementation of this algorithm imposes the constraint of handling heterogeneous sensors in each sensor column. The different methods that can be used for implementing this algorithm for the landslide scenario are:

• Homogenous sensor columns deployed in each region can be compared and a consensus value 
can be achieved for all the sensor columns in that region. 

• All the sensors deployed in the landslide area can be assigned with a weightage with regards 
to its impact on landslide detection and a common consensus value can be achieved executing 
the algorithms at once, for all deployed sensors. 

• Decentralized consensus performed for the same type of sensors in all sensor columns in a 
region. 

Decentralized consensus for the same type of sensors has been developed for the deployed network. The decentralized algorithm will be executed for each type of sensor, one by one, for all homogeneous sensor columns deployed in each region. After the initial set of sensors achieves its consensus, the next set of sensors will execute the decentralized algorithm, and so on [30]. The other designs demand knowledge of the correlation between different geophysical sensors, whereas this method does not require this knowledge; however, the processing delay will be larger compared to the other methods, due to the multiple executions of the same algorithm.
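As a generic illustration of the per-sensor-type consensus described above, the sketch below runs a simple distributed averaging iteration over the columns of one region. It is not the algorithm of [9, 22, 30]; the node names, link graph and step size are assumptions.

    def average_consensus(readings, neighbours, eps=0.1, iterations=50):
        # readings:   {node_id: local value} from the same sensor type in one region
        # neighbours: {node_id: [node_id, ...]} wireless connectivity between the columns
        x = dict(readings)
        for _ in range(iterations):
            # Each node moves toward the average of its neighbours, so all nodes
            # converge to a common consensus value without a central collector.
            x = {node: value + eps * sum(x[n] - value for n in neighbours[node])
                 for node, value in x.items()}
        return x

    # Example: three pore-pressure readings from toe-region columns (illustrative values).
    # The same routine would be rerun sensor type by sensor type, which is why the
    # processing delay grows relative to the other two methods.
    readings = {"col1": 10.2, "col2": 9.8, "col3": 10.6}
    links = {"col1": ["col2"], "col2": ["col1", "col3"], "col3": ["col2"]}
    print(average_consensus(readings, links))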






[Figure: decentralized consensus performed separately for each sensor type, i.e. consensus of dielectric moisture sensors, consensus of pore pressure transducers, consensus of strain gauges, and consensus of tilt meters]

Figure 3. Decentralized Consensus for Same Type of Sensors

Since the study concentrates on the detection of rainfall-induced landslides, the most relevant data will arrive during the rainy season. So rainfall-based alert levels have been developed, which influence the sampling rate of the geological sensors and the transmission of data to higher layers, as discussed in the threshold-based algorithm [10, 11]. This algorithm helps to reduce the energy consumed during the low alert levels, and to collect and transmit large amounts of data only when the environmental and geological conditions demand it. Other than these methods, state-level transitions have been incorporated to reduce the energy consumption per node, which also contributes to reduced energy consumption throughout the network [24, 32]. These requirements, however, lead to the need for time synchronization, and the algorithm planned for implementation in our network is discussed in a research paper.
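A minimal sketch of how rainfall-based alert levels could drive the sampling rate is given below. The threshold values and intervals are invented for illustration; the actual levels used by the deployed system follow [10, 11].

    # Hypothetical alert levels: (minimum rainfall in mm/h, level name, sampling interval in s)
    ALERT_LEVELS = [
        (0.0, "low", 600.0),      # dry or light rain: sample every 10 minutes
        (0.5, "medium", 60.0),    # moderate rain: sample every minute
        (5.0, "high", 1.0),       # heavy rain: sample every second and push data upward
    ]

    def sampling_interval(rain_mm_per_hr):
        # Pick the highest alert level whose threshold the current rainfall exceeds.
        name, interval = ALERT_LEVELS[0][1], ALERT_LEVELS[0][2]
        for threshold, level, seconds in ALERT_LEVELS:
            if rain_mm_per_hr >= threshold:
                name, interval = level, seconds
        return name, interval

    print(sampling_interval(0.2))   # ('low', 600.0)
    print(sampling_interval(7.5))   # ('high', 1.0)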

VIII. Wireless Sensor Testbed for Landslide Detection 

The design and development of a WSN for the landslide scenario involves the consideration of different factors such as terrain structure, vegetation index, climate variation, accessibility of the area, etc. The prerequisites of WSN development are selection of the sensor column locations, the sensor column design and its data collection method, understanding the transmission range and the necessity of external antennas or additional relay nodes, identification of the communication protocol, and development of application-specific algorithms for data aggregation, routing, fault tolerance, etc. [51].

The wireless sensor testbed deployed in north-east India follows a two-layer hierarchy, with a lower layer and an upper layer. The lower-layer wireless sensor nodes are attached to the sensor columns. They sample and collect the heterogeneous data from the sensor column, and the data packets are transmitted to the upper layer [12, 41]. The upper layer consists of cluster heads, which aggregate the data and forward it to the sink node (gateway) kept at the deployment site. Data received at the gateway has to be transmitted to the Field Management Center (FMC), which is approximately 500 m away from the gateway. A Wi-Fi network is used between the gateway and the FMC to establish the connection [39, 37].
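The cluster-head aggregation step can be pictured with the short sketch below. Averaging per (column, sensor) pair is only one possible aggregation function, chosen here to illustrate how the upper layer shrinks traffic toward the gateway; the packet fields are assumptions, not the deployed packet format.

    from statistics import mean

    def aggregate_at_cluster_head(packets):
        # packets: list of dicts {"column": str, "sensor": str, "value": float}
        grouped = {}
        for p in packets:
            grouped.setdefault((p["column"], p["sensor"]), []).append(p["value"])
        # Forward one summarised packet per (column, sensor) pair to the gateway.
        return [{"column": col, "sensor": sen, "value": mean(vals), "count": len(vals)}
                for (col, sen), vals in grouped.items()]

    samples = [{"column": "toe-1", "sensor": "pore_pressure", "value": 10.1},
               {"column": "toe-1", "sensor": "pore_pressure", "value": 10.3},
               {"column": "toe-1", "sensor": "moisture", "value": 0.32}]
    print(aggregate_at_cluster_head(samples))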

The FMC incorporates facilities such as a VSAT (Very Small Aperture Terminal) satellite earth station and a broadband network for long-distance data transmission. The VSAT satellite earth station is used for data transmission from the field deployment site to the Data Management Centre (DMC), situated at our university campus 300 km away, while the broadband connection provides fault tolerance for long-distance transmission and can be used for uploading real-time data directly to a web page with minimum delay [11, 20]. The DMC consists of the database server and an analysis station, which perform data analysis and landslide modelling and simulation on the field data to determine the landslide probability. The WSN architecture for landslide detection is shown in Figure 4.







Figure 4. WSN Architecture for Landslide Detection

IX. Field Deployment 

The existing infrastructure has evolved through several iterative phases in its implementation. Important research focal points were deciding the sensor column locations, designing and constructing the sensor columns, sensor column deployment methods, interfacing circuitry, the wireless sensor network, the Wi-Fi network, the satellite network, the power solution, soil tests and data analysis [59, 20]. Extensive field investigations were conducted to identify possible locations for sensor column deployment. At the deployment site, an initial twenty sensor column locations, consisting of 150 sensors in total, were identified with respect to their geological relevance [44]. The pilot deployment consists of two sensor columns with ten sensors, deployed in the field along with six wireless sensor nodes, as shown in Figure 5.




Fig 5. Field Deployment 



9.1. Deployment of Sensor Column 

One of the sensor columns is deployed at the toe region, where various water seepage lines converge. This fact led to the installation of pressure transducers at different depths (2 m, 5 m) of sensor column 1 [34]. The other geophysical sensors attached to this sensor column are a dielectric moisture content transducer and a geophone. Both the pressure transducers and the dielectric moisture sensor are sampled at a rate of 10 samples/second. The MicaZ wireless sensor node connected to the sensor column transmits the digitized data values to the upper layers of the network [13].

The other sensor column is fitted with movement sensors, since it is located in an unstable region. This sensor column has three tilt meters (1 m, 2 m, 3.5 m) and three strain gauges (1.5 m, 2.5 m, and 4 m) to capture the earth movement along the sensor column at 1 foot depth. The wireless sensor node samples these sensors every five minutes and sends the data to the upper-level sensor nodes in the network [57, 26].
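The two sampling regimes quoted above (10 samples/s for the pore pressure and moisture sensors, one sample every five minutes for the tilt meters and strain gauges) can be captured by a small scheduling table. The sketch below is illustrative only; the sensor names are assumed labels, not identifiers from the deployed software.

    # Sampling periods in seconds, taken from the rates quoted in Section 9.1.
    SAMPLING_PERIODS = {
        "pore_pressure": 0.1,           # 10 samples per second
        "dielectric_moisture": 0.1,     # 10 samples per second
        "tilt_meter": 300.0,            # every five minutes
        "strain_gauge": 300.0,          # every five minutes
    }

    def due_sensors(now, last_sample_time):
        # Return the sensors whose sampling period has elapsed at time 'now' (seconds).
        return [sensor for sensor, period in SAMPLING_PERIODS.items()
                if now - last_sample_time.get(sensor, float("-inf")) >= period]

    print(due_sensors(now=600.0,
                      last_sample_time={"pore_pressure": 599.95, "tilt_meter": 250.0}))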

The spatial granularity will be increased by the further addition of approximately 20 more sensor columns and approximately 20 more wireless sensor nodes, which is in process.

9.2. Design and deployment of WSN 

The design and development of WSN for the landslide scenario involves consideration of different 
factors such as terrain structure, vegetation index, climate variation, accessibility of the area etc. The 
prerequisites of WSN development are selection of sensor column location, sensor column design and 
its data collection method, understanding transmission range and necessity of external antennas or 
additional relay nodes, identification of the communication protocol, development of application 
specific algorithms for data aggregation, routing and fault tolerance, etc. [19, 52]. The WSN architecture at the deployment site is discussed in Section VI, and the wireless sensor nodes used for the deployment are 2.4 GHz MicaZ motes from Crossbow. The initial gateway was a Stargate with a 500 MHz Intel XScale processor running ARM Linux, programmed as the sink node, while the new gateway is a single-board computer which has 100 MB RAM, 32 MB flash and a fixed base mote that is used to send and receive the messages through the transceiver [14].

The sensor column is physically attached to a wireless sensor node which is integrated with a data acquisition board. The distance between the current sensor columns is approximately 50 meters, at a slope of about 70°. Due to the terrain structure and vegetation, the data from the sensor columns were not able to reach the gateway [23, 27]. The major reason for this is that there is no line-of-sight path between the columns, between the first sensor column and the gateway, or between the second sensor column and the gateway. These observations, along with experimental tests, have led us to employ three relay nodes between the sensor columns and the gateway. One of the relay nodes is a cluster head for the first and second columns. The data from the cluster head is transmitted to the gateway in the form of packets. At the gateway, the received packets are time-stamped and stored [40, 46].

9.3. Deployment of Wi-Fi Network 

The Wi-Fi network is used to transfer the data from the gateway to the FMC, and it uses an external antenna and an access point for this purpose. The network has been tested with the WLAN 802.11 standards [31]. The Wi-Fi network allows us to install the gateway at any scalable distance from the FMC. The region experiences frequent landslides and has several areas within every 1 sq. km which can be utilized as future extension sites for landslide detection systems by connecting them to the FMC via Wi-Fi networks [56, 18].

9.4. Deployment of satellite Network 

The basic satellite communication network in the landslide scenario is based on VSAT. The geological data collected at the landslide deployment site is transmitted from the FMC at the deployment site to the DMC using the VSAT earth station [33]. The data is transmitted using a UDP-based protocol which includes recovery of lost and corrupted packets, secure transmission, routing via broadband during unavailability of the VSAT link, buffering the data to disk in case both networks are unavailable, and sending the data as soon as the network is connected [48, 29].
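The fallback policy just described (VSAT first, broadband second, disk buffering when both are down, and resending once a link returns) can be sketched as follows. The transport functions are supplied by the caller and stand in for the real UDP-based protocol, whose retransmission and security details are not reproduced; the buffer file name is an assumption.

    import json, os

    BUFFER_FILE = "pending_packets.jsonl"   # hypothetical on-disk buffer

    def deliver(packet, send_via_vsat, send_via_broadband):
        # Try the VSAT link first, then broadband; each callable returns True on success.
        if send_via_vsat(packet) or send_via_broadband(packet):
            return True
        with open(BUFFER_FILE, "a") as f:    # both links down: buffer the packet to disk
            f.write(json.dumps(packet) + "\n")
        return False

    def flush_buffer(send):
        # Resend buffered packets as soon as one of the networks becomes available again.
        if not os.path.exists(BUFFER_FILE):
            return
        with open(BUFFER_FILE) as f:
            pending = [json.loads(line) for line in f]
        remaining = [p for p in pending if not send(p)]
        with open(BUFFER_FILE, "w") as f:
            for p in remaining:
                f.write(json.dumps(p) + "\n")

    # Example with both links simulated as down: the packet ends up in the disk buffer.
    deliver({"column": "toe-1", "t": 0, "pore_pressure": 10.2},
            send_via_vsat=lambda p: False, send_via_broadband=lambda p: False)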

9.5. Monitoring Using Data Analysis System 

The DMC consists of the database server and an analysis station which performs landslide modelling and data analysis on the data received from the field. The software also has the capability of real-time streaming of the data and its analysis results over the internet, which provides a greater capability of issuing effective warnings with minimum delay [45]. Data received at the DMC is being analyzed using the in-house designed data visualization software, which has the capability to determine the factor of safety of the mountain and the probability of landslide occurrence with respect to the signals received from the deployed sensors. It also has the capability to compare and analyze data from different sensor columns, different sensors in the same sensor column, the same sensors in different sensor columns, selective comparisons, etc. [15, 17, 21].

Data is successfully received from the deployment site with minimal data packet loss and analysis of 
data has been performed. Data received from two pressure transducers and a rain gauge is shown in 
Figure 6. 



[Figure: field data plot showing pore pressure 1 (mV), pore pressure 2 (mV) and rainfall (mm) versus time]

Fig 6. Real-time field data



During the monsoon season, the sensors were able to capture the expansion and contraction of the soil mass during heavy rainfall conditions and after rainfall. The data analysis software showed the corresponding variation in each of the deployed sensors.

X. Conclusion and Future Work 

WSN for landslide detection is one of the challenging research areas available today in the field of geophysical research. This paper describes an actual field deployment of a WSN for landslide detection. The system uses a heterogeneous network composed of wireless sensor nodes, Wi-Fi and satellite terminals for efficient delivery of real-time data to the data management centre. The data management centre is equipped with the software and hardware needed for sophisticated analysis of the data. The results of the analysis, in the form of landslide warnings and risk assessments, will be provided to the inhabitants of the region.

In the future, this work will be extended to a full deployment with increased spatial variability, and work in this regard is progressing. Field experiments will be conducted to determine the effects of node density, vegetation, location of sensor columns, etc., on detecting rainfall-induced landslides, which may help in the development of a low-cost WSN for landslide detection.

Acknowledgment 

We are very thankful to Karpagam University and Dr. M. Hemalatha for giving us advice and suggestions on the development of this research work.

References 

[1]. Ramesh, M., "Real-time Wireless Sensor Network for Landslide Detection", Proceedings of The Third International Conference on Sensor Technologies and Applications, SENSORCOMM 2009, IEEE, Greece, June 18-23, 2009.

[2]. Terzis, Andreas, Anandarajah, Annalingam, Wang, I-Jeng, "Slip Surface Localization in Wireless Sensor Networks for Landslide Prediction", IPSN'06, USA, April 19-21, 2006.




[3]. Scutari. G,Barbarossa.S," Distributed Consensus Over Wireless Sensor Networks Affected by Multipath 

Fading" IEEE Transaction on Signal Processing, Vol.56,No.8, August 2008. 
[4]. Garich.E "Wireless Automated Monitoring For potential Land Slide Hazards" Master Thesis, Texas A&M 

University, May 2007. [13] 
[5]. Raj.R, Kumar.S" Fault Tolerant Clustering Approaches in Wireless Sensor Network for Landslide Area 

Monitoring " proceedings of the 2008 International Congress on Wireless Networks (ICWN'08), Vol.1, 

pages 107-113, CSERA Press, July 2008 [15] 
[6]. Raiz Ahamed.S'The Effects of Global positioning System for Reliable Positioning, Navigation and Timing 

Services" Vol4No9/8Vol4No9, Journal of Theoretical and Applied Information Technology,2005 
[7]. Martinz.K, Hart.J.K "Environment Sensor Network", IEEE Computer, Vol 37, 2004. 
[8]. Kung.H, Hau.J,Chen.C" Drought Forecast Model and Framework Using Wireless Sensor Networks" 

Journal of Information Science and Engineering, Vol 22, pages 751-769, 2006. 
[9]. CoganJ, Szalay.A, Teriz.A " A Wireless Soil Ecology Sensor Network", 2006. 
[10]. Wang.G and Sassa.K "Pore Pressure Generation and Movement of rainfall-induced landslide: Effect of 

gain size and fine-particle content", Engineering Geology Vol 36, pages 109-125,2003. 
[11]. Mckenna.G.T "Grouted In Installation of Piezometer in Boreholes" Canadian Geotechnical Journal, 

Vol 32, pages 353- 355, 1995. 
[12]. Marechal.M,Pierrot.J,Gorce.J "Fine synchronization for Wireless Sensor Networks using gossip 

averaging algorithms in proceedings of ICC 2008. 
[13]. Barbarossa.S, Scutari. G "Decentralized Maximum Likelihood Estimation for Sensor Networks 

Composed of Nonlinearly Coupled Dynamics Systems", IEEE Transactions on Signal Proceeding, Vol 55, 

No 7, July 2007. 
[14]. Kunnath.A.T "Wireless Geophone for remote monitoring and Detection of landslides" International 

Conference, Communication and Signal Processing (ICCSP), 2011. 
[15]. Cao Jiong-qing"Hill Landslide Detection System Based on the IEEE 802.1.4 Wireless Sensor 

Networks", Beijing Journal, Computer knowledge and Technology, 2008-2009. 
[16]. Arianna Pesci, Paolo Baldi, " Digital Elevation Models for landslide evolution monitoring application 

on two areas located in Reno river valley (Italy)", Annals of Geophysics, Vol 47, 2004. 
[17]. Bishwajeet Pradhan, Jasmi and Saolee "Probabilistic and Statistical Landslide hazard mapping using 

GIS and remote sensing at Cameron Highland, Malaysia", 1999. 
[18]. Abbasi.I.A "Slope failure and landslide mechanism, Muree area, north Pakistan" Geological Bulletin 

University of Peshawar, Vol 35, 2003. 
[19]. Eric, James. S, Gardner "Modelling landslide hazards in the Kullu Valley, India using GIS and remote 

sensing", Transactions on geosciences and remote sensing, IEEE, 2002. 
[20]. Yang Hong: An experiment global prediction system for rainfall-triggered landslide using satellite 

remote sensing and geospatial", Transactions on geosciences and remote sensing, Vol.45, 2007. 
[21]. Hu.H, Fernandez, T.M.Dong, Azza,M " LiDAR-based 3d FEM geological simulation and landslide 

stability analysis", The 19 th International Conference on Geoinformatics, China, June 24-26, 2011. 
[22]. Azzam.R, Fernandez, "Monitoring of landslide and infrastructure with sensor network in an earthquake 

environment", The 5 th International on Earthquake Geotechnical Engineering (5ICEGE), 2011. 
[23]. Fernandez-Steger, Rohn.J "Landslide after heavy rainfall: Causes and anthropogenic influence", 

Workshop and field work on Georisk, Eu- Project Asia link. Thailand, 2003 
[24]. Kempka.T, Waschusch.M, " Influence of water content on underground", Geoberlin,2006 
[25]. Lu, P., Stump, A., Kerle, N. and ... [et al.] Object - oriented change detection for landslide rapid 

mapping. In: IEEE Geosciences and remote sensing letters, 8 (2011)4 pp. 701-705, 2011. 
[26]. Stumps, A. and Kerle, N. Object - oriented mapping of landslides using random forests. In: Remote 

sensing of environment, 115 (2011)10 pp. 2564-2577, 2011. 
[27]. Wirawan, Ranchman S, Pratomo I, Mitta N. Design of Low Cost Wireless Sensor Networks - Based 

Environmental Monitoring System for Developing Country. Proc. Int. Conference APCC 14th Asia-Pacific. 

Tokyo, Japan. 2008. 
[28]. Jamal din MZ, Aripin NM, Isa AM, Mohamed HWL. Wireless Soil Temperature and Slope Inclination 

Sensors for Slope Monitoring System. Proceedings of International Conference on Energy and Environment 

(ICEE). Selangor, Malaysia. 2006. 
[29]. Jung woos Lee M S, Real-time Monitoring of Landslide using Wireless Sensor Network. PhD Thesis. 

Ohio: The Ohio State University; 2009. 
[30]. Herry Z Kotta, Silvester Tena, Gregarious Klau, K Rantelobo. Application of Geographical 

Information System (GIS) for Mapping Landslide Susceptibility: A Case Study of Timor Tengah Selatan, 

NTTProvince. Proceedings of National Seminar on Applied Technology, Science, and Arts (1st 

APTECS).Surabaya. 2009. 




[31]. Chuanhua Zhu, Xueping Wang. Landslide Susceptibility Mapping: A Comparison of Information and 

Weights-of-Evidence Methods in Three Gorges Area. International Conference on Environmental Science 

and Information Application Technology. Wuhan, China. 2009. 
[32]. Caine, N. (1980). The rainfall intensity-duration control of shallow landslides and debris flows, 

Geografiska Annaler, Vol. 62A, pp. 23-27, 1980. 
[33]. Garich, E. A. (2007). Wireless, Automated Monitoring for Potential Landslide Hazards, Master's 

Thesis, Texas A & M University, 2007.1verson, R.M. (2000). 
[34]. Landslide triggering by rain infiltration, Water Resource Research Vol. 36, pp. 1897-1910, 

2000.Kimura, H. & Yamaguchl. Y. (2000). 
[35]. Detection of landslide areas using satellite radar interferometer, Photogrammetric Engineering & 

Remote Sensing, Vol. 6 (3), pp. 337-344,2000. Kumar, V.S.; Sampath, S.; Vinayak, P. V. S. S. K. 

&Harikumar, R. (2007). 
[36]. Rainfall intensity characteristics in at coastal and high altitude stations in Kerala, Journal of Earth 

Systems Sciences, Vol. 116 (5), pp. 451-463, 2007. Kung, H.; Hua, J. & Chen, C. (2006) 
[37]. Drought Forecast Model and Framework Using Wireless Sensor Networks, Journal of Information 

Science and Engineering, Vol. 22, pp. Kenneth, A. T. & Ramesh.M, M. V. (2010). 
[38]. Integrating geophone network to real-time wireless sensor network system for landslide detection, In 

Proceedings of The Third International Conference on Sensor Technologies and Applications, 

SENSORCOMM 2010, IEEE Explore, 2010. 
[39]. LAN, Hengxing. ZHOU, C; Lee, C. F.; WANG, S. & Faquan, W. U. (2003). Rainfall induced 

landslide stability analysis in response to transient pore pressure - A case study of natural terrain landslide 

in Hong Kong, Science in China Ser. E Technological Sciences, Vol. 46, pp. 52-68, 2003. 
[40]. Liu, H.; Meng, Z. & Cui, S. (2007). A Wireless Sensor Network Prototype for Environmental 

Monitoring in Greenhouses, IEEE Explore, 2007 .Wilson, R. C. & Wieczorek, G. F. (1995). 
[41]. Rainfall Thresholds for the Initiation of Debris Flowa at La Honda, California, Environmental and 

Engineering Geosciences, Vol. 1 (1), pp.1 1-27, 1995. 
[42]. Kyoji Sassa. Editors. Landslide: Risk Analysis and Sustainable Disaster Management. Proceeding of 

First General Assembly of the Int. Consortium on Landslide. Springer-Berlin. NY. 2005. 
[43]. WEN Hai-jia, LI Xin, ZHANG Jia-lan. An Evaluation-Management Information System of High Slope 

Geo-risk for Mountainous City Based on GIS. International Conference on Information Science and 

Engineering (ICISE). Nanjing, China. 2009; 1976-1978. 
[44]. Jung woos Lee M S, Real-time Monitoring of Landslide using Wireless Sensor Network. PhD Thesis. 

Ohio: The Ohio State University; 2009. 
[45]. Cesarean Alippi, Romulo Camplani, Cristian Galperti, Manuel Roveri. Effective design of WSNs: from 

the lab to the real world. Proceeding of IEEE International Conference on Sensing Technology (ICST) 3rd. 

Taipei, Taiwan. 2008; 1-9 
[46]. .K. Aberer, M. Hauswirth, and A. Salehi. Global Sensor Networks. EEE Communications Magazine, 

special issue on Advances in Service Platform Technologies for Next Generation Mobile Systems, 2006. 
[47]. A. Arora, R. Ramnath, and E. Ertin. Exscal: Elements of an extreme scale wireless sensor network, 

2005. 
[48]. G. Barrenetxea, O. Couach, F. Ingelrest, M. Krichane, K. Aberer, M. Parlange, and M. 

Vetterli.SenS or scope: an Environmental Monitoring Network. Water Resources Research Journal, 2008. 
[49]. D. Estrin, L. Girod, L. Pottie, and M. Srivastava. Instrumenting the world with wireless sensor 

networks. Acoustics, Speech, and Signal Processing, 2001. Proceedings. (ICASSP'01). 2001 IEEE 

International Conference on, 4:2033-2036 vol.4, 2001. 
[50]. Straser, E.G. and Kiremidgian, A.S. A Modular, Wireless Damage Monitoring System for Structures, 

John A. Blume Earthquake Engineering Center Report No. 128, Stanford, CA,1998 
[51]. Garich, E.A. and Blackburn, J.T. "Automated, Wireless Instrumentation for Monitoring of Potential 

Landslide Hazards." Proc. IstAmerican Landslide Conf., Vail, CO, in press, 2007 
[52]. Aguado, L.E., O'Driscoll, C. Xia, P., Nurutdinov, K., Hill, C. and O'Breine, P. 2006. A Low-Cost, 

Low-Power Galileo/GPS Positioning System for Monitoring Landslides. Navitec 

.http://www.ggphi.eu/monitoring_landslides.pdf. October 2006 
[53]. Aoi, S., Kanugi, T. and Fujiwara, H. "Trampoline effect in extreme ground motion". Science, Vol. 

322, No. 5902, pp,2008 
[54]. Arnhardt, C, Asch, K.; Azzam, R; Bill, R.; Fernandez- Steger, T.M.; Homfeld, S.D. ; Kallash, A.; 

Niemeyer, F; Ritter, H.; Toloczyki, M. and Walter, K. "Sensor based Landslide Early Warning System - 

SLEWS, 2007 
[55]. Garich, E. A. Wireless automated monitoring for potential landslide hazards. - Master Thesis; Texas A 

& M University, 48 pp, 2007 




[56]. IFRC " World Disaster report " - Focus on early warning, early action, International Federation of Red 

Cross and Red Crescent, 204 pp,2009 
[57]. Jibson, R.W., Harp, E.L. and Michael, J. A. "A method for producing digital probabilistic seismic 

landslide hazard maps". Engineering Geology, Vol. 58, 2000. 
[58]. Munich Re "Topics Geo - Natural catastrophes 2008, Analyses, assessments, positions". Knowledge 

Series, Number 302-06022, 50 pp, 2009 
[59]. Shou, K-J. and Wang, C-F. "Analysis of the Chiufengershan landslide triggered by the 1999 Chi-Chi 

earthquake in Taiwan". Engineering Geology, Vol. 68, pp. 237-2502004 
[60]. Keefer, D.K. "Landslides caused by earthquakes". Geological Society of America Bulletin, Vol. 95, pp. 

406-421. Keefer, D.K. "Investigating landslides caused by earthquakes - A historical review". Surveys in 

Geophysics, Vol. 23, pp. 473-510, 2002 
Authors 

M. Hemalatha completed her MCA, M.Phil. and PhD in Computer Science and is currently working as an Assistant Professor and Head of the Department of Software Systems at Karpagam University. She has ten years of teaching experience, has published twenty-eight papers in international journals, and has presented seventy-eight papers at various national conferences and one international conference. Her areas of research are data mining, software engineering, bioinformatics and neural networks. She is also a reviewer for several national and international journals.

M. Romen Kumar is presently doing his PhD at Karpagam University, under the guidance of Dr. M. Hemalatha, Head of the Department of Software Systems, Coimbatore, Tamil Nadu, India. He completed his MCA degree in 2008 and BCA in 2005. His major research area is advanced networking (sensor networks). He has published three papers in international journals and presented one paper at an international conference.








Evaluation of Phonetic Matching Approaches for 
Hindi and Marathi: Information Retrieval 

Sandeep Chaware 1 and Srikantha Rao 2 

Research Scholar, MPSTME, Mumbai, India 

2 Research Supervisor, MPSTME, Mumbai, India 



Abstract 

In a multilingual environment, phonetic matching plays an important role in various aspects. Basically, the techniques for phonetic matching are useful for information retrieval when the text is not clear or not interpreted fully. Irrespective of the correct form of the keyword, the entered keywords for information retrieval should be matched phonetically and the results should be displayed. Many approaches have been proposed for phonetic matching, such as the use of a text-to-phonetic system in a translator-based system, the use of operators such as MLLike, code-based approaches, or language-specific phonetic-rule-based approaches. Each approach has limitations. In this paper, we try to identify some of the limitations of using those existing approaches for the Hindi and Marathi languages and propose some solutions for phonetic matching used for information retrieval.

KEYWORDS: Phonetic matching, text-to-phonetic, writing style, phonetic rules, threshold.

I. Introduction 

The rapidly accelerating trend of globalization of businesses and the success of e-Governance 
solutions require data to be stored and manipulated in many different natural languages. The primary 
data repositories for such applications need to be efficient with respect to multilingual data. Efficient 
storage and query processing of data spanning over multiple natural languages are of crucial 
importance in today's globalized world. 

As our country is diversified by languages and only approximately 10% of the population is familiar with the English language, this diversity of languages is becoming a barrier to understanding and getting acquainted with the digital world. In order to remove the language barrier, information technology (IT) solutions can play a major role. A system should be developed and deployed with multilingual support so that it can serve the requirements of all regional communities [1]. The Government of India has already launched the programme called Technology Development for Indian Languages (TDIL), under which there are many projects such as the development of corpora, OCR, text-to-speech, machine translation, keyboard layouts and so on [2]. It has been found that when services are provided in native languages, they are strongly accepted and used.

India is a multilingual country with 22 recognized languages and 11 written script forms [3] (in some literature, the officially recognized Indian languages number 23 [4]). All the scripts are derived from Brahmi, and the order of the alphabet is similar. They also share some characteristics, like a common phonetics-based alphabet, non-linear and complex scripts, free word order, and no cases in Indian scripts. A very peculiar feature of Indian languages is that, though vowels can occur independently at the beginning, they do not occur independently within a word or as the last character of a word [5].
India is a country with people of various linguistic backgrounds. In India, the language or script changes approximately every 20 kilometers. Though English is a global language, it cannot be used everywhere in India due to the low percentage of literacy in it. We need native languages in order to reach the rural population. There are many areas of application where we have to keep the data in many languages so that people can access those data in their native languages when they don't know English, for example the railway reservation system, state or central government schemes, sales tax records, income-tax records, land records, etc. These records should be maintained in English or in native languages. English records will allow faster processing and analysis, which helps in making decisions in certain situations, whereas native language records will be useful especially for rural and uneducated people. From those records, they will either get information or can provide valid data if necessary, so that further analysis may be possible.

The goal is to provide a seamless interface to the user crossing all the language barriers. It has been found that a user is likely to stay twice as long at a site, and is four times more likely to buy a product or consume a service, if the information is presented in their native language. Today, English on the web is down to 35% from 90% in 1995. The fraction of Internet users that are non-native English speakers has grown from about half in the mid-90s to about two-thirds, and it is predicted that the majority of information available on the Internet will be multilingual by 2012 [6].

In this paper, we propose possible solutions to handle Indian language issues related to phonetics. We propose a system which handles syntactic issues, similarly to phonetic ones, for information retrieval in Hindi and Marathi. The phonetic issues are handled by developing a system which works on phonetic rules for the languages and allows minor variations in pronunciation or writing style. In this way, Indian language issues can be handled with respect to input, conversion and display.

II. Phonetic Matching Issues for Hindi and Marathi 

There are many phonetic matching issues for Hindi and Marathi languages. Some have been described 
below and are addressed in the successive sections. 

> If we consider existing approaches proposed for English, there are many letters for which no codes have been assigned by the algorithms, so we may face problems in using and interpreting those letters. For example, letters like u \, ^T, T5, and ET do not have a code to match in the Hindi language.

> If someone misses or adds language letters to a string, the string will either be misinterpreted or the system will give a wrong result.

> The pronunciation of people other than Hindi or Marathi language speaking community may 
vary. It will be of great challenge to interpret and process those strings and provide the information. 

> Strings ending with vowels need to be handled separately. 

> Also, the strings in Hindi may have ambiguity of using 'Matras' with vowels or consonants. 

> Special characters like '^t\ use of Nukta and so on need to be handled differently. 

> Verbal and visual equivalences between speech sounds (phonemes) and written sign 
(graphemes) need to be found out. Their relationships have to be found out. 

So, we have to consider all the issues mentioned above in order to match strings phonetically in the Hindi and Marathi languages. This section has focused on some of the issues that are not handled by the existing approaches; a small illustrative sketch of the coding-gap issue follows.
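The sketch below is only an illustration of the coding-gap problem described above, not the matching approach proposed in this paper: it uses a hypothetical Soundex-style grouping for a handful of Devanagari consonants and shows how a letter missing from the table (here the Marathi letter ळ) breaks a match that should succeed phonetically.

    # Hypothetical Soundex-style groups for a few Devanagari consonants (illustration only).
    CODE = {
        "क": "1", "ख": "1", "ग": "1", "घ": "1",   # velars
        "च": "2", "छ": "2", "ज": "2", "झ": "2",   # palatals
        "ट": "3", "ठ": "3", "ड": "3", "ढ": "3",   # retroflex stops
        "त": "3", "थ": "3", "द": "3", "ध": "3",   # dentals, merged with retroflex stops
        "प": "4", "फ": "4", "ब": "4", "भ": "4",   # labials
        "न": "5", "म": "5", "ण": "5",             # nasals
        "य": "6", "र": "6", "ल": "6", "व": "6",   # semivowels
        "श": "7", "ष": "7", "स": "7", "ह": "7",   # sibilants and h
    }

    def phonetic_code(word):
        # Keep the first letter; encode the rest; vowels, matras and any letter
        # without an entry in the table are silently dropped.
        return word[:1] + "".join(CODE.get(ch, "") for ch in word[1:])

    # कमल (Hindi) and कमळ (Marathi) name the same flower and should match,
    # but ळ has no code, so the two words receive different codes.
    print(phonetic_code("कमल"), phonetic_code("कमळ"))   # क56 vs क5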

III. Foundations for Phonetic Matching

3.1 Input Mechanism 

There are various mechanisms provided to input keywords in native languages, especially in Indian languages. Some are described below. We have used the Input Method Editor (IME) method because of its simplicity.

> Multilingual Physical Keyboards: There are many multilingual physical keyboards available for inputting Indian languages, but they are not feasible because they increase the cost, most users don't have multilingual keyboards, and so it would be a rigid approach.




> Multilingual On-screen Keyboards: They can be downloaded from the Internet. But for each 
language the user must be aware of the character mappings between the existing physical keyboard 
and onscreen keyboard. 

> Input Method Editor (IME): Input Method Editor (IME) does transliteration. Transliteration is 
a mapping from one system of writing into another word-by-word or ideally letter-by-letter which is 
opposed to transcription that specifically maps the sounds of one language to the best matching script 
of another language [7]. 

> Inscript Keyboard Layout: This keyboard layout was standardized in 1986 by the DoE and addresses a few concerns about languages. These concerns include, first, that people perceived Indian languages as very difficult to use on mechanical typewriters, and there were difficulties in learning the keyboard layout on vernacular typewriters; second, there was no standardization of vernacular keyboard layouts [7]. Since our languages have a phonetic nature, this led to the development of a common phonetic layout based on consonants and vowels alone. All compositions and conjuncts are now handled by the computer with intelligent algorithms. With this phonetic keyboard, one can work in multiple languages; it is easy to learn for infrequent users, is excellent for typists, and provides ease of use for Indian languages. Since it is common to all Indian scripts, it has been named the Inscript keyboard.

> On-Screen Keyboard Layout with IME: In our domain, we considered the on-screen keyboard layout to input Hindi and Marathi language strings. The on-screen keyboard layouts for Hindi and Marathi are shown in Appendix A. In order to use those on-screen keyboard layouts, we have to download and install the IMEs for Hindi and Marathi. We downloaded them from the bhashaIndia.com website [7]. We added those two languages to the language bar of the computer from the desktop. A small icon with language options appears at the right corner of the screen, and we can switch from one language to another by selecting the language from this icon.

3.2 Storage Mechanism 

Many multilingual database systems have been developed and deployed, such as Oracle 9i, 
Microsoft SQL Server 2000, IBM DB2 Universal Server (7.0), and MySQL. Many support encoding 
standards such as Unicode and ISCII, or provide NChar as a data type. Some of the encoding forms 
for those database systems are described below. 

> ASCII Encoding: The original American Standard Code for Information Interchange (ASCII) 
code was a 7-bit code used to encode all characters of the English language and several special 
characters such as the dot or the semicolon. However, this original code did not encode the umlauts 
of some European languages, so the ASCII code was extended by 1 bit (8-bit ASCII code) to encode 
these characters as well. ASCII codes represent the text used in computers and communication 
devices. It defines 128 characters: 33 non-printable control characters, 94 printable characters, and 
the space, which is considered an invisible graphic. The ASCII code is a subset of Unicode [8]. 

> ISCII Encoding: The Indian Script Code for Information Interchange (ISCII) is a coding scheme 
for representing the various writing systems of Indian languages. It is a national standard for Indian 
scripts [33]. ISCII uses an 8-bit code, an extension of the 7-bit ASCII code, containing the basic 
alphabet required for the 10 Indian scripts that originated from the Brahmi script [8]. The ISCII code 
table is a superset of all the characters required by the Brahmi-based Indian scripts. For convenience, 
the alphabet of the official script Devanagari has been used in the standard. This is described in 
detail in appendix B. 

> Unicode Encoding: The Unicode standard is the universal character encoding standard used for 
representing text for computer processing. It provides the capacity to encode all of the characters 
used in the written languages of the world, and it specifies information about each character and its 
use. The standard is very useful for computer users who deal with multilingual text: business people, 
linguists, researchers, scientists, mathematicians and technicians. It uses a 16-bit encoding that 
provides code points for more than 65,000 characters (65,536), and it assigns each character a unique 
numeric value and name. The Unicode standard and the ISO 10646 standard provide an extension 
mechanism called UTF-16 that allows as many as a million additional characters to be encoded. 
Presently the Unicode standard provides codes for 49,194 characters. It is the default standard for 
multilingual data storage in any database system. Unicode is a uniform 2-byte encoding standard that 
allows storage of characters from any known alphabet or ideographic system irrespective of platform 
or programming environment. Unicode codes are arranged in character blocks, which contiguously 
encode the characters of a given script (usually a single language) [11]. 
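As a concrete illustration of these code points (a minimal sketch of ours, not part of the paper; the sample characters are chosen only for the example), the following Python snippet prints the decimal code point of a Devanagari letter and its 2-byte UTF-16 form, and shows that a supplementary character needs the 4-byte surrogate mechanism mentioned above:

```python
# Minimal illustration of Unicode code points and UTF-16 encoding.
# The sample characters are illustrative, not taken from the paper.
for ch in "रल":                       # Devanagari RA and LA
    print(ch, ord(ch))                # decimal code points: 2352, 2354
    print(ch.encode("utf-16-be"))     # 2-byte UTF-16 representation

# A character outside the 16-bit range needs a surrogate pair in UTF-16,
# which is what the extension mechanism described above provides.
print(len(chr(0x1D49C).encode("utf-16-be")))   # 4 bytes
```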

> Unicode and ISCII encodings use separate code points for each character. Characters are stored 
in logical order, which tends to correspond to pronunciation, and rendering rules handle the display. 
Both support full consonant forms. For inputting characters they save a lot of space, which increases 
memory efficiency [1]. 

> The NChar data type: The SQL standard specifies a data type, National Character (referred to 
as NChar), large enough to store characters from any Indian language or script. We can use nchar 
when the sizes of the column data entries are likely to be similar, or nvarchar when the sizes of the 
column data entries are likely to vary considerably. All standards from SQL-92 onward support the 
NChar data type for storing national characters. 

3.3 Display Mechanism 

We must consider two cases for displaying multilingual text: first, running a system from a terminal, 
and second, running a system under a window system. With a terminal, the system simply sends 
correctly encoded text to the terminal and leaves the task of rendering multilingual text to it; code 
conversion is done in accordance with the coding system specified for the system output. In a 
window system, the system takes responsibility for displaying multilingual text. Each character set is 
assigned a corresponding font; a collection of mappings from all character sets to the corresponding 
fonts is called a fontset and is the basis for displaying each character. A fontset can be selected 
according to the context. We used the first approach, since the font is not important here: each 
character is displayed on the screen in its rendered form, equivalent to its Unicode value. 

IV. Phonetic Matching Approaches: Existing Systems 

4.1 Translator-Based System 

In this category, each string of a language is translated into a uniform representation by using a text- 
to-phonetic (TTP) system [9]. The TTP system translates each text string into a phonetic form based 
on the IPA encoding standard, in which all alphabet characters are represented phonetically. For 
some of the Indian languages, TTP systems may not be available or may still need to be developed. 



Figure 1. General Architecture of Text-To-Phonetic (TTP) Based System 

Using this system, we cannot obtain the phonetic form of all characters, especially for Hindi or 
Marathi. To match, an edit distance can be calculated against some threshold value. Figure 1 shows 
the general architecture of a text-to-phonetic based system. 

4.2 Code-Based Systems 

Using a code in numeral or other form, the entire string is translated into a code format. This code 
always starts with the first character of the string followed by at least four characters [10]. To match 
two strings, we compare their codes: if the codes are the same, we may say that the strings match 
phonetically. The codes can be generated by grouping the alphabet characters according to their 
phonemes, with each group given the same code value. Some systems start the code with 0, others 
with 1. Sometimes, however, different strings may receive the same code. Examples of such systems 
are Soundex, Phonix, and so on. Some systems group the alphabet characters and assign a code to 
each group; if two strings share a maximum number of groups with the same codes, we may say that 
the strings match phonetically (the Q-gram method is an example). 

Figure 2. General Architecture of Code-based System 

Figure 2 shows the general architecture of a code-based system, where the codes are generated 
using rules and the matcher checks the codes for equivalence. 
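As an illustration of this family of methods (a minimal sketch only: the digit groups below are the classic English Soundex groups, not a code assignment proposed in this paper), the following Python function builds such a code and shows that two spellings of the same name receive the same code:

```python
# Minimal English Soundex-style encoder, shown only to illustrate the idea
# of code-based phonetic matching; the digit groups are the classic English
# Soundex groups, not a scheme proposed in this paper.
GROUPS = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
          **dict.fromkeys("dt", "3"), "l": "4",
          **dict.fromkeys("mn", "5"), "r": "6"}

def soundex(word: str, length: int = 4) -> str:
    word = word.lower()
    code = word[0].upper()                 # keep the first letter
    prev = GROUPS.get(word[0], "")
    for ch in word[1:]:
        digit = GROUPS.get(ch, "")
        if digit and digit != prev:        # skip vowels and repeated groups
            code += digit
        prev = digit
    return (code + "000")[:length]         # pad/truncate to a fixed length

print(soundex("raghoolila"), soundex("raghulila"))   # same code -> "match"
```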

4.3 Phonetic-Rule Based Systems 

These systems work on phonetic rules designed for a particular language. The rules are used to 
group the alphabet characters according to phonemes. After applying the rules, each string is 
converted into its phonetic form, either as text or as a code. To match, these forms are compared 
against some threshold value. Such systems are easy to use but difficult to build, as phonetic rules 
have to be designed for each language. 




Figure 3. General Architecture of Phonetic-Rule based System 

Figure 3 shows the general architecture of a phonetic rule-based system, where language-specific 
rules are applied to convert each string into its phonetic form and the matcher uses a threshold value 
for matching. 

In this section, we described the three basic approaches to phonetic matching. These approaches 
may work for Hindi and Marathi, but they need to be revised to a great extent. 

V. Drawbacks of Existing Phonetic Matching Approaches 

The following are some of the drawbacks of the existing phonetic approaches. 

> In one of the approaches, we need to find the IPA code of each string for phonetic matching, which 
is difficult and may not be available for Indian languages. 

> We also need a text-to-phonetic (TTP) system for each language, and the use of a TTP makes the 
system complex. 

> The algorithm depends on the user's predefined threshold value, so there may be an 
ambiguity in matching. 

> The edit distance calculation is complex since many operations are to be carried out. 

> The Soundex and Q-gram methods use a code for each alphabet character. These methods either 
generate wrong results or cannot accommodate codes for all the alphabet characters of Hindi and 
Marathi. 




VI. Proposed Phonetic Matching Approaches 

We propose two phonetic approaches. The first is based on the writing style of the strings: phonetic 
matching is done by considering all possible writing styles of the native-language strings, and once 
matching has been done, information retrieval returns the required results. In the second approach, 
we match the strings phonetically by converting each string into its equivalent phonetic form using 
language-specific phonetic rules. The two matching approaches are explained with their proposed 
algorithms and examples in the following sections. 

6.1 Phonetic Matching Approach - I 

Objective: Phonetic Matching with Writing Style for Hindi and Marathi 
Input: Native language string, S L i 
Output: IR in selected native language. 

1. Enter the string in any native language such as Hindi or Marathi. 

2. Parse the string to get vowels, consonants or modifiers. 

3. Extract the vowels from the string. 

4. Construct all possible combinations of string using vowels. 

5. Convert the native language string into English by using mapping methodology. 

6. Search the database based on all combinations. 

7. Extract the result string from database. 

8. Convert English language string/s from database into native language string/s. 

9. Display the exact match in native language. 
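A minimal sketch of this flow in Python is given below. The character mapping and database access are stand-ins: the small mapping dictionary, the variant rule and the shopping_mall table name are illustrative assumptions rather than the full tables used in the paper, but the sequence mirrors the steps above.

```python
import sqlite3

# Illustrative fragment of a Devanagari-to-Latin map (the paper's full
# mapping tables are larger; these few entries exist only for the sketch).
DEV_TO_LAT = {"र": "r", "घ": "gh", "ल": "l", "ू": "oo", "ु": "u",
              "ि": "i", "ी": "ee", "ा": "a"}

def to_english(text: str) -> str:
    # Step 5: character-by-character mapping into English letters.
    return "".join(DEV_TO_LAT.get(ch, "") for ch in text)

def variants(latin: str) -> set:
    # Step 4 (simplified): treat short/long vowels as interchangeable
    # to cover the different writing styles of the same name.
    forms = {latin, latin.replace("oo", "u"), latin.replace("ee", "i"),
             latin.replace("oo", "u").replace("ee", "i")}
    return forms

def search(conn: sqlite3.Connection, native: str):
    # Steps 6-7: query the English database for every writing-style variant.
    rows = []
    for form in variants(to_english(native)):
        rows += conn.execute(
            "SELECT * FROM shopping_mall WHERE lower(shopping_mall_name)=?",
            (form,)).fetchall()
    return rows
```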

6.1.1 Example 

Let's take an example of a Hindi string for which corresponding information has been retrieved. 

String in Hindi: 'रघूलिला' 

The following are the steps of the matching algorithm in Section 6.1 as applied to this string. 

STEP 1 (Parsing): After the native language string is taken as input, it is interpreted and parsed into 
vowels, consonants and modifiers; thus we obtain the syllables of the string. 

Parsing of the string 'रघूलिला': र + अ + घ + ू + ल + ि + ल + ा. 

Figure 4. Parsing of the String 'रघूलिला' 

The consonants are: र, घ, ल, ल 
The vowels are: अ 
The modifiers are: ू, ि, ा 

Figure 4 shows the parsing of the string 'रघूलिला' as one of the possible writing styles in Hindi 
or Marathi. Other possible ways of writing the same string interchange the short and long vowel 
signs. For each such string, the system should match phonetically and provide the desired 
information. Similarly, we obtained the parsing of each string and used it for matching. 

Here, we use the full-consonant approach so that we obtain the exact consonant, vowel or modifier. 
Even though this uses a slightly larger number of primitives per string, it does not affect input 
efficiency [1]. 

STEP 2 (Translation): Each native language string has to be translated into English, since we 
maintain the database in English for the shopping mall domain. We used a character-by-character 
mapping methodology for the translation, in which each character is mapped separately as shown in 
Table 1. This converts the native language string into an English language string. After mapping, the 
entered Hindi string is translated into English as 'raghoolila', following the combinations of vowels, 
consonants and modifiers shown in Table 1. 




Table 1. Hindi-To-English Conversion Mapping Table 

Hindi Characters:               र | अ | घ | ू | ल | इ | ल | आ 
Equivalent English Characters:  R | a | gh | oo | l | i | l | a 
Equivalent ASCII Codes:         2352 | 2309 | 2328 | 2370 | 2354 | 2311 | 2354 | 2310 



STEP 3 (Query Formation): After conversion, a query is formed in SQL and run against the 
database, which is stored in English. 

SQL Query: SELECT * FROM shopping_mall WHERE shopping_mall_name = 'Raghoolila'. Similarly, 
an SQL query is formed for every translated string, as in Figure 5. 




Figure 5. SQL Query for the String 'Raghulila' and Other Forms 

The string is passed to the query module as a parameter and the query is formed according to the case. 
The string is searched in the corresponding database and the record is retrieved by the database module. 
STEP 4 (Translation and Display): To convert an English string back into a native language string, we 
map each character to its ASCII code [7] and display the corresponding character, as shown in 
Table 2. This task is done by the translation module. 
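The reverse mapping can be done directly from the numeric codes. The short Python sketch below (our own illustration; the code list is the one given in Tables 1 and 2) converts the stored decimal codes back into the native-language characters with chr():

```python
# Decimal codes from Tables 1 and 2 for the string 'Raghoolila'.
codes = [2352, 2309, 2328, 2370, 2354, 2311, 2354, 2310]

# chr() turns each decimal code point back into its Devanagari character,
# which is what the translation module displays to the user.
native = "".join(chr(c) for c in codes)
print(native)                                 # native-language form
print([ord(ch) for ch in native] == codes)    # True: the round trip is lossless
```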

Table 2. English-To-Hindi Conversion Mapping Table 

Equivalent English Characters:  R | a | gh | oo | l | i | l | a 
Equivalent ASCII Codes:         2352 | 2309 | 2328 | 2370 | 2354 | 2311 | 2354 | 2310 
Equivalent Hindi Characters:    र | अ | घ | ू | ल | इ | ल | आ 

For the string 'Raghoolila', the entire tuple is retrieved, translated into Hindi as per the mapping 
methodology, and shown as the retrieved information. 

6.1.2 Results 

The information retrieval (IR) results after phonetic matching are shown in Figures 6 and 7. Figure 6 
shows the user interface for entering a string to be matched phonetically. Figure 7 shows the IR result 
after the string has been phonetically matched against the existing database according to the algorithm. 









Figure 6. Sample Input Interface for Native Language 




Figure 7. Result of Sample Query 



6.2 Phonetic Matching Approach - II 



Objective: Rule-based Phonetic Matching for Hindi or Marathi 

Input: Two strings either in Hindi or Marathi to match OR one string for IR. 

Output: Phonetic Matching Yes or No OR display of record/s from database as IR. 

> Enter two strings in Hindi or Marathi to be matched phonetically. 

> Each string is translated into its phonetic form using the phonetic rules of its language. 

> Parse the two strings to obtain their combinations of vowels, consonants and modifiers. 

> Obtain a Unicode value for each translated string by summing the Unicode values of its characters. 

> Compare the resultant Unicode values of the two strings against a threshold value of 5%. 

> If the values differ by less than 5%, the strings are considered phonetically matched; otherwise 
they do not match. 

> For IR, the entered string is converted into its equivalent phonetic form and searched in the 
database. If it matches within a threshold value of 15%, the corresponding tuple is displayed 
as IR. 
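A minimal sketch of this comparison in Python follows (the phonetic-rule step is reduced to a placeholder normalisation, since the full rule set is language-specific; only the code-point sum and the 5% threshold come from the algorithm above):

```python
def phonetic_form(s: str) -> str:
    # Placeholder for the language-specific phonetic rules: here we only
    # strip spaces; the real rules rewrite characters by phoneme group.
    return s.replace(" ", "")

def unicode_sum(s: str) -> int:
    # Sum of the Unicode code points of all characters in the string.
    return sum(ord(ch) for ch in phonetic_form(s))

def match(s1: str, s2: str, threshold: float = 0.05) -> bool:
    u1, u2 = unicode_sum(s1), unicode_sum(s2)
    # Relative difference of the two sums, compared against the 5% threshold.
    return abs(u1 - u2) / max(u1, u2) <= threshold

# With the sums 23487 and 23488 from the worked example in Section 6.2.1,
# the relative difference is about 0.004%, well inside the 5% threshold.
print(abs(23488 - 23487) / 23488 * 100)   # ~0.0043
```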

6.2.1 EXAMPLE 

Consider the two strings ^Hdh' and 'F^ffa"' in Hindi. 

STEP 1 (Phonetic Equivalent Strings): 

Its corresponding phonetic forms are: 

STEP 2 (Parsing): 

After parsing the two strings, we obtain their combinations of vowels, consonants and 
modifiers as: 

*I3W31cpta = *T o 3T5T q 3TcT Q 3fr*T 




STEP 3 (Comparison): 

After obtaining the phonetic codes from the Unicode values of each character and converting them 
to decimal, we obtain the following codes for the two strings: 

*TQ 3T5TO 3T?T Q 3?m= 23487 

^TQ3ToT03T«ro3frT =23488 

Using the 5% matching threshold, the difference is calculated as: 

((23488 - 23487) / 23488) × 100 ≈ 0.0043%. 

STEP 4 (Result): 

The difference is within the 5% threshold, so we can say that the two strings are phonetically matched. 

6.2.2 Results 

Table 3 shows the comparison of various strings in Hindi and Marathi for phonetic matching. We 
compared our approach with the Soundex and Q-gram methods and obtained better and more accurate 
results. The results are also shown in graphical form in Figure 8. Figures 9 and 10 show the 
information retrieval results after phonetic matching as per the proposed methodology. 





Table 3: Comparison of Strings for Hindi and Marathi 

String pair (Devanagari) | Hindi SOUNDEX | Hindi Q-GRAM | Hindi INDIC-PHONETIC | Marathi SOUNDEX | Marathi Q-GRAM | Marathi INDIC-PHONETIC 
Pair 1                   | YES           | YES          | YES                  | YES             | YES            | YES 
Pair 2                   | YES           | YES          | NO                   | YES             | YES            | NO 
Pair 3                   | YES           | YES          | YES                  | YES             | YES            | YES 
Pair 4                   | YES           | YES          | NO                   | YES             | YES            | NO 



Figure 8: Graphical Comparison of Three Phonetic Matching Methods 








Figure 9. Phonetic Name-wise Search in Domain for IR 









Figure 10: IR after Phonetic Matching 



VII. Conclusion 

Many phonetic matching approaches, methods and algorithms have been proposed, but they require 
many parameters, a number of external resources for matching, and so on. Basically, all these 
methods depend on either the International Phonetic Alphabet or a translation system for each 
language, and some approaches rely on a code for each alphabet character or on pronunciation-based 
rules for matching. In this paper, we classified the general approaches to phonetic matching. In the 
proposed approaches, these classifications have been applied and evaluated. We also evaluated our 
proposed approaches against approaches such as Soundex and Q-gram, which may work for 
English but can give wrong results for Hindi and Marathi. Our proposed approaches produced better 
and more accurate results than the existing approaches. 

References 

[1] Madhuresh Singhal et al., 'Developing Information Technology Solutions in Indian Languages: Pros and Cons', private publication. 
[2] http://www.tdil.mit/gov.in 
[3] Ranbeer Makin et al., 'Approximate String Matching Techniques for Effective CLIR among Indian Languages', private publication. 
[4] Pranav Mistry and Niranjan Nayak, 'AKSHAR: A mechanism for inputting Indic scripts on digital devices', USID2007, June 18-20, 2007, Hyderabad, India. 
[5] Prof. R. K. Joshi et al., 'A Phonetic Code-based Scheme for Effective Processing of Indian Language', Internationalization and Unicode Conference, Prague, Czech Republic, March 2003. 
[6] K. Ganesan and G. Siva, 'Multilingual Querying and Information Processing', Information Technology Journal 6 (5), 2007, pp. 751-755. 
[7] www.xs4all.nl/~wjsn/hindi/htm 
[8] www.bhashaindia.com 
[9] A. Kumaran, 'Multilingual Information Processing on Relational Database Architectures', PhD Thesis, IISc Bangalore, 2006. 
[10] Justin Zobel and Philip Dart, 'Phonetic String Matching: Lessons from Information Retrieval', private publication. 
[11] www.unicode.org 



Authors 

Sandeep Chaware is a Research Scholar at MPSTME, NMIMS, Mumbai and his research 
area is 'Phonetic and Semantic matching Approaches for Hindi and Marathi'. 



Srikantha Rao is a Director at TIMSCDR and Research supervisor at MPSTME, NMIMS, 
Mumbai. 








Design of Energy-Efficient Full Adder using 
Hybrid-CMOS Logic Style 


Mohammad Shamim Imtiaz, Md Abdul Aziz Suzon, Mahmudur Rahman 

1 Part-Time Lecturer, Department of EEE, A.U.S.T, Dhaka, Bangladesh 

2 Part-Time Lecturer, Department of EEE, A.U.S.T, Dhaka, Bangladesh 

3 Ex- Student, Department of EEE, A.U.S.T, Dhaka, Bangladesh 



Abstract 

We present new full adder designs featuring the hybrid-CMOS design style. The quest for good 
drivability, noise robustness and low-energy operation guided our research toward the hybrid-CMOS 
design style, which utilizes circuits of various CMOS logic styles to build new full adders with the 
desired performance. We also classify hybrid-CMOS full adders into three broad categories based on 
their structure; using this categorization, many full adder designs can be conceived. The new full adder 
is based on the XOR-XOR hybrid-CMOS model, which gives full-swing XOR and XNOR outputs 
simultaneously. This circuit outperforms its counterparts, showing a 4%-31% improvement in power 
dissipation and delay. The output stage also provides good driving capability, and no buffer is needed 
between cascaded stages. During our experiments, we found that many previously reported adders 
suffer from low swing and high noise when operated at low supply voltages. The proposed full adders 
are energy efficient and outperform several standard full adders without trading off driving capability 
and reliability. The new full-adder circuits operate successfully at low voltages with excellent signal 
integrity and driving capability, and they display better performance than the standard full adders. The 
problems we faced during the experiments point to areas where more efficient circuits can be 
developed using this new full adder. 

KEYWORDS: Adders, Exclusive OR gate (XOR), Exclusive NOR gate (XNOR), Multiplexer, Hybrid-CMOS 
design style, low power. 

I. Introduction 

The necessity and popularity of portable electronics is driving designers to strive for smaller area, 
higher speed, longer battery life and more reliability. Power and delay are the premium resources a 
designer tries to save when designing a system. Full adders are the most fundamental units in various 
circuits such as compressors, comparators and parity checkers [1], so enhancing their performance can 
significantly affect overall system performance. Figure 1 shows the power consumption breakdown in 
a modern high-performance microprocessor [2]. The data path consumes roughly 30% of the total 
power of the system [19] [23]. Adders are an extensively used component in the data path, and 
therefore careful design and analysis are required. 

So far, several logic styles have been used to design full adders, each with its own pros and cons. 
Classical designs use only one logic style for the whole full adder; one example is the standard static 
CMOS full adder [3]. The main drawback of static CMOS circuits is the PMOS block, because of its 
low mobility compared to NMOS devices; the PMOS devices therefore need to be sized up to attain 
the desired performance. Another conventional adder uses complementary pass-transistor logic 
(CPL) [3]; due to the presence of many internal nodes and static inverters, it has large power 
dissipation. The dynamic CMOS logic style provides a high speed of operation, but it has several 
inherent problems such as charge sharing and lower noise immunity. Other full adder designs include 
the transmission-function full adder (TFA) [4] and the transmission-gate full adder (TGA) [5]. The 
main disadvantage of these logic styles is that they lack driving capability, and when TGA and TFA 
cells are cascaded, their performance degrades significantly [23]. 




Figure 1: Power breakdown in high-performance microprocessors 

The remaining adder designs use more than one logic style for their implementation, which we call the 
hybrid-CMOS logic design style. Examples of adders built with this design style are the DB cell [6], 
the NEW 14-T adder [7], the hybrid pass-logic with static CMOS output drive full adder [8] and the 
NEW-HPSC adder [9]. All hybrid designs use the best available modules implemented in different 
logic styles, or enhance the available modules, in an attempt to build a low-power full adder cell. 
Generally, the main focus in such attempts is to reduce the number of transistors in the adder cell and 
consequently the number of power-dissipating nodes. This is achieved by utilizing intrinsically 
low-power logic styles such as TFA, TGA or pass transistors. In doing so, the designers often trade off 
other vital requirements such as driving capability, noise immunity and layout complexity. Most of 
these adders lack driving capability, as the inputs are coupled to the outputs. Their performance as a 
single unit is good, but when larger adders are built by cascading these single-unit full adder cells, the 
performance degrades drastically [21] [25]. The problem can be solved by inserting buffers between 
stages to improve the delay characteristics; however, this incurs extra overhead, and the initial 
advantage of having fewer transistors is lost. 
A hybrid-CMOS full adder can be broken down into three modules [6]. Module-I comprises either a 
XOR or an XNOR circuit, or both. This module produces intermediate signals that are passed on to 
Module-II and Module-III, which generate the Sum and Cout outputs, respectively. Several circuits 
are available in [1], [6] and [7] for each module, and several studies have been conducted in the past 
using different combinations to obtain many adders [1], [6], [10]. 

This paper is structured as follows. Section 2 and its subsections briefly introduce the three categorized 
models of the full adder. Section 3 and its subsections present our proposed circuits for the three 
modules: a new, improved circuit for the simultaneous generation of the XOR and XNOR outputs to 
be used in Module-I, and a new output unit for Module-II and Module-III, which consist of 
XOR-XNOR circuits or multiplexers. Using the new circuits in Module-I, II and III, we build new 
hybrid-CMOS full-adder cells, which are discussed in Section 4. Section 5 presents the results and 
discussion: the new adder is optimized for low power dissipation and delay and is compared with the 
classical static-CMOS, CPL, TFA, TGA, NEW14T, HPSC and NEW-HPSC full-adder cells. The 
proposed full-adder design exhibits full-swing operation and excellent driving capability without 
trading off area and reliability. Section 6 suggests future work and modifications of this paper. 
Section 7 concludes the paper. 

II. Full adder categorization 

Depending upon their structure and logical expression, we classify hybrid-CMOS full adder cells [11] 
into three categories. The expressions for the Sum and Carry outputs of a 1-bit full adder with binary 
inputs A, B and Cin are: 

Sum = A ⊕ B ⊕ Cin 

Cout = A·B + Cin·(A ⊕ B) 
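As a quick sanity check of these expressions (a small sketch of ours, not part of the original paper), the following Python snippet verifies them against ordinary binary addition for all eight input combinations:

```python
from itertools import product

# Verify Sum = A ^ B ^ Cin and Cout = A&B | Cin&(A^B) against A + B + Cin.
for a, b, cin in product((0, 1), repeat=3):
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    assert a + b + cin == 2 * cout + s   # the carry has weight 2
print("Sum/Cout expressions match binary addition for all 8 cases")
```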

These output expressions can be realized in various logic styles, which is why different full adders can 
be conceived by implementing them in different ways. Moreover, the availability of different modules, as 



discussed earlier, provides the designer with more choices for adder implementation [21] [25]. Using 
these different modules [8], we suggest three possible structures for the full adder, as follows. 



Figure 2: (a) General form of the XOR-XOR based model, (b) General form of the XNOR-XNOR based model, 
(c) General form of the centralized full adder 

2.1 XOR-XOR Based Full Adder 

In this category, the Sum and Carry outputs are generated by the following expressions, where H is 
equal to A ⊕ B and H' is the complement of H. The general form of this category is shown in Figure 
2(a). 

Sum = A ⊕ B ⊕ Cin = H ⊕ Cin 

Cout = A·H' + Cin·H 

The Sum output is generated by two consecutive two-input XOR gates, and the Cout output is the 
output of a 2-to-1 multiplexer whose select lines come from the output of Module-I. Module-I can be 
either a XOR-XNOR circuit or just a XOR gate. In this first category, both Module-I and Module-II 
consist of XOR gates: the output of the XOR circuit is XORed again with the carry from the previous 
stage (Cin) in Module-II. The H and H' outputs are used as the multiplexer select lines in Module-III. 
Some adders belonging to this category are presented in [12], [13]. 
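The module decomposition can be checked the same way (again a small sketch of ours, not code from the paper): H from Module-I, H ⊕ Cin for the Sum, and a 2-to-1 multiplexer selecting between A and Cin for the carry.

```python
from itertools import product

def mux(d0, d1, sel):
    # 2-to-1 multiplexer: returns d1 when sel is 1, otherwise d0.
    return d1 if sel else d0

for a, b, cin in product((0, 1), repeat=3):
    h = a ^ b                      # Module-I: XOR
    s = h ^ cin                    # Module-II: second XOR
    cout = mux(a, cin, h)          # Module-III: mux with select line H
    assert s == a ^ b ^ cin and cout == (a & b) | (cin & (a ^ b))
print("XOR-XOR decomposition reproduces the full-adder outputs")
```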

2.2 XNOR-XNOR Based Full Adder 

In this category, the Sum and Carry outputs are generated by the following expressions, where A, B 
and Cin are XNORed twice to form the Sum and the expression for Cout is the same as in the previous 
category. The general form of this category is shown in Figure 2(b). 

Sum = A ⊙ B ⊙ Cin = H' ⊙ Cin   (where ⊙ denotes XNOR) 

Cout = A·H' + Cin·H 

In this category, Module-I and Module-II consist of XNOR gates and Module-III consists of a 2-to-1 
multiplexer. If the first module uses a XOR-XNOR circuit, then the H' output is XNORed with the 
Cin input to produce the Sum output. The static energy recovery full adder (SERF) [14] belongs to 
this category; it uses an XNOR gate for Module-I and Module-II and a pass-transistor multiplexer for 
Module-III. 

2.3 Centralized Full Adder 

In this category, the Sum and Carry outputs are generated by the following expressions. The general 
form of this category is shown in Figure 2(c). 

Sum = H ⊕ Cin = H·Cin' + H'·Cin 

Cout = A·H' + Cin·H 

Module-I is a XOR-XNOR circuit producing the H and H' signals; Module-II and Module-III are 
2-to-1 multiplexers with H and H' as select lines. The adder in [8] is an example of this category: it 
utilizes the XOR-XNOR circuit presented in [7] and proposes a new circuit for the output Module-III. 
The simultaneous generation of the H and H' signals is critical in these types of adders, as they drive 
the select lines of the multiplexers in the output stage. Otherwise (i.e., with non-simultaneous H and 
H'), there may be glitches and unnecessary power dissipation. The final outputs cannot be generated 
until these intermediate signals are available from Module-I [20]. 

III. Proposed Circuits for Module-I, II and III 

Hybrid-CMOS full adders can be divided into three modules, each consisting of a XOR or XNOR 
circuit or a 2-to-1 multiplexer with select lines. Module-I consists of a XOR or XNOR circuit in all 
three categories; Module-II consists of a XOR or XNOR circuit in the first two categories and a 
2-to-1 multiplexer in the last; and Module-III consists of a 2-to-1 multiplexer with select lines in all 
three categories. In short, three types of circuits are used to form the three categorized full adders. 
Here we propose three new circuits, one each for Module-I, Module-II and Module-III. 

3.1 MODULE-I 

Here we discuss the proposed XOR and XNOR circuits. From previous studies, we found that XOR 
and XNOR gates based on transmission-gate theory use few transistors but have significant 
drawbacks: they require complementary inputs and they lose driving capability [14]. In general, if 
the output signals of a circuit come directly from VDD or VSS, we say the circuit has driving 
capability; if the circuit output must drive other circuits, it is better to cascade a canonical CMOS 
buffer. 

Without loss of generality, the methods we discuss focus on the XOR function, mainly because the 
XNOR structure is symmetric and very similar to the XOR structure; the techniques for the XOR 
function can be applied to the XNOR function without question. 

Based on the inverter-configuration approach, two inverters can be arranged to build the XOR 
function as well as the XNOR structure. Gates of this type do not need complementary inputs and 
their driving property is better, but they still have defects such as a lack of full driving capability at 
the output and a longer delay [9]. 

In recent times, the simultaneous generation of XOR and XNOR has been widely used for Module-I 
and Module-II [9], [14], [15]. This feature is highly desirable, as non-skewed outputs are generated 
that drive the select lines of the multiplexer inside the full adder. Figure 3(a) shows a configuration 
using only six transistors, presented in [14]; this circuit has been widely used to build full-adder cells 
[9], [14], [15]. The circuit has a feedback connection between the XOR and XNOR functions that 
eliminates non-full-swing operation [26]. The VDD and GND connections give the circuit good 
driving capability, and the elimination of direct paths between them avoids the short-circuit current 
component. However, when an input transition leads to the input vector AB: XX-11 or AB: XX-00, 
there is a delay in switching the feedback transistors. This occurs because one of the feedback 
transistors is switched ON by a weak signal while the other signal is in a high-impedance state, which 
increases the delay. As the supply voltage is scaled down, this delay tends to increase tremendously. 
It also causes the short-circuit current, and hence the short-circuit power dissipation, to rise, 
eventually increasing the power-delay product. To reduce this problem, careful transistor sizing is 
needed so that the feedback transistors switch quickly [9]. 
We found another improved version of the XOR-XNOR circuit [8], [18], [26] which provides 
full-swing operation and can operate at low voltages. The circuit is shown in Figure 3(b). The first 
half of the circuit uses only NMOS pass transistors to generate the outputs. The cross-coupled PMOS 
transistors guarantee full-swing operation for all possible input combinations and reduce short-circuit 
power dissipation. The circuit is inherently fast due to the high-mobility NMOS transistors and the 
fast differential stage of cross-coupled PMOS transistors. Its main drawback, however, is a degraded 
output at low voltage, while at high voltage it shows the opposite characteristic [18]. 



Figure 3: (a) Circuit 1 for the XOR-XNOR model (b) Circuit 2 for the XOR-XNOR model (c) Proposed XOR 
(d) Proposed XNOR 

We propose a novel XOR-XNOR circuit using six transistors that generates the XOR and XNOR 
outputs simultaneously. Figures 3(c) and 3(d) show the proposed XOR and XNOR circuits, 
respectively. In the 6-transistor design, the new proposed structures require non-complementary 
inputs and their outputs are complete (full swing). The initial plan was a 4-transistor design, but it 
was abandoned because of a degraded output when both inputs were low for XOR and high for 
XNOR. Analyzing the 4-transistor XOR structure, the output signal for the input combinations 
AB = 01, 10, 11 is complete. When AB = 00, each PMOS is on and passes a poor low signal to the 
output; that is, for AB = 00 the output shows a voltage about a threshold voltage above a clean low, 
but a driving path exists because the NMOS is on, so although the output is not complete, the driving 
current increases. For the XNOR function, the output for AB = 00, 01, 10 is complete, while for 
AB = 11 each NMOS is on and passes a poor high level to the output; the analysis of the driving 
capability is the same as for the XOR structure. By cascading a standard inverter onto the XNOR 
circuit, a new type of 6-transistor XOR is obtained which has a driving output and a complete signal 
level at the output in all cases. The same property holds for the 6-transistor XNOR structure. The 
proposed XOR-XNOR circuits were compared to the circuits in Figures 3(a) and 3(b) in terms of 
number of transistors, power and delay, and on all criteria our proposed circuits perform 
outstandingly. The simulation results at VDD = 2 V with 2 V inputs are shown in Table 1: 

Table 1: Simulation results for the proposed XOR-XNOR circuits at 50-MHz frequency and VDD = 2 V 

                    | Circuit [1] | Circuit [2] | Proposed XOR | Proposed XNOR 
No. of Transistors  | 6           | 10          | 6            | 6 
Power (µW)          | 7.524       | 8.750       | 4.07         | 4.07 
Delay (ns)          | 0.305       | 0.210       | 0.108        | 0.106 




3.2 MODULE-II 

Here we review some of the existing and most frequently used circuits for the different modules of 
the full adder. From previous studies, we know of eight different circuits [15], [16], each performing 
best in its own way, with its own advantages and disadvantages. Among the eight we chose the best 
two, and used the more efficient one for our proposed model. The two circuits are shown in Figure 4. 



Figure 4: Circuits for Module-II 



Figure 4(a) is a transmission-function implementation of the XOR and XNOR functions. This circuit 
has no supply rails, thereby eliminating short-circuit current. Figure 4(b) is essentially its complement 
and has an inverter to produce Sum; the static inverter provides good driving capability. This circuit 
is one of the best performers among all the circuits mentioned in [8] in terms of signal integrity and 
average power-delay product [6]. Both circuits avoid the problem of threshold loss and have been 
widely used in adder implementations [15], [16]. We employ this circuit in our full-adder design. 

3.3 MODULE-III 

The expression for Module-III is: 

Cout = A·H' + Cin·H 

This expression is the output of a 2-to-1 multiplexer with H and H' as the select lines. The most 
common implementation of this expression uses transmission gates (TG). Figure 5(a) shows the 
circuit of a 2-to-1 multiplexer using TGs. The main drawback of this multiplexer is that it cannot 
provide the driving capability required to drive cascaded adder stages. One solution is to add an 
output buffer, as shown in Figure 5(a), but this incurs extra delay and an overhead of four transistors. 



Figure 5: (a) Multiplexer using transmission gates (b) Multiplexer based on the static-CMOS logic style 
(c) Multiplexer based on the hybrid-CMOS logic style 

Another possibility is to use the complement of the expression, i.e., 

Cout' = A'·H' + Cin'·H 




In this case, two inverters are required to invert the A and Cin inputs and one inverter is needed at the 
output. This results in unbalanced Sum and Cout output switching times and extra delay. 
A circuit based on the static-CMOS logic style is presented in [8] [22]. It overcomes the problems of 
the TG multiplexer design, uses ten transistors, and is shown in Figure 5(b). This circuit possesses all 
the features of the static-CMOS logic style, such as robustness to voltage scaling and good noise 
margins. 

We propose a hybrid design for Module-III. We use the inherently low-power TG logic style together 
with the robust static-CMOS logic style to create a new hybrid-CMOS circuit. The proposed circuit 
is shown in Figure 5(c). The new circuit also uses ten transistors and possesses the properties of both 
the static-CMOS and TG logic styles. The carry is evaluated using the following logic expression: 

Cout = ((A ⊕ B)·Cin' + A'·B')' 



A transmission gate preceded by a static inverter is used to implement the (A ⊕ B)·Cin' term; H and 
H' are the complementary gate signals of this TG. When H is at logic 1 and H' is at logic 0, this unit 
propagates the inverted Cin signal to the internal node. Two PMOS pull-up transistors in series and 
two NMOS pull-down transistors in series generate the A'·B' and A·B conditions; complementary A 
and B signals are not required. When A and B are both at logic 0, the two PMOS transistors switch 
ON and pull the internal node (the complement of Cout) to logic 1. When A and B are both at logic 1, 
the two NMOS transistors switch ON and pull the internal node to logic 0. At all other times this 
section remains OFF. The static inverter at the output then produces the desired Cout output. Table 2 
shows the results for the proposed circuit compared to the circuit in [15]. 
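To convince ourselves that this expression is equivalent to the multiplexer form Cout = A·H' + Cin·H, a short truth-table check (our own sketch, using the expression as reconstructed above) can be run:

```python
from itertools import product

# Check that NOT((A^B) & ~Cin | ~A & ~B) equals A*H' + Cin*H for all inputs,
# where H = A XOR B. This is the expression realized by the proposed
# Module-III stage followed by its output inverter.
for a, b, cin in product((0, 1), repeat=3):
    h = a ^ b
    mux_form = (a & (1 - h)) | (cin & h)            # A·H' + Cin·H
    node = (h & (1 - cin)) | ((1 - a) & (1 - b))    # internal (inverted) node
    assert (1 - node) == mux_form
print("Proposed Module-III expression matches the 2-to-1 multiplexer form")
```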

Table 2: Simulation results for the proposed Module-III at 50-MHz frequency and VDD = 2 V 

                    | Static-CMOS Multiplexer | Hybrid-CMOS Multiplexer 
No. of Transistors  | 10                      | 10 
Power (µW)          | 1.337                   | 1.437 
Delay (ns)          | 0.1829                  | 0.1224 



Due to the additional inverter, the proposed design consumes slightly more power than the circuit in 
[15]. There is redundant switching at the input, since the complement of Cin is generated even when 
it is not propagated to the output. This could be avoided by placing the inverter after the TG, but that 
causes a problem: charge can leak through the off TG and reverse the voltage level at the output. 
This trade-off has to be accepted, but it guarantees excellent signal integrity without any glitches. 

IV. Proposed full adders 

As mentioned earlier, in the centralized full adder both XOR and XNOR circuits are present (both in 
Module-I) to generate the intermediate signals H and H'. These signals are passed on to Module-II 
and Module-III, along with the carry from the previous stage and the other inputs A and B, to produce 
Sum and Cout (in the 1st and 2nd categories as well). For the 3rd category, we use the proposed 
circuits for Module-I and Module-III and an existing circuit for Module-II. The experimental 
procedure and the selection of our proposed model were adaptive and symmetric: selecting the best 
circuit from each module, we created three combinations for the three categories and compared them 
with three further combinations that use the traditional TG 2-to-1 multiplexer. The combinations 
were compared in terms of the number of transistors, power consumption and delay. The performance 
of the proposed adders proved very encouraging. The three categorized adders are shown in Figures 
7, 8 and 9, respectively. 

In Module-I, the proposed XOR-XNOR circuit requires non-complementary inputs and produces 
complete (full-swing) outputs. The analysis of its driving capability is the same as for the XOR 
structure: by cascading a standard inverter onto the XNOR circuit, we obtain a driving output, and 
the signal level at the output is complete in all cases; the same property holds for the XNOR structure. 
Module-II is a transmission-function implementation of the XNOR function that generates Sum', 
followed by an inverter that generates Sum. This gives the circuit good driving capability, and 
because there are no supply rails there are no short-circuit currents; the circuit is also free from the 
threshold-loss problem, unlike the other circuits used for Module-II [6]. Module-III employs the 
proposed hybrid-CMOS output stage with a static inverter at the output. This circuit has a lower PDP 
than the other existing designs, and the static inverter provides good driving capability since the 
inputs are decoupled from the output. Due to the low PDP of Module-II and Module-III, the new 
adder is expected to have low power consumption. 

V. Result And Discussion 

Using our proposed modules, we created three categorized designs for the hybrid-CMOS adder. The 
first circuit is the XOR-XOR based full adder, belonging to the first category: the proposed XOR 
circuit is used for Module-I and Module-II and the proposed 2-to-1 multiplexer for Module-III. 
Figures 7(a) and 7(b) show the hybrid-CMOS adder (XOR-XOR based full adder) and its Cout and 
Sum outputs, respectively. The second circuit is the XNOR-XNOR based full adder of the second 
category, where the proposed XNOR circuit is used for Module-I and Module-II and the proposed 
multiplexer for Module-III; Figures 8(a) and 8(b) show this adder and its Cout and Sum outputs. The 
final circuit is the centralized full adder, belonging to the last category: the proposed XOR-XNOR 
circuit is used for Module-I, the transmission-function implementation of XOR and XNOR for 
Module-II, and the proposed multiplexer for Module-III. Figures 9(a) and 9(b) show the hybrid-CMOS 
adder (centralized full adder) and its Cout and Sum outputs, respectively. 



Figure 6: Common input for evaluating all adders 

The performance of these three circuits was evaluated in terms of transistor count, power dissipation 
and delay. Figure 6 shows the input waveforms A, B and Cin used to evaluate all three categorized 
circuits. Based on our results, we observed that the XOR-XOR based hybrid-CMOS full adder works 
most efficiently on all of the criteria mentioned above. We therefore also evaluated the XOR-XOR 
based hybrid-CMOS full adder by comparing it with all of the conventional full adders. All 
simulations were performed using PSPICE, HSPICE and MATLAB. 











Figure 7: (a) XOR-XOR based Hybrid-CMOS full adder (b) C out and Sum 




Figure 8: (a) XNOR-XNOR based Hybrid-CMOS full adder (b) C out and Sum 

An increase in the number of transistors in a chip or digital circuit brings typical obstacles; even the 
transistor count itself can affect the overall performance of the circuit. For this reason, keeping the 
transistor count low without compromising performance was one of our main concerns in designing 
the full adder. Each of our three proposed designs uses twenty-four transistors, and none of them 
showed any deficiency in terms of power dissipation or delay. 







Figure 9: (a) Centralized Hybrid-CMOS full adder (b) C out and Sum 

The average power dissipation was evaluated under different supply voltages and different load 
conditions and is summarized in Figures 10(a) and 10(b), respectively. Among the conventional full 
adders, CPL clearly has the highest power dissipation. TGA and TFA always dissipate less power 
than the others, as the graph shows; between the two, TGA dissipates less power than TFA, and the 
trend continues at low voltages, where the performance of TFA degrades more than that of TGA as 
the supply voltage is scaled down. Close behind the two comes the static-CMOS full adder. Under 
varying output load conditions, the adders without driving capability (TGA and TFA) show more 
degradation than the ones with driving capability (CMOS and CPL); this is as expected, since the 
speed degradation of these designs is the highest. 





Figure 10: (a) Power vs. Supply Voltage for different full adders (b) Power vs. Load for different full adders 

The static-CMOS full adder shows the best performance among the conventional full adders under 
varying load. Among the non-conventional (hybrid-CMOS) full adders, the proposed hybrid-CMOS 
full adder and the NEW-HPSC adder have the least power dissipation. The proposed full adder 
consumes 2% less power than the NEW-HPSC adder at a 2 V supply, but when the supply voltage is 
scaled down the NEW-HPSC adder consumes slightly less power. The power dissipation of the 
proposed adder is roughly 25% less than that of the next lowest-power adder (TGA). With increasing 
output load, the power dissipation of these adders remains the lowest of all the full adders considered. 

Figures 11(a) and 11(b) show the delays of the full adders at a 2 V supply and across loads of 
5.6-200 fF, respectively; Table III lists the delay values for easy comparison. We observe that, 
among the conventional full adders, TGA and TFA (the adders without driving capability) have the 
smallest delays. TFA has a slightly lower delay than TGA at higher supply voltages, but the trend 
reverses at lower supply voltages. The static-CMOS and CPL full adders follow the TGA and TFA 
adders, with CMOS steadily remaining ahead of CPL at every supply voltage. Under varying load 
conditions, TGA and TFA have low delay at small loads, but their speed degrades significantly at 
higher loads. Among the existing full adders, CMOS shows the least speed degradation, followed by 
the CPL full adder. This shows that under heavy load conditions, adders with driving capability 
perform better than those without it (TGA and TFA). For these reasons, we compared the proposed 
hybrid-CMOS adders to the conventional CMOS adders. 



Figure 11: (a) Delay vs. Supply Voltage for different full adders (b) Delay vs. Load for different full adders 

Among the non-conventional (hybrid-CMOS) full adders, the proposed hybrid-CMOS full adder 
shows the minimum delay at all supply voltages when compared to the CMOS, HPSC, NEW14T and 
NEW-HPSC full adders. At a 2 V supply, the proposed adder is 30%, 55%, 88% and 29% faster than 
the CMOS, HPSC, NEW14T and NEW-HPSC full adders, respectively. At lower supply voltages, the 
proposed full adder is the fastest. The delay of the proposed hybrid-CMOS adder is slightly higher 
than that of TGA and TFA, but with increasing load it displays minimal speed degradation. Overall, 
compared to all the adders, the proposed adder has the least speed degradation with varying load. 

VI. Future work 

In recent years, several variants of different logic styles have been proposed to implement 1-bit adder 
cells [22] [24]. These papers have investigated different approaches to realizing adders in CMOS 
technology, each with its own pros and cons. Scaling the supply voltage appears to be the best-known 
means of reducing power consumption; however, lowering the supply voltage increases circuit delay 
and degrades the drivability of cells designed in certain logic styles. Among the most important 
obstacles to decreasing supply voltages are the large transistor count and the Vth loss problem. 




In this paper, we used the hybrid-CMOS logic style to design our proposed circuits. This type of logic 
design gives the designer the flexibility to work on the CMOS portion of a circuit to improve its 
overall performance. The different modules give us the opportunity to create new applications based 
on the requirements, and by optimizing the CMOS area of the different modules, more efficient 
designs can be found [19] [23]. However, shrinking the modules brings obstacles that can negatively 
affect overall circuit performance, so without accepting that negative impact, a designer may work 
on keeping the size and number of transistors as small as possible. Moreover, a slight improvement 
in power dissipation, delay or PDP can have a large impact on overall performance, and this can be 
one of the main concerns for future work. Most conventional adders showed lower power 
consumption at low voltage and higher power consumption at high voltage, but our proposed model 
overcomes that obstacle and shows low power consumption at every input voltage. As different 
applications can be generated using these modules, designers should take a careful look at the power 
consumption at different input voltages. Another important concern in circuit design is delay: 
reducing delay at low input voltage affects the speed of the overall circuit, so delay is another area 
where designers can work in the future. 

VII. Conclusion 

The hybrid-CMOS design style has become popular because it gives the designer more freedom to 
work on the performance of a single CMOS block within the overall circuit. Based on the application, 
designers can choose the required modules as well as the most efficient circuit from each module for 
the implementation. By optimizing the transistor sizes of the modules, it is possible to reduce the 
delay of all circuits without significantly increasing the power consumption, and transistor sizes can 
be set to achieve minimum PDP. Using the adder categorization and the hybrid-CMOS design style, 
many full adders can be conceived. As an example, a novel full adder designed in the hybrid-CMOS 
design style is presented in this paper and evaluated for low power dissipation and delay. The 
proposed hybrid-CMOS full adder performs better than most of the conventional full-adder cells, 
owing to the novel design modules proposed in this paper. It performs well with supply-voltage 
scaling and under different load conditions. We recommend the use of the hybrid-CMOS design 
style for the design of high-performance circuits. 

References 

[1]. H. T. Bui, Y. Wang and Y. Jiang, "Design and analysis of low-power 10-transistor full adders using XOR-XNOR gates," IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol. 49, no. 1, pp. 25-30, Jan. 2002.
[2]. V. Tiwari, D. Singh, S. Rajogopal, G. Mehta, R. Patel and F. Baez, "Reducing power in high-performance microprocessors," in Proc. Conf. Des. Autom., 1998, pp. 732-737.
[3]. R. Zimmermann and W. Fichtner, "Low-power logic styles: CMOS versus pass-transistor logic," IEEE J. Solid-State Circuits, vol. 32, no. 7, pp. 1079-1090, July 1997.
[4]. N. Zhuang and H. Wu, "A new design of the CMOS full adder," IEEE J. Solid-State Circuits, vol. 27, no. 5, pp. 840-844, May 1992.
[5]. N. Weste and K. Eshraghian, Principles of CMOS VLSI Design: A Systems Perspective, Reading, MA: Addison-Wesley, 1993.
[6]. A. M. Shams, T. K. Darwish and M. A. Bayoumi, "Performance analysis of low-power 1-bit CMOS full adder cells," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 10, no. 1, pp. 20-29, Feb. 2002.
[7]. Jyh-Ming Wang, Sung-Chuan Fang and Wu-Shiung Feng, "New efficient designs for XOR and XNOR functions on the transistor level," IEEE J. Solid-State Circuits, vol. 29, no. 7, July 1994.
[8]. S. Goel, A. Kumar and M. A. Bayoumi, "Design of robust, energy-efficient full adders for deep-submicrometer design using hybrid CMOS logic style."
[9]. J. Wang, S. Fang and W. Feng, "New efficient designs for XOR and XNOR functions on the transistor level," IEEE J. Solid-State Circuits, vol. 29, no. 7, pp. 780-786, Jul. 1994.
[10]. M. Sayed and W. Badway, "Performance analysis of single-bit full adder cells using 0.18, 0.25 and 0.35 um CMOS technologies," in Proc. Int. Symp. Circuits Syst., 2002, pp. III-559-III-562.
[11]. S. Goel, S. Gollamudi, A. Kumar and M. Bayoumi, "On the design of low-energy hybrid CMOS 1-bit full-adder cells," in Proc. Midwest Symp. Circuits Syst., 2004, pp. II-209-212.
[12]. H. A. Mahmoud and M. Bayoumi, "A 10-transistor low-power high-speed full adder cell," in Proc. Int. Symp. Circuits Syst., 1999, pp. I-43-46.
[13]. A. Fayed and M. A. Bayoumi, "A low-power 10-transistor full adder cell for embedded architectures," in Proc. IEEE Int. Symp. Circuits Syst., 2001, pp. IV-226-229.
[14]. D. Radhakrishnan, "Low-voltage low-power CMOS full adder," IEE Proc. Circuits Devices Syst., vol. 148, no. 1, pp. 19-24, Feb. 2001.
[15]. M. Zhang, J. Gu and C. H. Chang, "A novel hybrid pass logic with static CMOS output drive full-adder cell," in Proc. IEEE Int. Symp. Circuits Syst., May 2003, pp. 317-320.
[16]. C.-H. Chang, J. Gu and M. Zhang, "A review of 0.18-um full adder performances for tree structured arithmetic circuits," IEEE Trans. Very Large Scale Integr. (VLSI) Syst., vol. 13, no. 6, pp. 686-695, Jun. 2005.
[17]. H. Lee and G. E. Sobelman, "New low-voltage circuits for XOR and XNOR," in Proc. IEEE Southeastcon, 1997, pp. 225-229.
[18]. H. T. Bui, A. K. Al-Sheraidah and Y. Wang, "New 4-transistor XOR and XNOR designs," in Proc. 2nd IEEE Asia Pacific Conf. ASICs, 2000, pp. 25-28.
[19]. H. Kaeslin, Digital Integrated Circuit Design: From VLSI Architectures to CMOS Fabrication, Cambridge University Press, New York, 2008.
[20]. S. Goel, M. Elgamel, M. A. Bayoumi and Y. Hanafy, "Design methodologies for high-performance noise-tolerant XOR-XNOR circuits," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 53, no. 4, pp. 867-878, Apr. 2006.
[21]. S. Wariya, H. Pandey, R. K. Nagaria and S. Tiwari, "Ultra low voltage high speed 1-bit CMOS adder," IEEE Trans. Very Large Scale Integr., 2010.
[22]. S. S. Mishra, S. Wariya, R. K. Nagaria and S. Tiwari, "New design methodologies for high speed low power XOR-XNOR circuits," World Academy of Science, Engineering and Technology (WASET), vol. 55, no. 35, pp. 200-206, July 2009.
[23]. M. Morris Mano, Digital Design, 3rd ed., Prentice Hall, August 2001.
[24]. R. Pedram and M. Pedram, Low Power Design Methodologies, Kluwer, Norwell, MA, 1996.
[25]. K. Navi, O. Kaehi, M. Rouholamini, A. Sahafi, S. Mehrabi and N. Dadkhahi, "Low power and high performance 1-bit CMOS full adder for nanometer design," in Proc. IEEE Computer Society Annual Symp. VLSI (ISVLSI), Montpellier, France, 2008, pp. 10-15.
[26]. M. Vesterbacka, "A 14-transistor CMOS full adder with full voltage-swing nodes," in Proc. IEEE Workshop on Signal Processing Systems, October 1999, pp. 713-722.



Authors 

Mohammad Shamim Imtiaz was born in Dhaka, Bangladesh in 1987. He received his Bachelor degree in Electrical and Electronic Engineering from Ahsanullah University of Science and Technology, Dhaka, Bangladesh in 2009. He is working as a Part-Time Lecturer at the same university from which he received his Bachelor degree, and is currently focusing on entering an M.Sc. program. His research interests include Digital System Analysis and Design, Digital Signal Processing, Digital Communication & Signal Processing for data transmission and storage, and Wireless Communication.



Md Abdul Aziz Suzon received his B.Sc. degree in Electrical and Electronic Engineering from Ahsanullah University of Science and Technology, Dhaka, Bangladesh in 2011. He is working as a Part-Time Lecturer at Ahsanullah University of Science and Technology, and is currently focusing on entering an M.Sc. program. His research interests include digital circuit design, VLSI design, renewable and sustainable energy, and digital communication.



Mahmudur Rahman was born in Dhaka, Bangladesh in 1989. He received his Bachelor degree in Electrical and Electronic Engineering from Ahsanullah University of Science and Technology, Dhaka, Bangladesh in 2011. His research interests include digital circuit design, VLSI design, alternative and renewable energy, wireless communication, and microcontroller-based inverters. He is currently focusing on entering a Masters program.







Exam Online: E-Enabling Extended Learning, 
Answer and Essay Examinations 

Abdulghader. A. Ahmed, Dalbir S., Ibrahim M. 

School of Computer Science, Faculty of Information Science and Technology 

University Kebangsaan Malaysia 



Abstract 

This study reviews the factors that determine increased motivation in online courses. Developments in information and communication technologies (ICT) have led to major changes in the learning-teaching environment. Teachers' enthusiasm, and the warmth and friendliness of instructors towards students, are among the most important factors for motivation in an online course. Students' reflections on flexibility are also central: the independence and freedom of learning can create motivation in an online learning environment. Relevance of course materials, well-planned and organized class sessions, students' active involvement in classroom learning, use of various instructional techniques and illustration with clear examples all motivate students. Communication and collaboration between students are likewise important factors, as they determine how conducive the online learning environment is, together with adaptation to the technical infrastructure, the conduct of the course and measurement and evaluation during online course studies.

Keywords: E-learning, Exam Online, Motivation, Online community

I. Introduction 

The integration of information and communication technologies (ICT) and the Internet has contributed immensely to educational change, enabling flexible, open and more electronically distributed, learner-controlled forms of learning (Bossu, Smyth & Stein, 2007). Their widespread and rapidly growing significance can transform the educational sector and influence academic performance. E-learning has created new learning/teaching environments with pedagogical, technological and organizational components, and successful implementation requires balancing these three components (Jochems, Merrienboer & Koper, 2004; Garrison & Anderson, 2003). Unique strategies for integrating student populations differentiate online learning across institutions (Hiltz 1993; Aliva et al. 1997) and national boundaries (Jarvenpaa & Leidner 1999; Yoo et al. 2002). Motivation among students to pursue their respective career goals is a main component of the learning environment. Motivation can be intrinsic or extrinsic, and both forms are very important for students' engagement in learning experiences. Intrinsic motivation refers to an individual's supportive interest, self-requirement, self-determination, self-regulation and autonomy of learning, while extrinsic motivation comprises the external factors that stimulate learners, such as the behaviour of teachers, learning topics, learning-teaching strategies, the teaching-learning process, and interaction among students and teachers. Reports on motivational perspectives for understanding behaviour predict the acceptance of technology: intrinsic and extrinsic motivation have been found to be key drivers of behavioural intention (Vallerand 1997; Venkatesh 1999). Woldkowski defined intrinsic motivation as an evocation, an energy called forth by circumstances that connect with what is




culturally significant to the person. Intrinsic motivation is built into learning theories and is used as a constructive measure of user perceptions of technologies (Woldkowski 1993 & Venkatesh 2003). Extrinsic motivation encourages students to commit themselves to instructional goals and increases students' achievement, earning them a reasonable grade or degree. Motivation is a variable that affects students' learning. Students in a virtual learning environment need external motivation to stimulate and support their participation in that environment. Deci and Ryan (1985) defined extrinsic motivation as the performing of behaviour to achieve a specific reward. From the student's perspective, extrinsic motivation for learning may include, but is not limited to, higher grades in exams, awards and prizes. Extrinsic motivation can thus be seen as a factor that influences learning and partly determines a student's grade.

Rovai (2001) reported the need for learning communities and described four essential elements of classroom community: spirit, trust, interaction and learning. He stressed that spirit implies the creation of a group identity coupled with the feeling of belonging to a specific group. Trust, he added, is established when group members give honest feedback to others and expect to receive similar feedback. An abundance of research points to the importance of participant interaction in online learning (Arbaugh 2004; Brower 2003; Shea et al. 2004; Swan 2003). Mutual interaction exists when students benefit from every member of the group; students learn when their group shares valuable ideas among its members. However, spirit and trust can pose definitional and operational challenges, whereas interaction and learning are relatively direct. Participation strategies increase as a learning community recognizes the value of interaction and learning online (William Wresch, J. B. Arbaugh & Michael Rebstock 2005). The nature of participant interaction influences and partly determines the level of success in online environments. In contrast, little attention has been paid to examining the nature of interaction across large samples of participants from different online environments, possibly because of the newness of online learning and of the earlier online settings.

II. E-Learning

While trust is being built, relationships are constrained by the distances that prevent face-to-face meetings and complicated by cultural differences. Kim and Bonk (2002) studied participation variables among students in Finland, South Korea, and the US and concluded that a range of responses can be seen in students with respect to particular participation practices and cultures. The study found that Finnish students were more likely to compose group email responses and more likely to post summaries of comments.

It has been reported that American students participated in email discussions more than their Finnish peers, a result the authors explained by noting that Finns tend to keep silent and not to speak too much, whereas silence is not habitual for most Americans (Livonen, Parma, Sonnewald & Poole-Kober 1998). Another study asserted that the interactive learning style typical of current classroom conferencing software such as Blackboard is most welcomed by peer-oriented learners such as those in the U.S., while Asian students were found to rely heavily on direction from their teachers, even in an online environment (Liang & McQueen 1993). Participation rates for Asian students were influenced by faculty involvement, while American students sought regular involvement with their peers. These studies confirm that participation behaviours vary with culture and peers. A study by Arbaugh et al. (2004) shows that research on participation and interaction in distance education formats usually measures student perceptions of interaction and participation. Students can, however, underestimate their actual level of participation, so such estimates need not be the only source of data for participation studies. Online courses can provide archival records of student and instructor participation during the course period and can track participation by individuals and groups over the course. A study of these trends by Andrusyszyn et al. (2000) shows that participation rates change as students grow more accustomed to the technology and the task assignments.

III. E-Learning Community Culture 

Four essential elements of classroom community were described by Rovai (2000): spirit, trust, interaction, and learning. His observations are supported by the importance of trust relationships described by Jarvenpaa et al. (1998), Maznevski et al. (2000) and Leidner (1999).




It has been suggested that online relationships may not be as effective as face-to-face meetings, although there is some evidence that personal relationships may develop over time (Chidambaram 1996; Desanctis et al. 1999; Jarvenpaa 1999). The development of such relationships is constrained further by deadlines such as the end of a course, and the need for efficient communication may take precedence over more relational communication. The presence of personal relationships among all team members, a fundamental aspect of virtual team effectiveness, therefore seems more difficult to establish in courses whose members meet only online.

E-learning provides a configurable infrastructure that integrates learning material, tools, and services into a single solution to create and deliver training or educational content effectively, quickly, and economically (Zhang, Zhou, Briggs, & Nunamaker 2006). Many studies have compared the effectiveness of online learning with face-to-face learning. Russell (1999) made an inventory of many of these media-comparison studies and concluded that there is no significant difference between the average performance of learners in face-to-face settings and learners exposed to distance learning methods. Ross and Bell (2007) added that this may depend on the level of learning: they found no significant difference in performance at lower levels of abstraction between students in the traditional setting and online students, whereas students in the traditional setting outperformed online students with respect to higher-order learning involving analysis and synthesis of information.

Internet-based learning gives learners the opportunity to choose their time and location of study and allows participants to interact with each other and with a wide range of online resources (Xu & Wang 2006). Based on the nature of the materials and of the interaction with others, online virtual spaces designed for education and training can be oriented either towards knowledge construction or towards group collaboration. Knowledge construction encompasses objectivist and constructivist strategies, while collaboration can be individual or group-based (Benbunan-Fich & Arbaugh 2006). Collaborative activities give learners greater opportunities for increased social presence and a greater sense of online community, with positive online course outcomes (Gunawardena & Zittle 1997).

Combining the knowledge-construction dimension with the presence or absence of group collaboration describes four possible web-based learning environments (transfer-individual, transfer-group, construct-individual and construct-group). In addition, anxiety and uncertainty can be reduced as learners communicate with their colleagues (Hiltz et al. 2002). It can be surmised that participant interaction variables, as well as performance, depend on the nature of the online environment.

IV. Security in E-Learning 

E-learning delivers examinations via a web browser, so it is important to secure the browser to prevent student access to the Internet, the local file system and email. Students entering the E-learning system first download and run a small Windows application. This application disables system keys (e.g., Ctrl-Alt-Del, Alt-Tab), installs a keyboard hook to trap browser hot-keys that could be used to open new browser windows, and launches Internet Explorer in kiosk mode, with no address bar, toolbars or buttons visible or available, at the E-learning login page.

Once these measures are in place, candidates can navigate and exit the browser only through the interface provided by E-learning. A similar strategy is available using commercial secure browsers such as Respondus LockDown Browser (Respondus 2007). Once logged out, candidates are unable to log in again without being given an additional invigilator password; they therefore cannot leave the invigilated environment and re-access the examination.
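As an illustration of the lockdown strategy described above, the sketch below launches the browser in kiosk mode and releases the session only when an invigilator password is supplied. It is a minimal sketch under stated assumptions, not the system's actual implementation: the login URL, the password value and the assumption that iexplore.exe is on the PATH are all invented for illustration, and the system-key and keyboard-hook restrictions mentioned in the text would require additional OS-level code that is omitted here.

    import subprocess
    import getpass

    # Assumed values for illustration only; the real system supplies its own
    # login URL and invigilator credentials.
    LOGIN_URL = "https://exam.example.edu/login"
    INVIGILATOR_PASSWORD = "change-me"

    def run_locked_exam_session():
        # Internet Explorer's "-k" switch starts it in kiosk mode: full screen,
        # with no address bar, toolbars or buttons, as the paper describes.
        browser = subprocess.Popen(["iexplore.exe", "-k", LOGIN_URL])

        # Keep the session captive until an invigilator authorises the exit.
        # (Disabling Ctrl-Alt-Del / Alt-Tab and hooking the keyboard would need
        # extra Windows-specific code not shown in this sketch.)
        while True:
            entered = getpass.getpass("Invigilator password to end session: ")
            if entered == INVIGILATOR_PASSWORD:
                break
            print("Incorrect password; the examination remains locked.")

        browser.terminate()

    if __name__ == "__main__":
        run_locked_exam_session()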

V. Benefits Associated with Online Learning

An effective online learning environment promotes interactivity and collaboration in the learning process. Assessing students' progress in an online environment improves quality and success in Web courses (Hazari et al. 1999). To achieve pedagogical improvements through online learning, instructors should empower themselves with assessment tools that monitor students' progress (Hazari et al. 1999). A learner-centred strategy helps students develop critical-thinking skills and allows instructors to assess students' progress (Odin 1997).




Video serves as a sophisticated medium in e-learning because it can present information in an attractive manner. Studies by Wieling (2010) revealed the effectiveness of instructional video on learning outcomes. However, the instructional video used in early studies was primarily either broadcast through TV programs or distributed on CD-ROM. Recent advances in multimedia and communication technologies have resulted in improved learning systems that use video components for instruction.

The Carnegie Mellon University just-in-time lecture project observed that video-based education and training systems support the same level of teaching and learning effectiveness as face-to-face instruction (Zhang et al., 2006). Online video recordings of lectures allow students to view lectures they have missed or to re-view difficult lectures to improve understanding. Chiu, Lee, and Yang (2006) investigated the viewing behaviour of students in a Chinese grammar course when online post-class lecture videos were made available. They divided students into two groups based on their viewing activity (top 50% and bottom 50%) and, after correcting for GPA, found no difference in course grades between the two groups. Additionally, they found that students preferred recordings of their own lectures over lectures of a parallel group.

Ross and Bell (2007), on the other hand, compared the performance of students in a quality-management course who had access to both the face-to-face lectures and the online lecture video recordings with that of students who only had access to the online recordings. Using a regression analysis, they found that for the first group, with access to the face-to-face lectures, the course score was predicted positively by GPA, negatively by age, positively by homework performance and negatively by the number of lectures viewed online. For students who did not have access to the face-to-face lectures, the course score was predicted positively by GPA, negatively by age, positively by homework performance and positively by the number of lectures viewed online.

Perceived learning outcome is the result observed in connection with the use of learning tools; it was measured in terms of performance improvement, grade benefit and meeting learning needs. Previous studies show that perceived learning outcomes and satisfaction are related to changes in the traditional instructor's role in an online learning environment. Recent advances in computer networking technologies and the World Wide Web break the physical and temporal barriers of access to education: the online learning environment frees students from the constraints of time and place and can be made universally available. As online courses spread through educational institutions, assessing students' learning in an online environment is one of the challenges faced by educators.

Exam Online is currently being improved on the basis of the two live pilots. Useful directions for future work include differentiated mark schemes for individual questions, integrated into the marking interface, and offline marking on personal computers and laptops with later synchronisation to the main system. Other useful modifications include integration with back-end systems for outputting results, and integration with a free-text computerised marking system to provide automatic marking of short-answer questions, as in Intelligent Assessment Technologies (2007). Support for drawing diagrams when answering questions, potentially on-screen (Thomas 2004), with options for hand-written, paper-based submission of calculation steps, and the addition of simple question-and-answer measures to the marking process to enhance accessibility for sight-impaired students, are further areas requiring modification.

VI. Limitations 

The flexibility of asynchronous distance education is valued because students and lecturers need not be online at the same moment; this flexibility is particularly advantageous in an international context, where time zones necessarily spread out students' responses. Research examining the time intervals between discussion responses could be helpful in this context. Studies by Liang et al. (1999) described cultural differences in participation patterns. To account for these cultural differences, models of online learning effectiveness should be developed that consider course software, learning theories, course content, and participant characteristics as well as cultural or institutional characteristics (Hiltz & Arbaugh 2003).




Difficulties in establishing trust relationships online, together with the culturally variable components of participation behaviour, constrain the initiation of international online courses. Online programs nevertheless provide additional international learning opportunities to their students. Macfayden & Hawkes (2002) tracked six online international education projects and found general satisfaction with the efforts. Troutman (1991) reported that students who feel secure in their own personal use of computers also feel positive toward the use of computers in schools.

Furst et al. (2004) highlighted challenges such as maintaining personal relationships: adding new members restarts the team-development process, which can disrupt the effort the original team members have expended in developing a team identity and resolving conflicts early in their development. A number of studies of online learning report that participation in online courses declines as the course progresses (Hiltz & Wellman 1997; Berger 1999; Arbaugh 2000), and active participation throughout the program period requires extensive effort. In addition, it has been pointed out that an increase in class size makes it more difficult to develop a sense of online community.

While most studies conducted at American institutions show a strong relationship between learner-instructor and learner-learner interaction and online learning outcomes (Arbaugh 2005), the perceptions and expectations of German students suggest that the role of participant interaction may not be as strong in German institutions, indicating a particular need for multi-national studies of the relationship between participant interaction and learning outcomes in online courses (Arbaugh & Hiltz 2003).

Instructors are often challenged to design online discussions and assignments that encourage students to evaluate and assimilate information and to make comparisons and connections (Odin 1997). An assessment tool that monitors students' progress enhances the learning process, and assessment should be continuous in an online learning environment. It has been asserted that an assessment tool must draw both the instructor and the students into the assessment procedures (Prime 1998). Miller et al. (1998) added that for assessment to be useful as part of a learning process, it must be visible and related to the learning goals, with grades or marks assigned to the data collected to measure progress. Educational material delivered through online learning has challenged the effectiveness of the traditional educational approach in universities and other educational institutions, and these institutions are consequently struggling to restructure their strategies for providing education and delivering knowledge. There are great expectations surrounding the development and use of online courses owing to their versatility, flexibility and personalization potential. A strong, supportive program office responsible for student advising, faculty support, administrative and financial support, technical support and orientation of new students, together with a comprehensive guide, is essential for an online learning environment. Online students should have access to the learning resources available to on-campus students and must also be able to obtain course materials from either their university's online bookstore or from Internet booksellers.

VII. Conclusions 

E-learning electronically supports the learning and teaching process through a computer network that enables the transfer of skills and knowledge. An e-learning system improves learners' knowledge by providing online access to integrated information, advice and learning experiences. The system described here has been developed to deliver lectures and summative essay-style examinations in an appropriate setting. It supports existing examination processes by providing a better and more comprehensive examination experience for an increasingly digital cohort and supports an efficient blind-marking process. Initial pilots confirmed that the system provides an effective and efficient means of deploying traditional essay-style examinations on-screen and that it improves in many ways upon the existing paper-based process. The system is expected to undergo further development and roll-out, since its complexity varies with tradition and culture.

Acknowledgements 

The research was funded by The General People's Committee for Higher Education, Bani Waleed University, Libya.




References 

[1] Andrusyszyn, M., Moen, A., Iwasiw, C., Ostbye, T., Davie, L., Stovring, T., et al. (2000). Evaluation of electronic collaborative international graduate nursing education: The Canada-Norway experience. Journal of Distance Education, 15(1), 1-15.
[2] Arbaugh, J. B. (2004). Learning to learn online: A study of perceptual changes between multiple online course experiences. The Internet and Higher Education, 7(3), 169-181.
[3] Arbaugh, J. B. (2005). Is there an optimal design for online MBA courses? Academy of Management Learning and Education, 4(2).
[4] Arkkelin, D. (2003). Putting Prometheus' feet to the fire: Student evaluations of Prometheus in relation to their attitudes towards and experience with computers, computer self-efficacy and preferred learning.
[5] Boomsma, A. (1987). Structural equation modeling by example: Applications in educational, sociological, and behavioural research (pp. 160-188). Cambridge, England: Cambridge University Press.
[6] Clark, R. C., & Mayer, R. E. (2008). E-learning and the science of instruction. San Francisco: Pfeiffer.
[7] Cox, M. J., & Marshall, G. (2007). Effect of ICT: Do we know what we should know? Education and Information Technologies, 12(2), 59-70.
[8] Daley, B. J., Watkins, K., Williams, S. W., Courtenay, B., Davis, M., & Mike (2001). Exploring learning in a technology-enhanced environment. Educational Technology and Society, 4(3).
[9] Davies, R. S. (2003). Learner intent and online courses. The Journal of Interactive Online Learning, 2(1).
[10] Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 22, 1111-1132.
[11] Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior. New York: Plenum.
[12] Xu, D., & Wang, H. (2006). Intelligent agent supported personalization for virtual learning environments. Decision Support Systems, 42, 825-843.
[13] Ding, N. (2009). Computer-Supported Collaborative Learning & Gender. PhD dissertation, University of Groningen.
[14] Diseth, A. (2001). Validation of a Norwegian version of the Approaches and Study Skills Inventory for Students (ASSIST): Application of structural equation modelling. Scandinavian Journal of Educational Research, 45(4), 381-394.
[15] Fink, L. D. (2003). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco: Jossey-Bass.
[16] Furst, S. A., Reeves, M., Rosen, B., & Blackburn, R. S. (2004). Managing the life cycle of virtual teams. Academy of Management Executive, 18, 6-20.
[17] Felder, R. M., & Silverman, L. K. (1988). Learning and teaching styles in engineering education. Engineering Education, 78(7), 674-681.
[18] Felder, R. M., & Soloman, B. A. (1991). Index of Learning Styles.
[19] Felder, R. M., & Spurlin, J. (2005). Applications, reliability and validity of the Index of Learning Styles. International Journal of Engineering Education, 21(1), 103-112.
[20] Garrison, D. R., & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. London: RoutledgeFalmer.
[21] Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81.
[22] Hartnett, J. (1999). Interacting with interactions. Inside Technology Training (July/August); http://www.ittrain.com/learning online/7-8-99-learning-nuts bolts.htm. Retrieved September 3, 1999.
[23] Hazari, S., & Schnorr, D. (1999). THE Journal, 26, p. 11 (June); http://www.thejournal.com/magaine/current/feat01.html. Retrieved July 22, 1999.
[24] Hill, R. B. (1997). The design of an instrument to assess problem solving activities. Journal of Technology Education, 9, p. 1; http://borg.lib.vt.edu/ejournals/J E/jte-v9nl/hill.html
[25] Hiltz, S. R. (1993). Correlates of learning in a virtual classroom. International Journal of Man-Machine Studies, 39, 71-98.
[26] Hodgson & Watland, P. (2004). Researching networked management learning. Management Learning, 35, 99-116.
[27] Irani (1998). Communication potential, information richness and attitude: A study of computer mediated communication in the ALN classroom. ALN Magazine, 2.
[28] Jarvenpaa, S. L., & Leidner, A. E. (1999). Communication and trust in global virtual teams. Organization Science, 10, 791-815.
[29] Jarvenpaa, Knoll, K., & Leidner, D. E. (1998). Is anybody out there? Antecedents of trust in global virtual teams. Journal of Management Information Systems, 14(4), 29-64.
[30] Kim, K. J., & Bonk, C. J. (2002). Cross-cultural comparisons of online collaboration. Journal of Computer-Mediated Communication, 8, 1-32.
[31] Jochems, W., Merrienboer, J. V., & Koper, R. (2004). Integrated e-learning: Implications for pedagogy, technology and organization. New York: RoutledgeFalmer.
[32] Kaufman, D. M. (2002). Teaching and learning in higher education: Current trends. Retrieved from http://www.sfu.ca/lidc/research/kaufman/LifelongLearning.html
[33] MacFayden, L., & Hawkes, B. H. (2002). Report on a survey of current uses of ICTs in Canadian international education activities. Vancouver, BC: University of British Columbia and Canadian Bureau for International Education.
[34] Miller, A. H., Imrie, B. W., & Cox, K. (1998). Student assessment in higher education. London: Kogan Page.
[35] Neeley, L., Niemi, J. A., & Ehrhard, B. J. (1998). Classes going the distance so people don't have to: Instructional opportunities for adult learners. THE Journal, 4, 72-73 (November).
[36] Odin, J. L. (1997). ALN: Pedagogical assumptions, instructional strategies, and software solutions. University of Hawaii at Manoa, Honolulu, HI; http://www.hawaii.edu/aln/aln_te.htm. Retrieved September 5.
[37] Benbunan-Fich, R., & Arbaugh, J. B. (2006). Separating the effects of knowledge construction and group collaboration. Information and Management, 33, 778-793.
[38] Shea, P. J., Fredericksen, E. E., Pickett, A. M., & Pelz, W. E. (2004). Faculty development, student satisfaction, and reported learning in the SUNY Learning Network. In T. M. Duffy & J. R. Kirkley (Eds.), Learner-centered theory and practice in distance education: Cases from higher education (pp. 343-377). Mahwah, NJ: Lawrence Erlbaum Associates.
[39] Swan, K. (2003). Learning effectiveness: What the research tells us. In J. Bourne & J. C. Moore (Eds.), Elements of quality online education: Practice and direction (pp. 13-45). Needham, MA: Sloan Consortium.
[40] Hiltz, S. R., & Turoff, M. (2002). What makes learning networks effective? Communications of the ACM, 4, 56-59.

Authors Information 

Abdulghader A. Ahmed completed his undergraduate degree in computer science at 7th October University, Bani Waleed, Libya in 2001. He is a master's candidate in computer science at the Faculty of Computer Science & Information Technology, University Kebangsaan Malaysia (UKM).




Dalbir Singh received the degree in Computer Science from the Universiti Sains 
Malaysia, in 2002. He received the Ph.D. degree in Computer Science from the 
Universiti Malaya in 2009. Currently, he is a senior lecturer at National University of 
Malaysia. His research interests include Human Factors in Information Systems.



Ibrahim Mohamed received the degree in Accounting & Finance from the Liverpool 
JM University, in 1996. He received the Masters degree in Information Technology from 
the National University of Malaysia in 1999. Currently, he is a lecturer at National 
University of Malaysia. His research interests include business data modelling. 






Noise Modeling of SiGe HBT Based on the 

Characterization of Extracted Y- and Z- 

Parameters for HF Applications 

Pradeep Kumar and R.K. Chauhan 
Department of Electronics & Communication Engineering 
M.M.M. Engineering College Gorakhpur-273010, INDIA. 



Abstract 

Over the last several decades, silicon-germanium (SiGe) technology has entered the global electronics marketplace. Commercial SiGe HBTs facilitate transceiver designs and offer transistor-level performance metrics that are competitive with the best III-V technologies (InP or GaAs), while maintaining strict fabrication compatibility with high-yielding, low-cost Si CMOS foundry processes on large wafers. This work presents a complete procedure for modeling the noise characteristics of a high-frequency 0.1 um SiGe HBT based on a direct parameter extraction technique. The modeling and characterization of the noise parameters of a silicon-germanium heterojunction bipolar transistor are examined. First, noise in SiGe heterojunction bipolar transistors is discussed in detail. Then a linear noisy two-port network and its equivalent circuit model are presented for extracting and characterizing the noise parameters, namely the noise resistance (R_n), the optimum source admittance (G_s,opt, B_s,opt) and the minimum noise figure (NF_min), together with their modeling significance. Next, the impact of the Ge concentration on these noise parameters is described. The noise characteristics of SiGe HBTs approach those of III-V semiconductor devices. The validity of the noise modeling scheme and of the extracted noise parameters is corroborated in terms of Y- and Z-parameters. The results have been validated using the commercial numerical device simulator ATLAS from Silvaco International.

Keywords: SiGe HBT, R_n, NF_min, B_s,opt, G_s,opt
I. Introduction 

The multibillion-dollar semiconductor industry increasingly relies on devices operating at frequencies of several GHz and is pushing to demonstrate useful solid-state transistors, and circuits built from them, capable of operating near the THz regime. There are two major driving forces for SiGe solid-state devices: 1) high-frequency communications and radar, and 2) various niche THz applications. Recent research has focused on expanding THz options from two-terminal devices (e.g., Schottky diodes) to three-terminal devices (transistors) for both application areas. In high-frequency communications and radar, higher-bandwidth transistors are desirable in a number of applications. Optical fiber communications require active amplifiers in decision circuits, multiplexers, and phase-lock loops operating at 100 GHz clock frequencies and above. High current-gain and power-gain cutoff frequencies (f_T and f_max) are also demanded in microwave, millimeter-wave, and submillimeter-wave transceiver designs, where progressive improvements in transistor bandwidth enable the evolution of communications and radar ICs operating at ever higher frequencies. One of the key concerns in high-frequency applications is noise behavior; therefore, accurate noise modeling of the SiGe HBT is required [1]. SiGe HBTs were first demonstrated in the late 1980s [2]. They quickly became accepted in wireless communication applications, in the form of wireless transceiver circuits, because of their higher performance than Si bipolar devices and their superior integration level compared with III-V devices [3][4][5]. The low noise capability is one of the chief reasons for the success of the SiGe HBT in wireless, RF and optical applications [6][7][8][9].




In the past few years, various small-signal SiGe HBT models have been developed using numerous parameter extraction methods with the intention of optimizing their frequency response [10]. Because these SiGe devices reach cut-off and maximum oscillation frequencies (f_T, f_max) beyond 500 GHz (half a THz), they are suitable for RF, microwave and optical applications. Moreover, SiGe HBTs are competitive devices at low cost owing to their simple integration with Si technology, in contrast with other (III-V) technologies that offer higher carrier velocities but at higher cost. This is the most important reason why these devices are widely used in the electronics industry [9][11].

Low-frequency electrical noise is a sensitive probe of defects and non-idealities in semiconductor devices, which directly or indirectly affect device performance and reliability. It is therefore of major importance to be able to characterize the noise in semiconductor devices. Interest in low-frequency noise in electronic devices has been motivated by at least two factors. First, theoretical and experimental studies of the noise itself are of major interest: low-frequency noise has a tremendous impact on devices and circuits, since it sets the lower limit of detectable signals and converts to phase noise, thereby reducing the achievable spectral purity in communication systems. Equally important is the information the noise carries about the underlying microscopic physical processes. In electronic devices, noise is caused by the random movement of discrete charge carriers and their interaction with the environment in which they move; hence it carries useful information about that environment, e.g., the interior of a resistor or other semiconductor device [12].

Accurate transistor models that describe the high-frequency noise behavior of the device are of great importance for low-noise circuit design, as is a physics-based equivalent circuit model of the noise behavior of the device. To determine the large number of unknowns of an HBT, including the intrinsic elements and the extrinsic capacitances, an extraction method based on a small-signal π topology is used. Conventional procedures based on simple bias measurements work very well if the extrinsic elements of the HBT have been determined beforehand, through DC measurements, cut-off measurements, or optimization. However, it is often very difficult to determine the values of the parasitic elements of the HBT accurately, since the usual DC and cut-off techniques perform poorly for SiGe HBT devices. To avoid this drawback, a technique has been developed that does not require any additional measurements beyond the scattering (S-) parameters at different biases. Linear models with a π topology have been tested to fit the measured S-parameters properly. The base resistance, which has a significant impact on the high-frequency noise characteristics of the transistor, can then be obtained in a consistent way, as an accurate determination of the outer elements simplifies the equivalent circuit to a conventional model [13].
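Since the extraction starts from measured S-parameters, the Y- and Z-parameters referred to in the title are obtained through the standard two-port conversions. The sketch below is only a minimal illustration of those textbook conversions with numpy; the 50-ohm reference impedance and the example S-matrix are assumptions, not data from the paper.

    import numpy as np

    def s_to_y(S, z0=50.0):
        """Convert a 2x2 S-parameter matrix to Y-parameters:
        Y = (1/z0) * (I - S) (I + S)^(-1) for a real reference impedance z0."""
        I = np.eye(2)
        return (I - S) @ np.linalg.inv(I + S) / z0

    def s_to_z(S, z0=50.0):
        """Convert a 2x2 S-parameter matrix to Z-parameters:
        Z = z0 * (I + S) (I - S)^(-1)."""
        I = np.eye(2)
        return z0 * (I + S) @ np.linalg.inv(I - S)

    # Hypothetical S-parameters of a transistor at one bias and frequency.
    S = np.array([[0.6 - 0.3j, 0.05 + 0.02j],
                  [3.5 + 1.2j, 0.5 - 0.2j]])
    Y = s_to_y(S)
    Z = s_to_z(S)
    print("Y11 =", Y[0, 0], "S")
    print("Z11 =", Z[0, 0], "Ohm")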

In this paper, an accurate noise model of a SiGe HBT is presented by estimating the behavior of its noise parameters. The noise parameters, namely the minimum noise figure (NF_min), the noise resistance (R_n) and the optimum source admittance Y_s,opt, are calculated for a device with 0.1 um base width. The effect of the Ge concentration on these noise parameters is also investigated. Following this motivation, the second section discusses the various low-frequency noise sources in SiGe HBTs. The next section introduces a noise model to extract the noise parameters R_n, G_s,opt, B_s,opt and NF_min for analyzing the performance of the SiGe HBT in the high-frequency regime. The fourth section discusses the simulation results based on ATLAS. Finally, the fifth section concludes with general observations as well as projections of this work.

II. Semiconductor Low-Frequency Noise Sources 

2.1 Thermal Noise 

Inside an ohmic device, the charge carriers at temperature T collide with phonons, which causes Brownian random motion with a kinetic energy proportional to T. This yields open-circuit voltage fluctuations with zero average value and nonzero rms value. This value is given by [12],



$\overline{v_n^2} = \dfrac{4\,h f B R}{e^{hf/kT} - 1}$    (1)

where v_n is the rms value in volts, h = 6.63 x 10^-34 J s is Planck's constant, k = 1.38 x 10^-23 J K^-1 is Boltzmann's constant, B is the bandwidth of the system in Hz, f is the center frequency of the band in Hz and R is the resistance in ohms. Considering only the first two terms of a series expansion of the exponential, e^{hf/kT} - 1 ≈ hf/kT, and converting to the voltage spectral density S_v = v_n^2/B, we get [12],

S v = 4kTR (2) 

Hence, thermal noise is white noise, i.e., noise with a frequency-independent spectrum for frequencies up to the validity of the approximation, f < kT/h ≈ 6250 GHz at 300 K, or f ≈ 1/(2πRC), or f ≈ 1/τ_coll ≈ 10^12 Hz. Here C is the parasitic capacitance in parallel with R and τ_coll is the mean time between collisions of the free charge carriers. Thermal noise is also known as Nyquist noise or Johnson noise, and it is usually the white noise floor observed at high frequencies in MOSFETs and resistors [12].

2.2 Shot Noise 

Shot noise arises from the corpuscular nature of charge transport. Walter Schottky discovered shot noise in radio tubes in 1918 and developed what has become known as Schottky's theorem. Under steady-state conditions the time-averaged current is constant, while the arrival times of the electrons in a tube are not equally spaced, because the electrons leave the cathode at random times. This leads to fluctuations in the measured current that can be described by simple Poisson statistics. A DC current must be present, otherwise there is no shot noise and thermal noise dominates. Shot noise can be observed, for example, in Schottky barriers and in PN junctions, where the current results from the random emission of charged particles that are independent and discrete. The short-circuit current spectral density is given by [12],

$S_i = 2qI$    (3)

where q = 1.6 x 10^-19 C and I is the DC current in amperes. In PN junctions the shot noise is white up to a frequency given by the reciprocal of the transit time, i.e., as long as the fluctuations are slower than the rate of recombination. Shot noise is normally the white noise floor observed in bipolar devices, for example HBTs and lasers [12].

2.3 Generation-Recombination Noise 

Generation-recombination (GR) noise is caused by fluctuations in the number of free carriers associated with random transitions of charge carriers between energy states. These random transitions occur mostly between an energy band and a discrete energy level (trap) in the bandgap. For a two-terminal sample with resistance R, the spectral densities are given by [12],

$\dfrac{S_R}{R^2} = \dfrac{S_V}{V^2} = \dfrac{S_N}{N_0^2} = \dfrac{\overline{\Delta N^2}}{N_0^2}\,\dfrac{4\tau_N}{1 + (2\pi f \tau_N)^2}$    (4)

where S_V, S_R and S_N are the spectral densities of the voltage, the resistance and the number of carriers, respectively, N_0 is the average number of free carriers and τ_N is the trapping time. The resulting spectrum is of the Lorentzian type: it is approximately constant below the corner frequency f = 1/(2πτ_N) and rolls off as 1/f^2 at higher frequencies. These noise signatures are found in all device types [12].
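To make the three noise sources concrete, the short sketch below evaluates equations (2)-(4) numerically. It is a minimal illustration only: the resistance, bias current, trap time constant and relative number fluctuation are arbitrary example values, not data from this paper.

    import numpy as np

    k = 1.38e-23   # Boltzmann constant, J/K
    q = 1.6e-19    # elementary charge, C

    def thermal_noise_sv(R, T=300.0):
        """Thermal (Johnson-Nyquist) voltage PSD, eq. (2): S_v = 4kTR [V^2/Hz]."""
        return 4.0 * k * T * R

    def shot_noise_si(I_dc):
        """Shot-noise current PSD, eq. (3): S_i = 2qI [A^2/Hz]."""
        return 2.0 * q * I_dc

    def gr_noise_relative(f, tau_n, dn2_over_n2):
        """Relative GR-noise PSD, eq. (4): Lorentzian spectrum
        S_N/N0^2 = (<dN^2>/N0^2) * 4*tau_N / (1 + (2*pi*f*tau_N)^2)."""
        return dn2_over_n2 * 4.0 * tau_n / (1.0 + (2.0 * np.pi * f * tau_n) ** 2)

    # Illustrative numbers only (assumed):
    print(thermal_noise_sv(R=50.0))             # ~8.3e-19 V^2/Hz for 50 ohm at 300 K
    print(shot_noise_si(I_dc=1e-3))             # ~3.2e-22 A^2/Hz for 1 mA
    print(gr_noise_relative(f=1e3, tau_n=1e-5, dn2_over_n2=1e-9))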

III. Noise Modeling 

Analytical expressions for R_n, NF_min, B_s,opt and G_s,opt are advantageous for gaining additional intuitive insight into device optimization for noise. They can be obtained from analytical Y-parameter equations. For this purpose a linear noisy two-port network is shown in Figure 1 [10]. To make such analytical expressions practical, accuracy must be balanced against simplicity of




functional form. The power spectral densities of the input noise current (S_in), the input noise voltage (S_vn) and their cross-correlation (S_invn*) are given by [6],



Figure 1: A linear noisy two-port network.

$S_{v_n} = \dfrac{\overline{v_{na} v_{na}^{*}}}{\Delta f} = \dfrac{S_{i_c}}{|y_{21}|^{2}} = \dfrac{2qI_C}{|y_{21}|^{2}}$    (5)

$S_{i_n} = \dfrac{\overline{i_{na} i_{na}^{*}}}{\Delta f} = S_{i_b} + \left|\dfrac{y_{11}}{y_{21}}\right|^{2} S_{i_c} = 2qI_B + \left|\dfrac{y_{11}}{y_{21}}\right|^{2} 2qI_C$    (6)

$S_{i_n v_n^{*}} = \dfrac{\overline{i_{na} v_{na}^{*}}}{\Delta f} = \dfrac{y_{11}}{|y_{21}|^{2}}\, S_{i_c} = \dfrac{y_{11}}{|y_{21}|^{2}}\, 2qI_C$    (7)



In the next step, the Y-parameters are stated in terms of fundamental device parameters such as β and g_m. For this purpose Niu's method is followed. The simplified small-signal equivalent circuit is shown in Figure 2 [6]. The base resistance is not important for the input impedance at frequencies smaller than f_T; it can therefore be ignored for simplicity, even though it is significant as a noise voltage generator.



Figure 2: Equivalent circuit for the Y-parameter derivation used in analytical noise modeling.

The Y-parameters can be obtained as [6],

$y_{11} = g_m/\beta + j\omega C_i$    (8)

$y_{12} = -j\omega C_{bc}$    (9)

$y_{21} = g_m - j\omega C_{bc}$    (10)

$y_{22} = j\omega C_{bc}$    (11)

where $g_m = qI_C/kT$ and $C_i = C_{be} + C_{bc}$. C_be consists of the EB diffusion capacitance $g_m\tau$, with τ the transit time, the EB depletion capacitance $C_{te}$ and any other EB parasitic capacitances, so that $C_{be} = C_{te} + g_m\tau$. C_bc is the total CB junction capacitance, and C_i is related to f_T through [6],

$f_T = \dfrac{g_m}{2\pi C_i}$    (12)

The maximum oscillation frequency is expressed as [12],

$f_{max} = \sqrt{\dfrac{f_T}{8\pi C_{bc} r_b}}$    (13)



The noise resistance can be determined as [6],

$R_n = \dfrac{S_{v_n}}{4kT} = r_b + \dfrac{1}{2 g_m}$    (14)

This equation indicates that R_n is directly proportional to the base resistance. R_n also declines with I_C at lower I_C, and then stays constant.

The optimum source admittance can be expressed as [6],

$G_{s,opt} = \sqrt{\dfrac{g_m}{2 R_n \beta} + \dfrac{(\omega C_i)^2}{2 g_m R_n} - \left(\dfrac{\omega C_i}{2 g_m R_n}\right)^2}$    (15)

$B_{s,opt} = -\dfrac{\omega C_i}{2 g_m R_n}$    (16)

In general, the optimum source admittance increases with collector current and frequency. When the diffusion capacitance dominates C_i, B_s,opt becomes independent of I_C, since C_i is then proportional to g_m. The absolute value of B_s,opt increases with frequency.

The minimum noise figure is obtained as [6],

$NF_{min} = 1 + \dfrac{1}{\beta} + \sqrt{\dfrac{2 g_m R_n}{\beta} + \dfrac{2 R_n (\omega C_i)^2}{g_m}\left(1 - \dfrac{1}{2 g_m R_n}\right)}$    (17)

$NF_{min} \approx 1 + \dfrac{1}{\beta} + \sqrt{2 g_m R_n \left(\dfrac{1}{\beta} + \left(\dfrac{f}{f_T}\right)^2\right)}$    (18)

Thus the noise figure NF of a two-port amplifier driven from a source admittance Y_s can be given as [14],

$NF = NF_{min} + \dfrac{R_n}{G_s}\,|Y_s - Y_{s,opt}|^2$    (19)

where Y_s is the source admittance and G_s is the real part of Y_s.
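To show how equations (12) and (14)-(19) fit together, the sketch below computes f_T, R_n, Y_s,opt, NF_min and the noise figure for a mismatched source from a handful of device parameters. It is a minimal numerical illustration of the analytical model: the example values of g_m, β, r_b, C_be, C_bc and the 50-ohm source are assumptions chosen for illustration, not the extracted parameters of the simulated device.

    import numpy as np

    k = 1.38e-23   # Boltzmann constant, J/K
    q = 1.6e-19    # elementary charge, C

    def hbt_noise_parameters(f, gm, beta, rb, Cbe, Cbc):
        """Evaluate eqs. (12), (14)-(16) and (18) of the analytical noise model."""
        Ci = Cbe + Cbc                         # total input capacitance
        fT = gm / (2 * np.pi * Ci)             # eq. (12)
        Rn = rb + 1.0 / (2 * gm)               # eq. (14)
        w = 2 * np.pi * f
        Gopt = np.sqrt(gm / (2 * Rn * beta)
                       + (w * Ci) ** 2 / (2 * gm * Rn)
                       - (w * Ci / (2 * gm * Rn)) ** 2)   # eq. (15)
        Bopt = -w * Ci / (2 * gm * Rn)                    # eq. (16)
        NFmin = 1 + 1 / beta + np.sqrt(
            2 * gm * Rn * (1 / beta + (f / fT) ** 2))     # eq. (18)
        return fT, Rn, Gopt + 1j * Bopt, 10 * np.log10(NFmin)

    def noise_figure(NFmin_dB, Rn, Ys, Ysopt):
        """Noise figure for an arbitrary source admittance Ys, eq. (19)."""
        NFmin = 10 ** (NFmin_dB / 10)
        NF = NFmin + Rn / Ys.real * abs(Ys - Ysopt) ** 2
        return 10 * np.log10(NF)

    # Assumed example device parameters (not the paper's extracted values).
    gm, beta, rb = 0.04, 100.0, 10.0        # S, dimensionless, Ohm
    Cbe, Cbc = 40e-15, 5e-15                # F
    fT, Rn, Ysopt, NFmin_dB = hbt_noise_parameters(10e9, gm, beta, rb, Cbe, Cbc)
    print(f"fT = {fT/1e9:.1f} GHz, Rn = {Rn:.1f} Ohm, NFmin = {NFmin_dB:.2f} dB")
    print("NF with a 50-Ohm source:",
          round(noise_figure(NFmin_dB, Rn, 1/50 + 0j, Ysopt), 2), "dB")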

IV. Simulation Results & Discussion 

Based on the above physics-based model, the values of the various noise parameters are calculated for an n-p-n SiGe HBT (Figure 3) and investigated for several Ge concentrations. The simulation is carried out using ATLAS from SILVACO International. The average Ge concentration in the base region considered in our calculations is varied from 8% to 25%; higher concentrations are not supported by present epitaxial technologies, and beyond this range the improvement associated with Ge ceases, possibly because of the lattice-constant mismatch [15]. This paper is the next step from our previous paper on SiGe HBTs for high-frequency applications [15]. In order to obtain excellent agreement between the analytical and simulated characteristics, all the important physical effects, for example impact ionization (II), are appropriately modeled and accounted for in the simulation as well [15].

With the purpose of depicting the complete picture of the noise performance, the variation of the noise parameters with frequency is studied in Figures 4 and 5. Figure 4 describes the dependence of the noise parameters (R_n, NF_min, B_s,opt, G_s,opt) on frequency. Figure 4(a) shows the variation of the minimum noise figure NF_min with frequency; NF_min of the SiGe HBT increases with increasing frequency. This result matches the analytical expression for NF_min in equation (18), which predicts that NF_min increases monotonically with frequency. At 65 GHz the simulated NF_min is only 2.70 dB, which is an admirable result, while at the cut-off frequency its value is calculated to be about 9.82 dB. The variation of the optimum source admittance Y_s,opt with frequency is described in Figures 4(b) and 4(d). Figure 4(b) depicts the behavior of its real part (G_s,opt) with frequency: G_s,opt increases with frequency, and its calculated value at the cut-off frequency is 0.09 mS. Its imaginary part |B_s,opt| is plotted as a function of frequency in Figure 4(d); it also increases monotonically with frequency, and at the cut-off frequency its value is calculated as 0.012 mS. These results likewise match the analytical expressions for G_s,opt and B_s,opt in equations (15) and (16). The negative value of the imaginary part signifies the requirement of an inductor for reactive noise matching.




Figure 3. The cross-section of the simulated SiGe HBT.




Figure 4(c) demonstrates the behavior of the noise resistance over the frequency range. From equation (14) it is clear that R_n is directly proportional to the base resistance (r_b). From Figure 4(c) it is concluded that the noise resistance R_n depends only weakly on frequency; at the cut-off frequency the value of R_n is calculated as 0.2 Ω. This behavior closely follows the analytical predictions. The results shown here are obtained for a maximum oscillation frequency of 16.8 THz and a corresponding cut-off frequency of 13.5 THz [15].




Figure 4. Noise parameters versus frequency for the SiGe HBT: (a) NF_min vs. frequency, (b) G_s,opt vs. frequency, (c) noise resistance vs. frequency, (d) |B_s,opt| vs. frequency.





Figure 5. Noise parameters versus collector current (A) for the SiGe HBT: (a) NF_min vs. collector current, (b) G_s,opt vs. collector current, (c) B_s,opt vs. collector current.

Figure 5 exhibits the variation of the above noise parameters as a function of collector current. The NF_min vs. collector current plot is shown in Figure 5(a); NF_min increases monotonically with the collector current of the SiGe HBT. The admittance parameters G_s,opt and |B_s,opt| also increase with increasing collector current, as shown in Figures 5(b) and 5(c). These plots of the noise parameters versus frequency and versus collector current are produced with the help of the Y- and Z-parameters extracted from ATLAS [16].




Figure 6. Noise parameters versus Ge concentration: (a) effect of Ge concentration on NF_min; (b) effect of Ge concentration on G_s,opt; (c) effect of Ge concentration on B_s,opt.

The effect of germanium concentration on these noise parameters is also investigated in this work. Several observations can be made from figure 6, which reveals the impact of Ge concentration on the above noise parameters (R_n, NF_min, B_s,opt, G_s,opt). The analysis shows that the noise figure NF_min increases with increasing Ge concentration, as in figure 6(a); at a Ge concentration of 0.2, a good NF_min value of 4.52 dB is still achieved. Figures 6(b) and 6(c) display the impact of Ge on the optimum source admittance parameters G_s,opt and |B_s,opt|. From these two figures it is concluded that the optimum source admittance parameters increase with




increasing Ge concentration. The noise parameters calculated at the cut-off frequency are summarized in Table 1 for the noise model of the high-frequency SiGe HBT.



Table 1. Summary of noise parameters of the SiGe HBT at the cut-off frequency

Noise parameter      Value
NF_min (dB)          9.82
R_n/50 (Ω)           0.2
G_s,opt (mS)         0.09
|B_s,opt| (mS)       0.012



We now discuss some practical issues in employing this noise model. The model is useful for observing the noise behavior of transceiver circuits in mobile wireless communication links, because such applications demand highly sensitive circuits; the dynamic range and sensitivity of a high-frequency wireless link depend on the HF noise of the transistors used in the low noise amplifiers [17]. Further, the model can help estimate the noise performance of millimeter-wave-band mass-market applications, for instance wireless HDMI and USB, multi-GHz WLAN and automotive radars [18]. In addition, the proposed model can be useful for approximating the noise characterization of 160 Gb/s fiber-optic transmission and MMW imaging [19]. Overall, these noise parameters are extremely valuable for designing small-signal RF amplifiers with high power gain, stable operation and a low noise level over a wide frequency range.
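As a concrete illustration of how these four parameters are used in low noise amplifier design, the short sketch below evaluates the noise figure for an arbitrary source admittance from the standard two-port relation F = F_min + (R_n/G_s)|Y_s − Y_s,opt|². This relation is textbook two-port noise theory rather than the paper's own equation (14), and the numbers are simply a reading of Table 1 (R_n taken as 0.2 × 50 Ω, and B_s,opt taken negative as the text indicates).

```python
import math

def noise_figure_db(nf_min_db, r_n, y_opt, y_s):
    """Noise figure (dB) of the two-port for a given source admittance y_s (siemens)."""
    f_min = 10.0 ** (nf_min_db / 10.0)                       # dB -> linear ratio
    f = f_min + (r_n / y_s.real) * abs(y_s - y_opt) ** 2     # standard two-port relation
    return 10.0 * math.log10(f)

# Noise parameters as summarised in Table 1 at the cut-off frequency (assumed reading):
nf_min_db = 9.82                   # minimum noise figure (dB)
r_n = 0.2 * 50.0                   # R_n/50 = 0.2  ->  R_n = 10 ohm
y_opt = 0.09e-3 - 1j * 0.012e-3    # G_s,opt - j|B_s,opt| in S (negative B per the text)

print(noise_figure_db(nf_min_db, r_n, y_opt, 1.0 / 50.0 + 0j))  # plain 50-ohm source
print(noise_figure_db(nf_min_db, r_n, y_opt, y_opt))            # matched source: returns NF_min
```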

V. Conclusion 

In this work, a physics-based model for low-frequency noise in SiGe HBTs and its impact on circuits has been discussed. A comprehensive analysis has been carried out and the noise parameters based on the equivalent noise model have been extracted. On the basis of the above noise analysis it is concluded that NF_min increases with frequency; an excellent simulated NF_min of 2.70 dB at 65 GHz is achieved, while the noise resistance R_n depends only weakly on frequency. On the other hand, G_s,opt and |B_s,opt| increase with frequency and with collector current. An analysis is also presented which shows that the noise figure NF_min as well as the optimum source admittance parameters G_s,opt and |B_s,opt| of the SiGe HBT increase with the Ge content; at a Ge concentration of 0.2, a good NF_min of 4.52 dB is still attained. This model can be used for building small-signal high-frequency amplifiers, and such a noise model can estimate the noise behavior of multi-GHz WLAN and automotive radars as well as millimeter-wave imaging.

References 

[1] K. Kumar and A. Chakravorty, "Physics based modeling of RF noise in SiGe HBTs", IEEE Proceedings of the International Workshop on Electron Devices and Semiconductor Technology (IEDST'09), pp. 1-4, 2009.
[2] G. L. Patton, D. L. Harame, M. C. J. Stork, B. S. Meyerson, G. J. Scilla and E. Ganin, "SiGe-base, poly-emitter heterojunction bipolar transistors", VLSI Symposium Technical Digest, pp. 35-36, 1989.
[3] Han-Yu Chen, Kun-Ming Chen, Guo-Wei Huang and Chun-Yen Chang, "Small-Signal Modeling of SiGe HBTs Using Direct Parameter-Extraction Method", IEEE Transactions on Electron Devices, vol. 53, no. 9, 2006.
[4] Ankit Kashyap and R. K. Chauhan, "Effect of the Ge profile design on the performance of an n-p-n SiGe HBT-based analog circuit", Microelectronics Journal, MEJ: 2554, 2008.
[5] Pradeep Kumar and R. K. Chauhan, "Electrical parameter characterization of bandgap engineered Silicon Germanium HBT for HF applications", Proceedings of the International Conference on Emerging Trends in Signal Processing and VLSI Design, GNEC Hyderabad, Jun. 11-13, pp. 1157-1163, 2010.
[6] Guofu Niu, "Noise in SiGe HBT RF Technology: Physics, Modeling, and Circuit Implications", Proceedings of the IEEE, vol. 93, no. 9, 2005.
[7] J. D. Cressler, "SiGe HBT technology: a new contender for Si-based RF and microwave circuit applications", IEEE Trans. Microw. Theory Tech., vol. 46, issue 5, p. 572, 1998.
[8] Guofu Niu, Zhenrong Jin, John D. Cressler, Rao Rapeta, Alvin J. Joseph and David Harame, "Transistor Noise in SiGe HBT RF Technology", IEEE Journal of Solid-State Circuits, vol. 36, no. 9, 2001.
[9] A. Pacheco-Sanchez, M. Enciso-Aguilar and L. Rodriguez-Mendez, "Full comparison between analytical results, electrical modeling and measurements for the noise behavior of a SiGe HBT", IEEE Proceedings of ANDESCON-2010, pp. 1-5, 2010.
[10] F. Jun, "Small-signal model parameter extraction for microwave SiGe HBTs based on Y- and Z-parameter characterization", Journal of Semiconductors, vol. 30, no. 8, pp. 1-4, 2009.
[11] N. Zerounian, E. Ramirez-Garcia, F. Aniel, P. Chevallier, B. Geynet and A. Chantre, "SiGe HBT featuring fT 600 GHz at cryogenic temperature", International SiGe & Ge: Materials, Processing, and Device Symposium of the joint international meeting of the 214th meeting of the ECS, 2008.
[12] Jarle Andre Johansen, "Low-frequency Noise Characterization of Silicon-Germanium Resistors and Devices", thesis, University of Tromsø, NO-9037 Tromsø, Norway.
[13] Kenneth H. K. Yau and Sorin P. Voinigescu, "Modeling and extraction of SiGe HBT noise parameters from measured Y-parameters and accounting for noise correlation", SiRF, pp. 226-229, 2006.
[14] Neelanjan Sarmah, Klaus Schmalz and Christoph Scheytt, "Validation of a theoretical model for NFmin estimation of SiGe HBTs", German Microwave Conference, pp. 265-267, 2010.
[15] Pradeep Kumar and R. K. Chauhan, "Device Parameter Optimization of Silicon Germanium HBT for THz Applications", International Journal on Electrical Engineering and Informatics, vol. 2, no. 4, pp. 343-355, 2010.
[16] ATLAS User's Manual, Device Simulation Software, SILVACO International, 2004.
[17] M. S. Selim, "Accurate high-frequency noise modeling in SiGe HBTs", Design Tools/Software, pp. 24-32, 2006.
[18] Y. Tagro, D. Gloria, S. Boret, S. Lepillet and G. Dambrine, "SiGe HBT Noise Parameters Extraction using In-Situ Silicon Integrated Tuner in MMW Range 60-110 GHz", IEEE BCTM 6.1, pp. 83-86, 2008.
[19] P. Sakalas, J. Herricht, M. Ramonas and M. Schroter, "Noise modeling of advanced technology high speed SiGe HBTs", IEEE Proceedings, pp. 169-172, 2010.

Authors Biographies 

Pradeep Kumar was born in Allahabad, India in 1985. He received his B.Tech. degree in 
Electronics & Communication Engineering in 2006. He initially joined VINCENTIT 
Hyderabad in 2006 and thereafter worked as a lecturer in Dr. K.N.M.I.E.T. Modinagar, 
Ghaziabad between 2007 and 2008. He is currently pursuing the M.Tech. degree in Digital 
Systems from Madan Mohan Malviya Engineering College, Gorakhpur, India. His M.Tech. 
thesis is dedicated towards the modeling and device parameter optimization of Silicon- 
Germanium HBT for THz applications. 

R. K. Chauhan was born in Dehradun, India in 1967. He received the B.Tech. degree in Electronics & Communication Engineering from G.B.P.U.A.T., Pantnagar, in 1989, the M.E. in Control & Instrumentation from MNNIT, Allahabad in 1993 and the Ph.D. in Electronics Engineering from IT-BHU, Varanasi, India in 2002. He joined the Department of ECE, Madan Mohan Malviya Engineering College, Gorakhpur, India as a lecturer in 1993, became an Assistant Professor in 2002 and has been an Associate Professor at the same institute since January 2006. He also worked as a Professor in the Department of ECE, Faculty of Technology, Addis Ababa University, Ethiopia between 2003 and 2005. He is a reviewer for Microelectronics Journal, CSP, etc. His research interests include device modeling and simulation of MOS, CMOS and HBT based circuits. He was selected as one of the top 100 Engineers of 2010 by the International Biographical Centre, Cambridge, England.







Dielectric Properties of North Indian Ocean Seawater at 5 GHz

A. S. Joshi 1, S. S. Deshpande 2, M. L. Kurtadikar 3

1 Research Scholar, J.E.S. College, Jalna, Maharashtra, India.
2 Rashtramata Indira Gandhi College, Jalna, Maharashtra, India.
3 P.G. Department of Physics and Research Centre, J.E.S. College, Jalna, Maharashtra, India.



Abstract 

This study presents the dielectric properties of North Indian Ocean seawater. In all, fourteen seawater samples were collected from the Arabian Sea, the Lakshadweep Sea, the tip of the Bay of Bengal, the deep Indian Ocean and the equatorial region. The Von Hippel method is used to measure the dielectric properties, both the real part ε′ and the imaginary part ε″, at 5 GHz and 30 °C using an automated C-band microwave bench setup. The dielectric constant ε′ and dielectric loss ε″ are calculated using a least-squares fitting technique. The salinity of the seawater samples was measured on an autosalinometer. Using the salinity values of all samples, the static dielectric constant and dielectric loss at 5 GHz and 30 °C are estimated by the Klein-Swift model and the Ellison et al. model, and experimental and theoretical results are compared. The study emphasizes latitudinal and longitudinal variations of salinity and dielectric properties. The laboratory data obtained are significant for microwave remote sensing applications in physical oceanography.

KEYWORDS: Seawater Permittivity, Salinity, North Indian Ocean, 5 GHz microwave frequency.

I. Introduction 

The Indian Ocean is the third largest ocean of the world and has a unique geographic setting. The Tropical Indian Ocean (TIO) in particular is significant to oceanographers and meteorologists, as it experiences the seasonally reversing monsoon winds and is landlocked on its northern side. Remote sensing [1-2] of sea surface salinity and sea surface temperature is important in areas such as seawater circulation, climate dynamics, atmosphere modeling and environmental monitoring. For microwave remote sensing applications over the ocean with radar and radiometer, precise values of emissivity and reflectivity are required. The surface emissivity is a function of the complex permittivity of surface seawater. The complex permittivity is composed of two parts: the real part, known as the dielectric constant (ε′), is a measure of the ability of a material to be polarized and store energy, while the imaginary part (ε″) is a measure of the ability of the material to dissipate stored energy into heat. The two are related by the expression

ε* = ε′ − jε″        ... (1)

The dielectric constant in turn is governed by the electrical conductivity and the microwave frequency under consideration, and the conductivity is governed by the salinity and temperature of the seawater [3-4]. Variations in the salinity and temperature of the ocean result in variations in the dielectric properties, and hence in the emissivity, at a particular location. These variations follow a certain pattern with the latitude and longitude of the location, owing to the dynamic features of the ocean.

This work focuses on the measurement of the dielectric properties of seawater samples at 5 GHz and 30 °C. The study emphasizes latitudinal and longitudinal variations in salinity and dielectric properties. Knowing the dielectric constant and dielectric loss, parameters such as emissivity, brightness




temperature and the scattering coefficient can be interpreted, as they are interdependent. Making use of the measured salinity values of all samples, the static dielectric constant and dielectric loss are estimated by the Klein-Swift model [5-6] and the Ellison et al. model [7-8] at 5 GHz and 30 °C. The laboratory data obtained are significant for the interpretation of microwave remote sensing observations and help in designing active and passive microwave remote sensors.

II. Material and Methodology 

2.1. Seawater Sampling 

By participating in the ORV Sagar Kanya scientific cruise SK-259, organized by NCAOR in May-June 2009 (the summer monsoon period), seawater samples were collected from the Arabian Sea, the Lakshadweep Sea, the tip of the Bay of Bengal, the deep Indian Ocean and the equatorial regions of the Tropical Indian Ocean. Surface seawater at each location was drawn with a bucket thermometer, and two bottles of the sample were preserved at around 4 °C by standard procedure. One of the two bottles was used to determine the salinity at that location using an Autosalinometer 8400B in the laboratory onboard the Sagar Kanya, and the other sample from the same location was brought to the Microwave Research Lab, J.E.S. College, Jalna, Maharashtra for dielectric measurement.

2.2. Temperature and Salinity Measurement 

The bucket thermometer is used to measure the temperature of the surface seawater. Salinity measurements of the seawater samples were made using the 8400B AUTOSAL in the laboratory onboard ORV Sagar Kanya. This instrument is semi-portable and semi-automatic and is used in land-based or sea-borne laboratories to determine the salinity of seawater samples, relative to a standard seawater sample, by measuring their equivalent conductivity. The instrument reading is displayed as a conductivity ratio. Feeding the conductivity ratio into the accompanying software, the salinity value of the sample is calculated. The software calculates salinity using the following formula, which is based on the definitions and the algorithm of practical salinity formulated and adopted by the UNESCO/ICES/SCOR/IAPSO Joint Panel on Oceanographic Tables and Standards, Sidney, B.C., Canada, 1980 [9-10].



S = a₀ + a₁R₁₅^(1/2) + a₂R₁₅ + a₃R₁₅^(3/2) + a₄R₁₅² + a₅R₁₅^(5/2) + ΔS        ... (2)

ΔS = [(T − 15) / (1 + 0.0162(T − 15))] × (b₀ + b₁R₁₅^(1/2) + b₂R₁₅ + b₃R₁₅^(3/2) + b₄R₁₅² + b₅R₁₅^(5/2))        ... (3)

where Σaᵢ = 35.0000 and Σbᵢ = 0.0000, for 2 < S < 42 and −2 °C < T < 35 °C.

Table 1. Values of the coefficients aᵢ and bᵢ

i      aᵢ          bᵢ
0      0.0080      0.0005
1      -0.1692     -0.0056
2      25.3851     -0.0066
3      14.0941     -0.0375
4      -7.0261     0.0636
5      2.7081      -0.0144
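For clarity, the fragment below sketches how this practical-salinity formula (equations (2)-(3)) can be evaluated with the coefficients of Table 1. The routine and its example conductivity ratio are illustrative only and are not part of the AUTOSAL software; R₁₅ and T are assumed to be known inputs.

```python
# Minimal sketch of equations (2)-(3) with the Table 1 coefficients.
a = [0.0080, -0.1692, 25.3851, 14.0941, -7.0261, 2.7081]
b = [0.0005, -0.0056, -0.0066, -0.0375, 0.0636, -0.0144]

def practical_salinity(R15, T):
    """Practical salinity from conductivity ratio R15 and temperature T (deg C)."""
    poly = lambda c: sum(c[i] * R15 ** (i / 2.0) for i in range(6))  # c0 + c1*R^0.5 + ... + c5*R^2.5
    delta_s = (T - 15.0) / (1.0 + 0.0162 * (T - 15.0)) * poly(b)     # temperature correction term
    return poly(a) + delta_s                                          # valid for 2 < S < 42

# R15 = 1 at T = 15 deg C reproduces sum(a) = 35.0000, the anchor point of the scale.
print(round(practical_salinity(1.0, 15.0), 4))    # -> 35.0
print(round(practical_salinity(0.98, 28.0), 4))   # hypothetical cruise sample
```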



2.3. Measurement of Dielectric Properties 

There are several methods for the dielectric measurement of liquids [11]. In the present work, the dielectric properties of the seawater samples are measured using the Von Hippel method [12], for which an automated C-




band microwave bench, as shown in figure 1, is used. The microwave bench consists of a low-power, tunable, narrow-band VTO-8490 solid-state microwave source with a frequency range of 4.3-5.8 GHz. The tuning voltage is kept at 7 V throughout the experiment, which corresponds to a frequency of 5 GHz. The other components of the bench setup are an isolator, a coaxial-to-waveguide adapter, an attenuator, a sliding screw (SS) tuner, a slotted line and the liquid dielectric cell.































Figure 1. Block diagram of the C-band microwave bench: microwave power supply and source, isolator, coaxial-to-waveguide adapter, attenuator, sliding screw tuner, automated slotted line with detector and personal computer, and liquid dielectric cell.



Microwaves generated by the VTO propagate through the rectangular waveguide to the liquid cell. The desired power level in the line is adjusted with the attenuator. A slotted section with a tunable probe is used to measure the power along the slot line. The crystal detector (1N23) in the probe is connected to a microammeter and to the PC to read, acquire and store the data. The empty liquid dielectric cell is connected at the output end of the bench, and the bench is tuned to obtain a symmetrical standing wave pattern in the slot line. The positions of the minima are noted from the pattern, from which the guide wavelength λg can be calculated. The probe position on the slot line is kept fixed at the first minimum of the standing wave pattern. The liquid dielectric cell is then filled with the sample under consideration. The plunger of the liquid cell is initially set so that the thickness of the liquid column below the plunger is zero; by moving the plunger away from this position, the microwave power is recorded for different plunger positions. The data of plunger positions and the corresponding power are acquired and stored in a file, which is then used to calculate the dielectric constant ε′ and dielectric loss ε″ with the least-squares fit program. The parameters α, β, P₀ and δ are used as the fitting parameters, where α is the attenuation factor, β the propagation constant, P₀ the maximum power and δ the phase factor. The computer program also calculates the error in the dielectric constant, Δε′, and the error in the dielectric loss, Δε″.
The dielectric properties of the seawater samples can be calculated using the relations

ε′ = (λ/λc)² + (λ²/4π²)(β² − α²)        ... (4)

and

ε″ = λ²αβ / (2π²)        ... (5)

where λ is the free-space wavelength, which can be calculated from

1/λ² = 1/λc² + 1/λg²

with λc = 2a = 2 × 4.73 cm = 9.46 cm, 'a' being the broader side of the C-band rectangular waveguide.
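A minimal numerical sketch of this step is given below: it evaluates equations (4) and (5), as reconstructed above, together with the waveguide relation for λ. The fitted α, β and the measured guide wavelength λg used here are hypothetical values, not data from the paper.

```python
# Minimal sketch: recovering eps' and eps'' from the fitted alpha (attenuation
# factor) and beta (propagation constant) of the liquid column, equations (4)-(5).
import math

lam_c = 2 * 4.73          # cut-off wavelength of the C-band guide (cm)
lam_g = 7.5               # guide wavelength from the standing-wave minima (cm, hypothetical)
lam0 = 1.0 / math.sqrt(1.0 / lam_c**2 + 1.0 / lam_g**2)   # free-space wavelength (cm)

alpha, beta = 2.9, 9.0    # hypothetical fitted values (1/cm and rad/cm)

eps_real = (lam0 / lam_c) ** 2 + (lam0**2 / (4 * math.pi**2)) * (beta**2 - alpha**2)
eps_imag = (lam0**2 * alpha * beta) / (2 * math.pi**2)

print(round(eps_real, 2), round(eps_imag, 2))   # order of magnitude of seawater at C band
```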




III. Results and Discussions 

The sea surface temperature of the collected samples is found to be between 27 °C and 30 °C (Table 2). Winds over the North Indian Ocean reverse twice during the year: they blow from the southwest during May-September and from the northeast during November-January, with the transition taking place during the months in between. Forced by these winds, the circulation in the Indian Ocean has a generally eastward direction during summer (May-September) and westward during winter (November-January). During summer, the period when the seawater samples were collected, the monsoon current flows eastward as a continuous current from the western Arabian Sea to the Bay of Bengal [13-14]. These circulations are shown in Figure 2.






Figure 2. Schematic diagram of major surface currents in the TIO during the southwest (summer) monsoon. 
The thickness represents the relative magnitude of the current (adapted from Shenoi et al., 1999a) [15]. 



The Arabian Sea has high salinity (usually in the range 35 to 37) due to an excess of evaporation over rainfall. In Table 2, samples S-01 and S-03 are from the Arabian Sea and have higher salinity values than the other samples.

Table 2. The temperature and salinity values of the seawater samples.

Sample   Latitude    Longitude   Temperature (°C)   Salinity
S-01     N 08° 30'   E 75° 47'   30                 35.0238
S-02     N 08° 06'   E 78° 31'   27                 34.6434
S-03     N 07° 36'   E 76° 18'   30                 35.1564
S-04     N 07° 39'   E 78° 38'   27                 34.9782
S-05     N 06° 49'   E 76° 52'   30                 34.7117
S-06     N 06° 00'   E 79° 09'   27.5               35.0079
S-07     N 05° 11'   E 78° 00'   29                 34.6353
S-08     N 05° 12'   E 79° 39'   28                 34.5746
S-09     N 04° 33'   E 78° 25'   29                 34.7316
S-10     N 04° 25'   E 80° 16'   28                 34.5082
S-11     N 03° 00'   E 81° 20'   27                 34.4115
S-12     N 02° 46'   E 81° 28'   28.5               34.3808
S-13     N 01° 27'   E 82° 24'   28                 34.8350
S-14     N 00° 37'   E 82° 40'   29                 34.9186








In contrast, the Bay of Bengal has much lower salinity due to the large influx of fresh water from river discharge and a high amount of rainfall. Samples S-02 and S-04, although located at similar latitudes to S-01 and S-03 respectively, differ in longitude, and for these Lakshadweep Sea samples, drawn at the mouth of the Bay of Bengal, a decrease in salinity is seen. Samples S-05, S-07 and S-08 are from the deep Indian Ocean, while S-06, S-08 and S-10, although located at similar latitudes, differ in longitude, lie towards the east and are from the border of the Bay of Bengal and the Arabian Sea; the salinity values of these samples are found to be lower than those of the former ones. As we move towards the Equator there is a slight decrease in salinity for samples S-11 and S-12, but near the equatorial region, for S-13 and S-14, a sudden slight increase in salinity is found. This is due to high evaporation in the low-pressure equatorial regions [16].

The dielectric constant ε′, dielectric loss ε″, error in dielectric constant Δε′ and error in dielectric loss Δε″ at 5 GHz and 30 °C, with salinity varying with latitude and longitude in the North Indian Ocean, are given in Table 3. The magnitude of the dielectric constant is found to be about 66, and the dielectric constant is found to decrease with increasing salinity. The dielectric loss values are in the range of 53 to 58.

Table 3. The experimentally measured values of the dielectric constant ε′, dielectric loss ε″, error in dielectric constant Δε′ and error in dielectric loss Δε″ of all seawater samples at 5 GHz.

Sample   Latitude    Longitude   Salinity   ε′        ε″        Δε′      Δε″
S-01     N 08° 30'   E 75° 47'   35.0238    66.5285   53.8442   6.6474   2.3641
S-02     N 08° 06'   E 78° 31'   34.6434    66.7812   53.9340   7.4309   2.6384
S-03     N 07° 36'   E 76° 18'   35.1564    66.5103   56.9364   6.9828   2.5926
S-04     N 07° 39'   E 78° 38'   34.9782    66.5876   53.8652   7.5291   2.6767
S-05     N 06° 49'   E 76° 52'   34.7117    66.7164   53.9111   6.9207   2.4583
S-06     N 06° 00'   E 79° 09'   35.0079    66.5675   56.4844   7.0274   2.5917
S-07     N 05° 11'   E 78° 00'   34.6353    66.8153   57.7529   6.6314   2.4801
S-08     N 05° 12'   E 79° 39'   34.5746    66.8288   53.9509   7.6745   2.7241
S-09     N 04° 33'   E 78° 25'   34.7316    66.7093   53.9086   6.9227   2.9227
S-10     N 04° 25'   E 80° 16'   34.5082    66.8354   53.9645   7.8615   2.7907
S-11     N 03° 00'   E 81° 20'   34.4115    66.9019   58.549    6.4895   2.4498
S-12     N 02° 46'   E 81° 28'   34.3808    66.954    55.9341   6.0384   2.0061
S-13     N 01° 27'   E 82° 24'   34.8350    66.6813   55.7891   7.2306   2.6381
S-14     N 00° 37'   E 82° 40'   34.9186    66.6072   56.7009   7.5652   2.7969





The values in Tables 4 and 5 are calculated using the Klein-Swift and Ellison et al. models respectively. Comparison of the measurement results with these models shows that the real part, the dielectric constant ε′, is in good agreement. However, the experimental loss factor is higher by about 20 compared with the theoretical models. The errors in the measured dielectric constant and loss are of the order of 7 and 2 respectively.
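To make the connection between the tabulated εs, τ and the model permittivities explicit, the sketch below evaluates a Debye-type relation with an added ionic-conductivity term at 5 GHz. Here ε∞ = 4.9 and σ = 5.8 S/m are illustrative assumptions for seawater near this salinity and temperature; the full Klein-Swift and Ellison formulations, which express εs, τ and σ as polynomial fits in salinity and temperature, are not reproduced.

```python
# Minimal sketch: eps* = eps_inf + (eps_s - eps_inf)/(1 + j*w*tau) - j*sigma/(w*eps0),
# evaluated with the eps_s and tau entries of Table 4 (Klein-Swift) for sample S-01.
import math

EPS0 = 8.854e-12                         # vacuum permittivity (F/m)

def debye_seawater(freq_hz, eps_s, tau_s, eps_inf=4.9, sigma=5.8):
    w = 2 * math.pi * freq_hz
    eps = eps_inf + (eps_s - eps_inf) / (1 + 1j * w * tau_s) - 1j * sigma / (w * EPS0)
    return eps.real, -eps.imag           # (eps', eps'') with the convention eps* = eps' - j*eps''

# Sample S-01 entries of Table 4: eps_s = 69.6620, tau = 7.0859 ps
print(debye_seawater(5e9, 69.6620, 7.0859e-12))   # roughly (66.6, 34.6), close to Table 4
```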

Table 4. The calculated relaxation time τ (ps), static dielectric constant εs, dielectric constant ε′ and dielectric loss ε″ using the Klein-Swift model at 5 GHz.

Sample   Latitude    Longitude   Salinity   εs        τ (ps)   ε′        ε″
S-01     N 08° 30'   E 75° 47'   35.0238    69.6620   7.0859   66.6042   34.7258
S-02     N 08° 06'   E 78° 31'   34.6434    69.7340   7.0875   66.6715   34.5412
S-03     N 07° 36'   E 76° 18'   35.1564    69.6369   7.0853   66.5807   34.7901
S-04     N 07° 39'   E 78° 38'   34.9782    69.6707   7.0861   66.6123   34.7037
S-05     N 06° 49'   E 76° 52'   34.7117    69.7211   7.0872   66.6594   34.5743
S-06     N 06° 00'   E 79° 09'   35.0079    69.6650   7.0859   66.6068   34.7187
S-07     N 05° 11'   E 78° 00'   34.6353    69.7355   7.0875   66.6729   34.5372
S-08     N 05° 12'   E 79° 39'   34.5746    69.7469   7.0877   66.6836   34.5077
S-09     N 04° 33'   E 78° 25'   34.7316    69.7173   7.0871   66.6559   34.584
S-10     N 04° 25'   E 80° 16'   34.5082    69.7595   7.0880   66.6953   34.4755
S-11     N 03° 00'   E 81° 20'   34.4115    69.7777   7.0884   66.7123   34.4284
S-12     N 02° 46'   E 81° 28'   34.3808    69.7835   7.0885   66.7177   34.4135
S-13     N 01° 27'   E 82° 24'   34.8350    69.6977   7.0867   66.6376   34.6342
S-14     N 00° 37'   E 82° 40'   34.9186    69.6819   7.0863   66.6228   34.6747


Table 5. The calculated relaxation time τ (ps), static dielectric constant εs, dielectric constant ε′ and dielectric loss ε″ using the Ellison et al. model at 5 GHz.

Sample   Latitude    Longitude   Salinity   εs        τ (ps)   ε′        ε″
S-01     N 08° 30'   E 75° 47'   35.0238    67.9616   7.4153   64.8768   33.8678
S-02     N 08° 06'   E 78° 31'   34.6434    68.0491   7.4230   64.9537   33.7003
S-03     N 07° 36'   E 76° 18'   35.1564    67.9311   7.4126   64.8500   33.9261
S-04     N 07° 39'   E 78° 38'   34.9782    67.9721   7.4162   64.8861   33.8477
S-05     N 06° 49'   E 76° 52'   34.7117    68.0334   7.4216   64.9399   33.7304
S-06     N 06° 00'   E 79° 09'   35.0079    67.9652   7.4156   64.8800   33.8608
S-07     N 05° 11'   E 78° 00'   34.6353    68.0510   7.4232   64.9554   33.6967
S-08     N 05° 12'   E 79° 39'   34.5746    68.0649   7.4244   64.9676   33.6700
S-09     N 04° 33'   E 78° 25'   34.7316    68.0288   7.4212   64.9359   33.7391
S-10     N 04° 25'   E 80° 16'   34.5082    68.0802   7.4258   64.9811   33.6408
S-11     N 03° 00'   E 81° 20'   34.4115    68.1025   7.4278   65.0006   33.5982
S-12     N 02° 46'   E 81° 28'   34.3808    68.1095   7.4284   65.0068   33.5847
S-13     N 01° 27'   E 82° 24'   34.8350    68.0050   7.4191   64.9150   33.7846
S-14     N 00° 37'   E 82° 40'   34.9186    67.9858   7.4174   64.8981   33.8214





ACKNOWLEDGEMENTS 

We are thankful to ISRO for providing the C-Band Microwave Bench Setup under the RESPOND project of Dr. M. L. Kurtadikar. Special thanks to NCAOR, Goa, for allowing participation in the SK-259 cruise of ORV Sagar Kanya for seawater sample collection.

REFERENCES 

[1] Fawwaz T. Ulaby, Richard K. Moore and Adrian K. Fung (1986). Vol. 3, Artech House Inc.
[2] Eugene A. Sharkov (2003). Passive Microwave Remote Sensing of the Earth, Springer, Praxis Publishing, UK.
[3] Smyth, C.P. (1955). Dielectric Behaviour and Structure, McGraw-Hill Book Company Inc., New York.
[4] Hasted, J.B. (1973). Aqueous Dielectrics, Chapman and Hall Ltd, London.
[5] Stogryn, A. (1971). Equation for calculating the dielectric constant of saline water, IEEE Transactions on Microwave Theory and Techniques, vol. 19, pp. 733-736.
[6] Klein, L.A. and Swift, C.T. (1977). An improved model for the dielectric constant of seawater at microwave frequencies, IEEE J. Oceanic Eng., OE-2, pp. 104-111.
[7] Ellison, W., Balana, A., Delbos, G., Lamkaouchi, K., Eymard, L., Guillou, C. and Prigent, C. (1996). Study and measurements of the dielectric properties of sea water, Tech. Rep. 11197/94/NL/CN, European Space Agency.
[8] Ellison, W., Balana, A., Delbos, G., Lamkaouchi, K., Eymard, L., Guillou, C. and Prigent, C. (1998). New permittivity measurements of seawater, Radio Science, Vol. 33, pp. 639-648.
[9] Lewis, E.L. (1978). Salinity: its definition and calculation, J. Geophys. Res., 83:466.
[10] Lewis, E.L. (1980). The practical salinity scale 1978 and its antecedents, IEEE J. Oceanic Eng., OE-5(3), pp. 14.
[11] Udo Kaatze (2010). Techniques for measuring the microwave dielectric properties of materials, IOP Publishing, Metrologia, Vol. 47, pp. 91-113.
[12] Von Hippel, A. (1954). Dielectrics & Waves, Wiley, New York.
[13] Prasanna Kumar, S., Jayu Narvekar, Ajoy Kumar, C. Shaji, P. Anand, P. Sabu, G. Rijomon, J. Josia, K.A. Jayaraj, A. Radhika and K.K. Nair (2004). Intrusion of Bay of Bengal water into Arabian Sea during winter monsoon and associated chemical and biological response, American Geophysical Research, vol. 31, L15304, doi:10.1029/2004GL020247.
[14] Gangadhara Rao, L.V., Shree Ram, P. (April 2005). Upper Ocean Physical Processes in the Tropical Indian Ocean, monograph prepared under CSIR scientist scheme, National Institute of Oceanography Regional Centre, Visakhapatnam, pp. 4-32.
[15] Shenoi, S.S.C., Saji, P.K. and Almeida, A.M. (1999a). Near-surface circulation and kinetic energy in the tropical Indian Ocean derived from Lagrangian drifters, J. Mar. Res., Vol. 57, pp. 885-907.
[16] Shankar, D., Vinayachandran, P.N., Unnikrishnan (2002). The Monsoon Currents in the North Indian Ocean, Progress in Oceanography, 52(1), pp. 63-120.

Authors 

Anand Joshi was born in Aurangabad, India in 1981. He received the B.Sc. degree in Physics, Mathematics and Computer Science and the M.Sc. degree in Physics from Dr. Babasaheb Ambedkar Marathwada University, Aurangabad, Maharashtra, India in 2002 and 2004 respectively. He is currently pursuing a Ph.D. (Physics) degree under the guidance of Dr. M. L. Kurtadikar, Postgraduate Department of Physics and Research Centre, J.E.S. College, Jalna, Maharashtra, India. His research interests include dielectric measurements, microwave remote sensing applications and astrophysics.

Santosh Deshpande was born in Parbhani, India in 1974. He received the M.Sc. degree in Physics from Swami Ramanand Teerth Marathwada University, Nanded, Maharashtra and the M.Phil. degree in Physics from Alagappa University, Tamil Nadu, India in 2000 and 2008 respectively. He is currently working as Assistant Professor of Physics in the RMIG College, Jalna, Maharashtra, India. He is also pursuing a Ph.D. degree under the guidance of Dr. M. L. Kurtadikar, Postgraduate Department of Physics and Research Centre, J.E.S. College, Jalna, Maharashtra, India. His research interests include dielectric measurements, microwave remote sensing applications and astrophysics.

Mukund L. Kurtadikar was born in Nanded, India in 1951. He received the Master of Science (Physics) and Ph.D. (Physics) degrees from Marathwada University, Aurangabad, India in 1973 and 1983 respectively. He is currently working as Associate Professor of Physics in the Postgraduate Department of Physics of J.E.S. College, Jalna, Maharashtra, India. His research interests include microwave remote sensing applications and dielectric measurements of soils, seawater, rocks, snow, vegetation, etc. He also works on photometry of variable stars using a small optical telescope and on scientific exploration of historic monuments. He is a science communicator.








An Efficient Decision Support System for Detection of Glaucoma in Fundus Images using ANFIS

S. Kavitha 1, K. Duraiswamy 2

1 Asst. Prof., Nandha Engineering College, Erode, India.
2 Dean, K.S. Rangasamy College of Technology, Tiruchengode, India.



Abstract 

This paper proposes a computer-aided decision support system for the automated detection of glaucoma in monocular fundus images. Identification of glaucoma using fundus images involves the measurement of the size and shape of the optic cup and neuroretinal rim. Optic cup detection is a challenging task because of the interweaving of the cup with the blood vessels. A new color-model technique based on pallor in fundus images using K-means clustering is proposed to delineate the optic cup to disc boundary. The method differs from earlier work by initial optic cup region detection followed by the erasure of blood vessels. In addition to the shape-based features, textural features are extracted to better characterize the pathological subjects. The optimal set of features selected by a genetic algorithm is fed as input to an adaptive neuro-fuzzy inference system for classification of images into normal, suspect and abnormal categories. The method has been evaluated on 550 images comprising normal and glaucomatous images. The performance of the proposed technique is compared with neural network and SVM classifiers in terms of classification accuracy and convergence time. Experimental results show that the features used are clinically significant for the accurate detection of glaucoma.

KEYWORDS: Optic Cup, Clustering, Glaucoma, Genetic Algorithm, Neuroretinal Rim, ANFIS.

I. Introduction 

Glaucoma is a leading cause of blindness, is asymptomatic in the early stages, and its detection is essential to prevent visual damage [1]. About 2% of the population between 40 and 50 years old, and 8% over 70 years old, have elevated intraocular pressure (IOP) [2], which increases their risk of significant vision loss and blindness. Digital color fundus imaging has emerged as a preferred imaging modality for large-scale eye screening programs due to its non-invasive nature. The less expensive fundus images are used in the proposed work rather than expensive techniques such as Optical Coherence Tomography (OCT) and Heidelberg Retinal Tomography (HRT).

Optic disc detection is an important issue in retinal image analysis, as the disc is a significant landmark feature and its diameter is used as a reference for measuring distances and sizes. The optic disc and cup were located by identifying the area with the highest average variation in intensity among adjacent pixels [3]. Automatic detection of the optic disc has been performed by region-of-interest based segmentation and modified connected component labelling, with a boundary tracing technique applied to detect the exact contour of the optic disc and a quantitative analysis performed on the neuroretinal rim area to assess glaucoma [4]. In another approach, a potential set of pixels belonging to the cup region is first derived based on the reference color obtained from a manually selected point; an ellipse is then fit to this set of pixels to estimate the cup boundary. A variant of this method obtains the cup pixels via thresholding of the green color plane [5]. To handle large inter-image intensity variations that arise due to complex imaging, additional information such as small vessel bends ('kinks') which




anatomically mark the cup boundary have been used in [6]. A deformable model was presented for the detection of the optic disc and cup boundaries; the method improves the snake model and is robust to ill-defined edges [7]. The optic disc has been detected using local image information around each point of interest in a multi-dimensional feature space, and the optic cup detected by making use of the vessel bends at the cup boundary. Bends in a vessel are detected using a region-of-support concept and a multistage strategy, followed by local spline fitting to find the desired cup boundary. The method captures the OD boundary in a unified manner for both normal and challenging cases without imposing any shape constraint on the segmentation result, and the segmentation results show consistency in handling the geometric and photometric variations found across the dataset [8].

A deformable model guided by regional statistics has been used to detect the OD boundary, with a cup boundary detection scheme based on the Lab color space and the expected cup symmetry. This method uses sector-wise information and gives rise to fewer false positives and hence better specificity; the computed error value is lower for a normal image than for a glaucomatous image [9]. The optic disc and cup have also been extracted in order to determine the cup to disc ratio: the optic disc is extracted using a variational level set method, where the detected contour is uneven due to the influence of blood vessels, and the cup boundary is detected using intensity and threshold level set approaches. Thresholding techniques produced better results for both high- and low-risk retinal images, and ellipse fitting is used to smooth the boundary [10, 11].

The cup to disc ratio has been measured using a vertical profile on the optic disc in the blue channel of the color image to diagnose glaucoma; a sensitivity of 80% and a specificity of 85% [12] were achieved using vertical CDR measurement on seventy-nine images. An algorithm to detect glaucoma using mathematical morphology was developed on fundus images, and the neural network system built on it identified glaucoma automatically with a sensitivity of 100% and a specificity of 80% [13]. A framework for the detection of glaucoma based on changes in the optic nerve head using an orthogonal decomposition method was used in [14]; the changes in the optic nerve head were quantified using image correspondence measures, namely the L1 norm, L2 norm, correlation and image Euclidean distance. A cup segmentation method based on a support vector clustering algorithm [15] has been described for supporting glaucoma diagnosis in ophthalmology: 30 geometric features were computed on the extracted cup region and the technique achieved 94.5% sensitivity and 97.5% specificity when trained with an SVM classifier. 3D images are generally not available at primary care centres due to their high cost, so a solution built around such imaging equipment is not appropriate for a large-scale screening program. An automated classifier based on an adaptive neuro-fuzzy inference system (ANFIS) has been developed to differentiate between normal and glaucomatous eyes from the quantitative assessment of summary data reports of Stratus optical coherence tomography (OCT) images; with Stratus OCT parameters as input, a good discrimination was achieved between the eyes [16]. A method [17] has also been proposed to detect glaucoma using a combination of texture and higher order spectral features from fundus images; the extracted features have a low p-value and are clinically significant, and an accuracy of more than 91% is obtained with a random forest classifier combined with z-score normalization and feature selection methods.

Most of the work on glaucoma detection from fundus images concentrates only on the cup to disc ratio (CDR). The CDR was sometimes found to be inconsistent for detecting glaucoma, since patients may have severe visual loss with a small CDR, as in Figure 1, and vice versa. The cup/disc ratio staging system does not account for disc size, and focal narrowing of the neuroretinal rim present between the optic disc and optic cup is not adequately highlighted. A method has therefore been proposed to accurately detect glaucoma based on the CDR, the neuroretinal rim area (to capture rim loss) and textural features, in order to detect pathological subjects correctly.

"""-v. ""\ ^ Optic disc 

J! 

Neuro retinal rim 
Optic cup 

Figure 1. Optic nerve drawings with identical cup/disc ratios but with unequal rim width 





The organization of the paper is as follows. Section 3 describes the proposed method for optic cup detection, feature extraction from the intrapapillary and peripapillary regions, selection of optimal features by a genetic algorithm, and the classifier used. Section 4 presents the experimental results and a performance analysis of the proposed method. Finally, the paper concludes in Section 5.

II. Materials Used 

The fundus images used in this work were captured with a Topcon TRC50 EX mydriatic fundus camera with a 50° field of view at Aravind Eye Hospital, Madurai. The image size is 1900x1600 pixels in 24-bit true color. Doctors in the ophthalmic department of the hospital approved the images for research purposes.

III. Proposed Method 

An efficient segmentation of the optic disc and optic cup is essential to obtain a better localization of the neuroretinal rim area for diagnosing various stages of glaucoma. As glaucoma progresses, the optic cup becomes larger and hence the cup to disc ratio is higher. Further, blood collects along the individual nerve fibers that radiate outwards from the nerve [17]. Such physiological changes are manifested in the fundus images, and the experiments show that the cup to disc ratio and texture features are able to quantify these differences in eye physiology.



Figure 2. Flowchart of the proposed method: input image → pre-processing → OD segmentation and OC segmentation → intrapapillary information and texture analysis → feature selection → classifier → normal / suspect / glaucoma.



3.1 Pre-processing 



The flowchart of the proposed work is shown in Figure 2. RGB color retinal images are pre-processed using an anisotropic diffusion filter in order to remove noise. The advantage of anisotropic diffusion [18] is that no prior knowledge of the noise pattern or power spectrum is needed, and it also




provides better contrast while removing the noise. The filter iteratively applies the diffusion equation in combination with information about the edges so as to preserve them. The equation for anisotropic diffusion is defined as

∂I/∂t = div(c(x, y, t)∇I) = c(x, y, t)ΔI + ∇c · ∇I        (1)

where div is the divergence operator, ∇ is the gradient operator, Δ is the Laplacian and c(x, y, t) is the conduction coefficient function. Anisotropic diffusion filtering introduces a partial edge detection step into the filtering process so as to encourage intra-region smoothing and preserve the inter-region edges. Anisotropic diffusion is a scale-space, adaptive technique which iteratively smoothes the image as the time t increases. The time t is considered as the scale level, with the original image at level 0; as the scale increases, the images become more blurred and contain more general information.
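A minimal sketch of this filtering step, in the form of a basic Perona-Malik iteration, is given below; the conduction function, κ and the iteration count are illustrative choices and not parameters reported in the paper.

```python
# Minimal sketch of Perona-Malik anisotropic diffusion (equation (1)).
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.15):
    img = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences (north, south, east, west)
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # conduction coefficients c(x, y, t): small across strong edges
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        img += gamma * (cn * dn + cs * ds + ce * de + cw * dw)   # discrete divergence update
    return img

# Example on a random patch; a real run would use a channel of the fundus image.
smoothed = anisotropic_diffusion(np.random.rand(64, 64) * 255)
print(smoothed.shape)
```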

3.2 Detection of optic cup 

The optic disc is detected using region-of-interest based segmentation, and the bounding rectangle enclosing the region of interest is set to 1.5 times the disc width parameter. In this paper a new approach for the segmentation of the optic cup is proposed. The proposed method, shown in Figure 3, is aimed at detecting the optic cup exactly in order to calculate the neuroretinal rim area present between the disc and the cup. Unlike most previous methods in the literature, the proposed method performs initial optic cup region detection followed by the erasure of blood vessels to obtain higher accuracy.




Figure 3. Systematic Representation of the color model 

The optic cup and disc areas usually differ in color, a difference known as pallor. This method makes use of the difference in pallor to delineate the cup-disc boundary. Observations on the retinal images show that the actual cup pallor differs between patients, and even between images of the same retina, due to changes in the lighting conditions, so prior knowledge of the color intensity of the optic cup cannot be fixed.



Figure 4. Flow diagram of the proposed method: input image → optic disc masking → K-means clustering → centroid color mapping → cup color identification → cup boundary extraction → morphological operations → ellipse fitting.




The optic cup is detected using the technique outlined in Figure 4. In order to separate the optic cup region from the surrounding region, a color space analysis and a segmentation algorithm based on histogram analysis and K-means clustering, followed by morphological operations, have been developed. Since color space transformation plays a significant role in image processing, this step incorporates color information into the segmentation process: the original RGB image is transformed to different color spaces, and it has been found that the L*a*b* space consists of a luminosity layer 'L*', a chromaticity layer 'a*' indicating where the color falls along the red-green axis, and a chromaticity layer 'b*' indicating where the color falls along the blue-yellow axis. All the color information is in the 'a*' and 'b*' layers. The optic cup is obtained more clearly in this color space than in the other color spaces, as shown in Figure 5. These layers serve as feature vectors for K-means clustering, and color difference is measured using the Euclidean distance metric.






Figure 5. Color space conversion 
In the proposed color model for the detection of the optic cup, the optic disc consists of four regions, namely the optic cup, the interior optic disc, the exterior optic disc and the blood vessels, so the number of clusters is selected as four (K = 4) manually using domain knowledge. Since the CIE L*a*b* feature space is three dimensional, each bin in the color histogram has N^d − 1 neighbors, where N is the total number of bins and d is the number of dimensions of the feature space. N was experimented with for various values such as 5, 10 and 15; the value of N is chosen as 10 by trial and error, and d is 3. The 3D colour histogram is then computed; the technique uses the histogram information of the three 1D color components to find the number of valid classes. The disc is masked with a radius equal to or greater than the size of the optic disc, and the masked image is fed to the clustering process in order to group the pixel values into regions. The number of clusters for K-means clustering is determined automatically using the hill-climbing technique [19]: peaks are identified by comparing the pixels with the neighboring bins, the number of peaks obtained indicates the value of K, and the values of these bins form the initial seeds. The initial seeds for the algorithm are thus selected from the local maxima of the 3D color histogram of the CIE L*a*b* color space, and these seeds are then passed to K-means clustering. K-means is an unsupervised clustering algorithm [20] that classifies the input data points into multiple classes based on their inherent distance from each other. The algorithm assumes that the data features form a vector space and tries to find natural clusterings in them. The K-means clustering process is explained in the steps below.

1. The number of clusters k is taken as four. A lower value of k leads to an increase in the cup size, while a higher value results in the predominance of blood vessels; an incorrect value of k gives a sub-optimal result.

2. Initialize the cluster centres μ1, ..., μk: choose k data points, set the cluster centres to these points and make them the initial centroids. The data points are grouped into k clusters such that similar items fall in the same cluster.

3. For each data point, the nearest centroid is found and the data point is assigned to the cluster associated with that centroid. The centroid is the mean of the points in the cluster.

4. Update the centroid of each cluster based on the items in that cluster; the new centroid is the mean of all points in the cluster.



231 | 



Vol. 2, Issue 1, pp. 227-240 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

For a given cluster assignment C of the data points, the cluster mean m_k is computed as in equation (2):

m_k = (1/N_k) Σ_{i: C(i)=k} x_i        (2)



For the current set of cluster means, each observation is assigned as in equation (3):

C(i) = arg min_{1≤k≤K} ||x_i − m_k||²,   i = 1, ..., N        (3)



Each data point is mapped to the closest centroid using the distance between them.

5. The above two steps are iterated until convergence, i.e. until there are no new re-assignments.

K-means minimizes the within-cluster point scatter shown in equation (4):

W(C) = (1/2) Σ_{k=1}^{K} Σ_{C(i)=k} Σ_{C(j)=k} ||x_i − x_j||² = Σ_{k=1}^{K} N_k Σ_{C(i)=k} ||x_i − m_k||²        (4)

where x_1, ..., x_n are the data points (vectors or observations), m_k is the mean vector of the k-th cluster, N_k is the number of observations in the k-th cluster, and C(i) denotes the cluster number of the i-th observation.

K-means clustering groups the pixels within the optic disc into the above-mentioned four regions. Each cluster has a centroid, and each region is filled with the corresponding region's centroid color; from these four regions, the region corresponding to the optic cup can be easily identified by its centroid color. Each pixel within a cluster is then replaced by the corresponding cluster centre color, and the brightest centroid color corresponds to the optic cup, as shown in Figure 6. Thus an initial boundary of the optic cup is obtained. Pixels that are not classified are assigned to the closest cluster based on a weighted similarity measure between the cluster centre and the pixel in the image. The L*a*b* color space with K-means clustering is more suitable for detecting the optic cup in normal and pathological subjects, and exhibits a higher Rand Index and lower Variation of Information (VoI), Global Consistency Measure (GCM) and Boundary Displacement Error (BDE) when compared with the other color spaces.
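The fragment below sketches the core of this step, assuming the masked optic-disc region is available as an RGB array: the pixels are clustered in L*a*b* space with K = 4 and the cluster with the brightest centroid (highest L*, i.e. the pallor) is kept as the initial cup estimate. Standard scikit-image/scikit-learn calls are used for convenience; the paper's hill-climbing seeding of the 3D colour histogram is not reproduced here.

```python
# Minimal sketch of the initial cup-region detection via K-means in CIE L*a*b*.
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans

def initial_cup_mask(disc_rgb, k=4):
    lab = rgb2lab(disc_rgb)                             # L*, a*, b* feature space
    features = lab.reshape(-1, 3)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    labels = km.labels_.reshape(disc_rgb.shape[:2])
    brightest = np.argmax(km.cluster_centers_[:, 0])    # cup cluster = highest L* centroid
    return labels == brightest                          # boolean initial cup mask

# Example with a synthetic patch; a real run would pass the masked optic-disc image.
mask = initial_cup_mask(np.random.rand(64, 64, 3))
print(mask.sum(), "pixels in the initial cup estimate")
```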





Figure 6. (a) Input image; (b) clustered output for N = 5; (c) N = 10; (d) N = 15.

The boundary of the cup can be obtained by first eroding the image A by the structuring element B and then taking the set difference between A and the eroded image:

β(A) = A − (A ⊖ B)        (5)



The impact of the blood vessel region within the cup is removed by morphological operations. This is performed by a dilation followed by an erosion operation in the region of interest; a circular window with the maximal vessel width as radius is used for dilation and erosion, and a 3x3 structuring element is used in this work. Mathematically the operations are expressed by equations (6) and (7). Dilation of the image A by B is defined as




A ⊕ B = {p ∈ Z² : p = a + b, a ∈ A and b ∈ B}        (6)

and erosion is defined by

A ⊖ B = {p ∈ Z² : p + b ∈ A for every b ∈ B}        (7)

This step helps to reject outliers inside or outside the cup region and gives an approximate cup region. An ellipse fitting algorithm based on least-squares fitting is then used to smooth the cup boundary. The modified optic cup boundary obtained is fitted with an ellipse, as in Figure 7. Sample results for diverse images are shown in Figure 8 for optic cup boundary extraction and the neuroretinal rim area.
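A compact sketch of this post-processing, under the assumption that the initial cup estimate is a binary mask, is shown below: a 3×3 closing (dilation then erosion) suppresses vessel-induced gaps, the boundary is extracted as A − (A ⊖ B) following equation (5), and a least-squares ellipse is fitted to the boundary points. The structuring-element size and the toy mask are illustrative only.

```python
# Minimal sketch of equations (5)-(7) plus least-squares ellipse fitting.
import numpy as np
from skimage.morphology import binary_dilation, binary_erosion
from skimage.measure import EllipseModel

def smooth_cup_boundary(cup_mask):
    selem = np.ones((3, 3), dtype=bool)                                 # 3x3 structuring element B
    closed = binary_erosion(binary_dilation(cup_mask, selem), selem)    # dilation followed by erosion
    boundary = closed & ~binary_erosion(closed, selem)                  # beta(A) = A - (A erosion B)
    ys, xs = np.nonzero(boundary)
    ellipse = EllipseModel()
    ellipse.estimate(np.column_stack([xs, ys]))                         # least-squares ellipse fit
    xc, yc, a, b, theta = ellipse.params
    return (xc, yc), (a, b), theta

# Example on a synthetic elliptical mask standing in for the detected cup region:
yy, xx = np.mgrid[0:100, 0:100]
toy_mask = ((xx - 50) / 30.0) ** 2 + ((yy - 50) / 20.0) ** 2 <= 1.0
print(smooth_cup_boundary(toy_mask))
```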

3.3 Feature Extraction 

Transformation of an image into a set of features is known as feature extraction. The features used in this work are based on intrapapillary and peripapillary information from the retinal images. Intrapapillary parameters refer to the features extracted from the optic disc and optic cup: the cup to disc ratio (CDR) and the neuroretinal rim area to disc diameter are extracted from the segmented optic disc and optic cup. The CDR is calculated as the ratio between the diameters of the optic cup and disc in the vertical direction; CDR > 0.3 indicates glaucoma and CDR < 0.3 is considered normal. In glaucoma, structural changes in the optic nerve head precede the functional changes, and the conventional cup-disc ratio does not measure the actual rim loss, which has more diagnostic value. Neuroretinal rim tissue indirectly indicates the presence and progression of glaucomatous damage and is related to the disc size. The neuroretinal rim area is calculated by subtracting the area of the optic cup from the area of the optic disc. Normally the rim is widest in the inferior temporal sector, followed by the superior temporal sector, the nasal and the temporal horizontal sector. The rim to disc ratio, used to estimate the width of the neuroretinal rim, is therefore considered an important feature in the diagnosis of glaucoma.

Texture analysis is performed in order to better characterize the abnormal images. Image diagnosis is based on the texture of the segmented portion of the image compared with standard retinal texture values. Texture extraction is the process of quantifying the texture patterns within a specified neighbourhood of size M by N pixels around a pixel of interest, and the features are chosen so as to allow discrimination between healthy and pathological subjects. The textural properties are derived using first-order statistics and second-order statistics computed from spatial gray-level co-occurrence matrices (GLCM). The GLCM is a second-order measure, as it includes the relationship between neighbouring pixels; for an image of size m x n, a second-order statistical texture analysis is performed by constructing the GLCM [21]. The data are normalized and contain feature vectors computed around each pixel; the normalized feature vector contains altogether 12 features computed over a window of 'n x n' pixels. Texture analysis is used to estimate the peripapillary information. The normal, suspect and abnormal classes are represented using relevant and significant features to classify the input images. In order to avoid the problem of dimensionality, it is desirable to have a smaller feature set. Twelve features are used in this work, of which two are extracted from the segmented optic disc and cup region and ten from the texture analysis. The features used are cup to disc ratio, rim to disc ratio, mean, standard deviation, skewness, kurtosis, contrast, correlation, inverse difference moment, variance, energy and entropy.
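The sketch below illustrates how such first-order and GLCM-based texture measures can be computed for a gray-level window; the quantisation, offsets and window size are illustrative choices, and in practice the shape features (CDR and rim-to-disc ratio) from the segmentation stage would be appended to this vector.

```python
# Minimal sketch of the texture-feature step: first-order statistics plus GLCM measures.
import numpy as np
from scipy.stats import skew, kurtosis
from skimage.feature import graycomatrix, graycoprops

def texture_features(patch_u8):
    glcm = graycomatrix(patch_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    return {
        "mean": patch_u8.mean(),
        "std": patch_u8.std(),
        "skewness": skew(patch_u8.ravel()),
        "kurtosis": kurtosis(patch_u8.ravel()),
        "contrast": graycoprops(glcm, "contrast")[0, 0],
        "correlation": graycoprops(glcm, "correlation")[0, 0],
        "idm": graycoprops(glcm, "homogeneity")[0, 0],   # inverse difference moment
        "energy": graycoprops(glcm, "energy")[0, 0],
        "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),
    }

print(texture_features(np.random.randint(0, 256, (64, 64), dtype=np.uint8)))
```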

3.4 Feature selection 

Feature selection refers to the problem of dimensionality reduction of data which initially consists of a large number of features; the objective is to choose an optimal subset of features from the image. The sequential forward floating selection (SFFS) algorithm [22] and a genetic algorithm were tried individually to find the best feature set for classification. The SFFS algorithm employs a "plus 1, take away r" strategy: features are added sequentially to an initially empty feature set but, at every iteration, features are also removed if that improves performance. In this way "nested" groups of good features can be found.

A genetic algorithm was used to select the most significant features [23] characterizing the shape of the disc and cup region. Since genetic algorithms are relatively insensitive to noise, they are an attractive basis for a more robust feature selection strategy to improve the performance

of the classification system. In this work, each candidate feature subset is represented by a chromosome
(string of bits) with 12 genes (bits), corresponding to the number of features. An initial random
population of chromosomes is formed to initiate the genetic optimization. A suitable fitness function
is estimated for each individual. The fittest individuals are selected, and the crossover and mutation
operations are performed to generate the new population. This process continues for a given
number of generations, and finally the fittest chromosome is determined based on the fitness function.
Features with a bit value of "1" are accepted and features with a bit value of "0" are rejected.
The feature set selected by the Genetic Algorithm contains six significant features, namely cup to disc
ratio, rim to disc ratio, skewness, contrast, correlation and inverse difference moment. In addition to
the six features selected by the Genetic Algorithm, kurtosis is selected by the SFFS
algorithm. Each of the features is normalized between 0 and 1, and the weighted features are used for
training and testing of instances.
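A minimal sketch of this bit-string genetic search is shown below. The fitness function used here (a simple class-separation score) is a placeholder assumption, since the paper evaluates chromosomes by classification performance.

    import numpy as np

    rng = np.random.default_rng(0)
    N_FEATURES, POP, GENERATIONS = 12, 20, 50

    def fitness(mask, X, y):
        # Placeholder fitness: mean standardized class separation over the selected features.
        if mask.sum() == 0:
            return 0.0
        Xs = X[:, mask.astype(bool)]
        mu0, mu1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
        return float(np.mean(np.abs(mu0 - mu1) / (Xs.std(0) + 1e-9)))

    def ga_select(X, y):
        pop = rng.integers(0, 2, size=(POP, N_FEATURES))
        for _ in range(GENERATIONS):
            scores = np.array([fitness(ind, X, y) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][:POP // 2]]   # selection: keep the fittest half
            children = []
            while len(children) < POP - len(parents):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, N_FEATURES)                # one-point crossover
                child = np.concatenate([a[:cut], b[cut:]])
                flip = rng.random(N_FEATURES) < 0.05             # mutation
                child[flip] ^= 1
                children.append(child)
            pop = np.vstack([parents, children])
        best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
        return best   # bit value 1 = feature accepted, 0 = rejected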

3.5 Adaptive Neuro-Fuzzy Inference System as Classifier (ANFIS) 

An Adaptive Neuro-Fuzzy Inference System combines the learning capabilities of neural networks with
the approximate reasoning of fuzzy inference algorithms. ANFIS uses a hybrid learning algorithm to
identify the membership function parameters of Sugeno-type fuzzy inference systems. The aim is to
develop ANFIS-based learning models to classify normal and abnormal fundus images in order to
detect glaucoma. An adaptive neural network is a network structure consisting of five layers and a
number of nodes connected through directional links. The first layer executes a fuzzification process, the
second layer executes the fuzzy AND of the antecedent part of the fuzzy rules, the third layer
normalizes the fuzzy membership functions, the fourth layer executes the consequent part of the fuzzy
rules, and finally the last layer computes the output of the fuzzy system by summing up the outputs of
the fourth layer [24]. Each node is characterized by a node function with fixed or adjustable
parameters. The learning or training phase of a neural network is a process of determining parameter values
that sufficiently fit the training data. Based on this observation, a hybrid learning rule is employed here,
which combines gradient descent and the least-squares method to find a feasible set of antecedent
and consequent parameters.
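The five-layer structure described above can be sketched as a single forward pass of a first-order Sugeno fuzzy system, as below. The premise (membership) and consequent parameters are random placeholders here, whereas in ANFIS they would be tuned by the hybrid gradient-descent/least-squares learning rule.

    import numpy as np

    def gaussmf(x, c, s):
        # Gaussian membership function (layer 1: fuzzification).
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    def anfis_forward(x, centers, sigmas, conseq):
        # x: input vector (n_inputs,); centers, sigmas: (n_rules, n_inputs);
        # conseq: consequent parameters [p_1..p_n, r] per rule, (n_rules, n_inputs + 1).
        mu = gaussmf(x, centers, sigmas)          # layer 1: membership degrees
        w = mu.prod(axis=1)                       # layer 2: rule firing strengths (product AND)
        wn = w / (w.sum() + 1e-12)                # layer 3: normalization
        f = conseq[:, :-1] @ x + conseq[:, -1]    # layer 4: first-order Sugeno consequents
        return float((wn * f).sum())              # layer 5: weighted sum of rule outputs

    # Example with 6 inputs (the selected features) and 5 rules, as reported later in the paper.
    rng = np.random.default_rng(1)
    x = rng.random(6)
    print(anfis_forward(x, rng.random((5, 6)), 0.5 + rng.random((5, 6)), rng.random((5, 7))))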

In order to obtain a set of rules and avoid the problems inherent in grid-partitioning-based clustering
techniques, subtractive clustering is applied. This technique is employed since it allows a scatter
partitioning of the input-output space. Subtractive clustering is a one-pass algorithm for estimating the
number of clusters and the cluster centres from the training data. The parameters used for clustering are
shown in Table 1.

Table 1. Parameters used for clustering

Range of influence    0.5
Squash factor         1.25
Accept ratio          0.5
Reject ratio          0.15



IV. Experimental Results 



Figure 7. Steps in the detection of optic cup: a. Input image, b. Mask image, c. Color model, d. Initial cup boundary, e. Image smoothing, f. Ellipse fitting.






Figure 8. Few sample results: a) Input images, b) Cup boundary for the corresponding inputs, c) Neuroretinal rim area present between the disc and cup boundaries (indicated by arrows).

V. Performance Analysis 

A) Optic cup detection 

i) To assess the area overlap between the computed region and the ground truth of the optic cup, pixel-wise precision and recall values are computed:

Precision = TP / (TP + FP)    (7)

Recall = TP / (TP + FN)    (8)

where TP is the number of true positive, FP the number of false positive and FN the number of false negative pixels.

ii) Another method of evaluating the performance is the F score, given by

F = 2 * (Precision * Recall) / (Precision + Recall)    (9)

The value of the F score lies between 0 and 1 and is high for an accurate method. The average F scores for thresholding, component analysis and the proposed approach are compared in Table 2.
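A direct pixel-wise evaluation of Eqs. (7)-(9) for a computed cup mask against the ground truth can be written as the following short sketch.

    import numpy as np

    def segmentation_scores(pred_mask, gt_mask):
        # Pixel-wise precision, recall and F score (Eqs. 7-9) for binary masks.
        pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
        tp = np.logical_and(pred, gt).sum()
        fp = np.logical_and(pred, ~gt).sum()
        fn = np.logical_and(~pred, gt).sum()
        precision = tp / (tp + fp + 1e-12)
        recall = tp / (tp + fn + 1e-12)
        f_score = 2 * precision * recall / (precision + recall + 1e-12)
        return precision, recall, f_score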

Table 2. F score for cup segmentation 



Images    Threshold    Component analysis    Proposed approach
1         0.67         0.72                  0.89
2         0.69         0.70                  0.86
3         0.66         0.67                  0.81
4         0.63         0.73                  0.86
5         0.54         0.60                  0.78
6         0.71         0.79                  0.90
7         0.73         0.78                  0.90
8         0.67         0.71                  0.86
9         0.68         0.72                  0.85
10        0.64         0.76                  0.87



B) Performance analysis of the proposed technique 

In the proposed system, six features are selected and hence the number of input variables is six. A
sample of fuzzy if-then rules is framed for fundus image classification, and these fuzzy if-then rules
form the input for the ANFIS architecture. ANFIS is initialized with 100 iterations and a step size of
0.001 for parameter adaptation. The dataset used for fundus image classification is shown in Table 3.
A 10-fold cross validation of the data is used in the proposed work. From the available dataset, the data
are split into set 1 and a testing set. Next, set 1 is further divided into training and validation sets. The
classifier is trained using the training set and tested on the validation set, and the process is repeated
for various combinations of training and validation sets. The classifier which gives the best
performance is then selected and used to obtain the performance on the testing set.
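The split described above can be sketched as follows; train_fn and score_fn are placeholders for the ANFIS (or back propagation) training and evaluation routines, and the 400-of-550 test split follows Table 3 below.

    import numpy as np

    def ten_fold_selection(X, y, train_fn, score_fn, rng=np.random.default_rng(0)):
        # Hold out a test set, then pick the best model by 10-fold cross validation
        # on the remaining data (set 1), as described above.
        idx = rng.permutation(len(y))
        n_test = len(y) * 400 // 550                  # proportion of images kept for testing
        test, set1 = idx[:n_test], idx[n_test:]
        folds = np.array_split(set1, 10)
        best_model, best_score = None, -np.inf
        for k in range(10):
            val = folds[k]
            tr = np.concatenate([folds[j] for j in range(10) if j != k])
            model = train_fn(X[tr], y[tr])
            score = score_fn(model, X[val], y[val])
            if score > best_score:
                best_model, best_score = model, score
        return best_model, score_fn(best_model, X[test], y[test])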

Table 3. Dataset for fundus image classification

Images      Training Data    Test Data    No. of Images/Class
Normal      50               130          180
Suspect     50               120          170
Abnormal    50               150          200
Total       150              400          550



In this work, 150 images are used for training and 400 images for testing: 50 images from each class
for training and 400 images (130 normal, 120 suspect and 150 abnormal) for testing. The schematic of
the ANFIS structure obtained for the proposed system is shown in Figure 9.

Figure 9. ANFIS structure for the proposed technique

The number of nodes used in the architecture is 79, with 35 linear parameters and 60 nonlinear parameters
generated from 5 fuzzy rules. The root mean square error is 0.022 when testing the data against the FIS
structure. Classification accuracy is the ratio of the number of correctly classified images to the
total number of images. Table 4 shows the classification results of the classifiers.

Table 4. Classification accuracy results of the classifiers

                                 ANFIS                  Back propagation
Images      No. of test images   CCI    MI    CA(%)     CCI    MI    CA(%)
Normal      130                  127    3     97.6      125    5     96.1
Suspect     120                  119    1     99.1      117    3     97.7
Abnormal    150                  149    1     99.3      147    3     98




CCI = Correctly Classified Images, MI = Misclassified Images, CA = Classification Accuracy 

The performance of each classifier is measured in terms of sensitivity, specificity and accuracy.
Sensitivity measures the probability of true positive results, i.e. that a person with glaucoma is detected
as such. Specificity measures the true negatives, i.e. that a person not affected by glaucoma is classified
as normal. Accuracy measures the proportion of results that are correctly classified.
The same dataset is used for the neural network based back propagation classifier. MATLAB (version
7.0) is used for the implementation of the work. A comparative analysis between the classifiers, based
on correctly classified images, is shown in Table 5, and the comparative performance of the classifiers
using the optimal feature subset selection is shown in Figure 10.
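For clarity, these three measures follow directly from the image-level confusion counts, as in the short sketch below (treating glaucomatous images as the positive class).

    def diagnostic_measures(tp, tn, fp, fn):
        # Sensitivity, specificity and accuracy from confusion counts.
        sensitivity = tp / (tp + fn)      # glaucomatous images correctly detected
        specificity = tn / (tn + fp)      # normal images correctly identified
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        return sensitivity, specificity, accuracy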

Table 5. Performance measure of the classifiers

Classifier          Specificity(%)    Sensitivity(%)    Accuracy(%)
ANFIS               97.6              99.2              98.7
Back propagation    96.1              97.7              97.25



When the classification is evaluated by means of the area under the receiver operating characteristic curve (ROC),
classification with optimal feature selection achieves Az = 0.99 with a standard error of 0.0437 and a
computation time of 0.7427 s for ANFIS, and Az = 0.93 with a standard error of 0.0123 for back
propagation. Classification without optimal feature selection gives Az = 0.91 with a standard error of 0.01405
and a computation time of 4.16 s. The convergence time and RMSE of ANFIS are much smaller than those of the
back propagation neural network, so ANFIS gives better classification performance in terms of
convergence time and root mean square error.
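The area under the ROC curve (Az) reported above can be estimated from classifier scores with the rank-sum formulation, as in the following sketch; it assumes untied scores and binary labels, and is not the authors' evaluation code.

    import numpy as np

    def auc_roc(scores, labels):
        # Area under the ROC curve via the rank-sum (Mann-Whitney) formulation.
        scores, labels = np.asarray(scores, float), np.asarray(labels, int)
        ranks = np.empty(len(scores))
        ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)   # ranks 1..N (ties ignored)
        n_pos = labels.sum()
        n_neg = len(labels) - n_pos
        return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)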




Figure 10. Comparative performance of the classifier 

The impact of individual features on the detection of glaucoma is given in Table 6. When the textural
features are combined with the shape characteristics, namely rim to disc ratio and cup to disc ratio,
there is a good improvement in accuracy. Graphs showing the performance evaluation
are shown in Figure 11.

Table 6. Performance analysis of the features

Features                                 Sensitivity(%)    Specificity(%)    Accuracy(%)
Cup to disc ratio                        93.5              95.3              94
Rim to disc area                         97                96.1              96.8
First order texture                      92.5              89.2              91.5
Second order texture                     95.1              90.7              93.7
CDR, RDR, selective textural features    99.2              97.6              98.7



Figure 11. Performance measure of individual features. 



VI. Conclusion 

The K-means clustering used in the proposed work focuses on the pallor information at each pixel, thereby
enabling rapid clustering, and achieves very good accuracy in detecting the optic cup. It is simpler
and easier to implement an unsupervised method than a supervised approach. The hill climbing
technique together with k-means clustering provides a promising step towards accurate detection of the optic cup
boundary. Vertical CDR or superior or inferior rim area parameters may be more specific in identifying
neuroretinal rim loss along the optic disc than an overall cup-to-disc diameter ratio. Textural
features are considered in this work in order to effectively detect glaucoma in pathological subjects. A
hybrid method involving textural features along with CDR and neuroretinal rim area calculation
provides an efficient means to detect glaucoma. ANFIS achieves good classification accuracy with a
smaller convergence time compared to neural network classifiers. The performance of the proposed
approach is comparable to that of human medical experts in detecting glaucoma. The proposed system combines
feature extraction techniques with segmentation techniques for the diagnosis of the image as normal
or abnormal. Considering the neuroretinal rim width for a given disc diameter together with
the textural features can be used as an additional feature for distinguishing between normal and
glaucoma or glaucoma suspects. Progressive loss of neuroretinal rim tissue gives an accurate result for
detecting early-stage glaucoma with high sensitivity and specificity. The proposed system can be
integrated with the existing ophthalmologic tests and clinical assessments, in addition to other risk
factors, according to a determined clinical procedure and can be used in local health camps for
effective screening.

Acknowledgement 

The authors are grateful to Dr.S.R.KrishnaDas, Chief Medical Officer and Dr. R. Kim, Chief- Vitreo- 
Retinal Service, Aravind Eye Hospital, Madurai for providing the fundus Photographs and support 
for our work. 

References 

[1] W.H.O., "World Health Organization Programme for the Prevention of Blindness and Deafness - Global Initiative for the Elimination of Avoidable Blindness," Document no. WHO/PBL/97.61 Rev.1, Geneva, 1997.
[2] Glaucoma Research Foundation (2009). [Online]. Available: http://www.glaucoma.org/learn/glaucoma_facts.php.
[3] C. Sinthanayothin, J. F. Boyce, C. T. Williamson, "Automated detection of the optic disc, fovea and retinal blood vessels from digital colour fundus images," British Journal of Ophthalmology, 38, pp. 902-910, 1999.
[4] S. Kavitha, S. Karthikeyan, K. Duraiswamy, "Neuroretinal rim quantification in fundus images to detect glaucoma," IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 6, pp. 134-139, June 2010.
[5] J. Liu, D. Wong, J. Lim, H. Li, N. Tan, and T. Wong, "ARGALI - An automatic cup-to-disc ratio measurement system for glaucoma detection and analysis framework," Proc. SPIE, Medical Imaging, pages 72603K-8, 2009.
[6] D. Wong, J. Liu, J. H. Lim, H. Li, X. Jia, F. Yin, and T. Wong, "Automated detection of kinks from blood vessels for optic cup segmentation in retinal images," Proc. SPIE, Medical Imaging, page 72601J, 2009.
[7] J. Xu, O. Chutatape, E. Sung, C. Zheng, and P. Chew, "Optic disk feature extraction via modified deformable model technique for glaucoma analysis," Pattern Recognition, 40(7): 2063-2076, 2007.
[8] Gopal Datt Joshi, Jayanthi Sivaswamy, S. R. Krishnadas, "Optic disc and cup segmentation from monocular color retinal images for glaucoma assessment," IEEE Transactions on Medical Imaging, 2010.
[9] Gopal Datt Joshi, Jayanthi Sivaswamy, Kundan Karan, S. R. Krishnadas, "Optic disk and cup boundary detection using regional information," Proceedings of the 2010 IEEE International Conference on Biomedical Imaging: From Nano to Macro, 2010.
[10] J. Liu, J. H. Lim, and H. Li, "ARGALI: An automatic cup-to-disc ratio measurement system for glaucoma analysis using level set image processing," in SPIE Medical Imaging, San Diego, USA, Feb 2008.
[11] J. Liu, D. W. K. Wong, J. H. Lim, X. Jia, F. Yin, H. Li, W. Xiong, T. Y. Wong, "Optic cup and disc extraction from retinal fundus images for determination of cup-to-disc ratio," in Proceedings of 2008 IEEE Engineering, pp. 1828-1832.
[12] Yuji Hatanaka, Atsushi Noudo, Chisako Muramatsu, Akira Sawada, Takeshi Hara, Tetsuya Yamamoto, Hiroshi Fujita, "Vertical cup-to-disc ratio measurement for diagnosis of glaucoma on fundus images," Proc. of SPIE, Vol. 7624, 7624301, 2010.
[13] J. Nayak, U. R. Acharya, P. S. Bhat, A. Shetty and T. C. Lim, "Automated diagnosis of glaucoma using digital fundus images," Journal of Medical Systems, Vol. 33, No. 5, pp. 337-346, August 2009.
[14] K. Stapor, "Support vector clustering algorithm for identification of glaucoma in ophthalmology," Bulletin of the Polish Academy of Sciences: Technical Sciences, Vol. 54, No. 1, 2006.
[15] M. Balasubramanian, S. Zabic, C. Bowd, H. W. Thompson, P. Wolenski, S. S. Iyengar, "A framework for detecting glaucomatous progression in the optic nerve head of an eye using proper orthogonal decomposition," IEEE Transactions on Information Technology in Biomedicine, Vol. 13, No. 5, pp. 781-793, September 2009.
[16] Mei-Ling Huang, Hsin-Yi Chen and Jian-Jun Huang, "Glaucoma detection using adaptive neuro-fuzzy inference system," Expert Systems with Applications, 32, 458-468, 2007.
[17] U. Rajendra Acharya, Sumeet Dua, Xian Du, Vinitha Sree S. and Chua Kuang Chua, "Automated diagnosis of glaucoma using texture and higher order spectra features," IEEE Transactions on Information Technology in Biomedicine, Vol. 15, No. 3, May 2011.
[18] P. Perona and J. Malik, "Scale space and edge detection using anisotropic diffusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 12, No. 7, pp. 629-639, July 1990.
[19] T. Ohashi, Z. Aghbari, and A. Makinouchi, "Hill-climbing algorithm for efficient color-based image segmentation," in IASTED International Conference on Signal Processing, Pattern Recognition, and Applications (SPPRA 2003), June 2003.
[20] Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman, and Angela Y. Wu, "An efficient k-means clustering algorithm: analysis and implementation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 24, No. 7, July 2002.
[21] J. H. Tan, E. Y. K. Ng and U. R. Acharya, "Study of normal ocular thermogram using textural parameters," Infrared Physics and Technology, Vol. 53, No. 2, pp. 120-126, March 2010.
[22] A. Jain and D. Zongker, "Feature selection: evaluation, application, and small sample performance," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19, No. 2, pp. 153-158, 1997.
[23] David E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Pearson Education, pp. 59-86, 1989.
[24] J. S. R. Jang, C. T. Sun and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, London: Prentice-Hall International, 1997.
Authors 

S. Kavitha received the B.E. degree in Electronics and Communication Engineering and the M.E.
degree in Applied Electronics from Government College of Technology, Coimbatore. She worked
at Amrita Institute of Technology and Science, Coimbatore for five years and has been working as an
Assistant Professor at Nandha Engineering College, Erode since 2004. Her research interests
include Digital Image Processing, Neural Networks and Genetic Algorithms. She is a life member
of ISTE.







K. Duraiswamy received his B.E. degree in Electrical and Electronics Engineering from P.S.G. 

College of Technology, Coimbatore in 1965 and M.Sc.(Engg) from P.S.G. College of 

Technology, Coimbatore in 1968 and Ph.D. from Anna University in 1986. From 1965 to 1966 

he was in Electricity Board. From 1968 to 1970 he was working in ACCET, Karaikudi. From 

1970 to 1983, he was working in Government College of Engineering Salem. From 1983 to 

1995, he was with Government College of Technology, Coimbatore as Professor. From 1995 

to 2005 he was working as Principal at K.S. Rangasamy College of Technology, Tiruchengode and presently he 

is serving as Dean of K. S. Rangasamy College of Technology, Tiruchengode, India. He is interested in Digital 

Image Processing, Computer Architecture and Compiler Design. He received 7 years Long Service Gold Medal 

for NCC. He is a life member in ISTE, Senior member in IEEE and a member of CSI. 









Step-Height Measurement of Surface
Functionalized Micromachined Microcantilever
using Scanning White Light Interferometry

Anil Sudhakar Kurhekar and P. R. Apte 
Deptt. of Electrical Engg., Indian Institute of Technology Bombay, Powai, Mumbai, India 



Abstract 

Micro-cantilever arrays with different dimensions are fabricated by a micromachining technique onto silicon <100>
substrate. These sputtered gold-coated micro-cantilevers were later surface functionalized. Scanning
Electron Microscopy, Atomic Force Microscopy and optical SWLI using a laser probe are employed to
characterize the morphology and perform image measurement of the micro-cantilever arrays, respectively. Compared
with conventional AFM and SPM measurement techniques, the proposed method has demonstrated sufficient
flexibility and reliability. The experimental results have been analyzed and are presented in this paper for MEMS
micro-cantilevers. A scanning white light interferometry based two-point high-resolution optical method is
presented for characterizing micro-cantilevers and other MEMS micro-structures. The repeatable error and the
repeatable precision produced by the proposed image measurement method are confirmed at the nanometre level. In this
piece of work, we investigate the micro-structure fabrication and the image measurement of Length, Width and
Step-Height of micro-cantilever arrays fabricated using a bulk micromachining technique onto Silicon <100>
substrate.

KEYWORDS: Scanning Electron Microscopy; Atomic Force Microscopy; Micro-cantilever; Optics; Image
Measurement; Silicon (100); Scanning White Light Interferometry.

I. Introduction 

Step height measurement is required in many fields including semiconductors, micro-circuitry and 
printing. Small steps are often measured using a profilometer, calculating the least-squares straight 
line through the data, and then identifying the areas above and below this as being step and substrate. 
The step height is calculated using a least-squares fit to the equation Z = aX + b + hV, where a, b and h
are unknowns and V takes the value of +1 in the higher regions and -1 in the lower regions. The
unknowns a and b represent the slope and intercept of the line. The step height is calculated as twice
the value of the third unknown, h. This approach works well for samples where the flatness of both the step and the
substrate is good.
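A minimal sketch of this least-squares step-height fit is given below; the profilometer trace is synthetic and the 200 nm step value is only an assumed example.

    import numpy as np

    def step_height(x, z, v):
        # Least-squares fit of z = a*x + b + h*v, where v is +1 on the step
        # and -1 on the substrate; the step height is 2*h.
        A = np.column_stack([x, np.ones_like(x), v])
        (a, b, h), *_ = np.linalg.lstsq(A, z, rcond=None)
        return 2.0 * h

    # Hypothetical trace: a 200 nm step on a slightly tilted substrate.
    x = np.linspace(0.0, 1.0, 500)                    # scan position, mm
    v = np.where(x > 0.5, 1.0, -1.0)                  # step / substrate labelling
    z = 5.0 * x + 10.0 + 100.0 * v                    # heights in nm, true step = 200 nm
    print(step_height(x, z, v))                       # ~200.0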

Accurate measurement of the dimensions of microstructures using optical methods has received much
attention because of their potential advantages over conventional AFM/SPM techniques [1]. A
common method of fabricating micro-cantilevers is to pattern the deposited continuous film using a
bulk or surface micromachining technique [1, 2]. However, these methods are demonstrated to work well only
for sub-micron micro-cantilever arrays. As the micro-cantilever size decreases to nanometres,
interesting behaviour may be expected. In particular, reduced micro-cantilever size results in a change
of domain structure [2] and will affect the characterization of the micro-cantilever. Another
method of fabricating nanometre-scale micro-cantilever arrays is laser micromachining of the deposited
material on a silicon substrate [2, 3]. Until now, the conventional image measurement technique for
planar micro-structural properties of micro-cantilevers on silicon <100> substrates has been studied
[6]. However, the applicability of optical methods to microstructure arrays is established. In this piece
of work, we investigate the micro-structure fabrication and image measurement of Length, Width and




Step-Height of micro-cantilever arrays fabricated using bulk micromachining technique onto Silicon 
<100> substrate. 

II. The Method 

Small steps are often measured using a profilometer, calculating the least-squares straight line through
the data, and then identifying the areas above and below this as being step and substrate. The step
height, as depicted in Figure 1, is calculated using a least-squares fit to the equation Z = aX + b + hV,
where a, b and h are unknowns and V takes the value of +1 in the higher regions and -1 in the lower
regions.




Figure 1. Three-dimensional profile of a 500-micrometer standard step height obtained from a scanning white-light
interferogram. The repeatability of the measurement is 10 nm.

The unknowns a and b represent the slope and intercept of the line. The step height is calculated as 
twice the value of the third unknown, h. This approach is fine for samples where the flatness of the 
step and substrate both are good. 

We use optical method for precision measurements using interferometry. The ideal way to analyze 
complex interference data electronically is to acquire a high density of data points per interference 
fringe to capture all the detail of the interference signal. There are, however, practical limits to the 
amount of data that can be stored, processed, and displayed. This is particularly true of scanning 
white-light interferometry (SWLI) for surface topography measurement. These instruments use 
broadband sources together with mechanical translation of the object or reference surface to measure 
large discontinuous surface features. Typically, a SWLI instrument acquires three to five intensity 
values per interference fringe per pixel and processes millions of data values to generate a single three-
dimensional image. The volume of data involved means that 500 micrometers of depth range can 
require several minutes just for data acquisition. A piezoelectric actuator (PZT) is used to translate the 
object in a direction parallel to the optical axis of the interferometer over an interval of several tens of 
micrometers. The resulting interference pattern for a single pixel resembles the data simulation. The 
traditional way of measuring surface topography with such a system is to calculate the fringe contrast 
as a function of scan position and then relate the point of maximum contrast to a surface height for 
each pixel in the image. There are several ways to calculate the fringe contrast for this purpose, 
including measuring the maximum and minimum intensity values, by standard phase-shift 
interferometry formulas or digital filtering. These fringe-contrast techniques have in common a high 
density of image frames over the range of PZT translation. 

The proposed method uses the fundamental physical concept of frequency domain processing of 
interferograms is that a complex interference pattern may be considered the incoherent superposition 
of several simple single-frequency interference patterns. Each of these single-frequency patterns may 
be represented by the simple formula, 

I = 1 + cos(φ)    (1)

Where, 



242 | 



Vol. 2, Issue 1, pp. 241-248 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

φ = k · Z    (2)

Here k is the angular wave number and the distance Z is the phase-velocity optical path difference in 
the interferometer. For simplicity we assume a perfectly compensated and intensity-balanced 
interferometer. From Eq. (2) it is clear that 



Z = dφ/dk    (3)



Hence one way to measure distances is to calculate the rate of change of phase with spatial frequency.
To calculate this rate of change, we need phase values φ over a range Δk of frequencies centered
around a mean frequency k0.

A simple linear fit to the phase data provides the rate of change dφ/dk and the mean phase φ0. This
information can be used to calculate the distance in either of two ways. The rate of change of
phase can be used alone to calculate the distance with Eq. (3). Alternatively, this preliminary calculation
can be used to remove the 2π ambiguity in the mean phase φ0, which may then be used in an inverted
form of Eq. (2) for high-precision measurements.
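The frequency-domain calculation reduces to a linear fit of phase against wavenumber, as in the sketch below; the wavenumber range and the 2.5 µm optical path difference are assumed example values, and the phase is taken to be already unwrapped.

    import numpy as np

    def distance_from_phase(k, phi):
        # Linear fit phi(k) = Z*k + phi0; the slope dphi/dk is the distance Z (Eq. 3).
        Z, phi0 = np.polyfit(k, phi, 1)
        return Z, phi0

    # Hypothetical broadband data around a mean wavenumber k0 = 10 rad/um,
    # for an optical path difference of 2.5 micrometres.
    k = np.linspace(9.0, 11.0, 64)          # spectral range Delta-k, rad/um
    phi = k * 2.5                           # unwrapped phase values over the range
    print(distance_from_phase(k, phi)[0])   # ~2.5 um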

The laser probe is used to select the points of interest on the device structure. The measurement of
Length and Width is relatively simple because the two points placed are in-plane. However, for
the measurement of Step-Height the two points are not in-plane. So we fix one point (Maximum
Z) using the marker, and the other point can be placed where the height needs to be measured
(Minimum Z), as shown in Figure 2.






Step I: Fix Maximum Z.   Step II: Place the probe at any other point; Δd is the height difference.

Figure 2. Proposed two-point optical method for Etch-Depth Measurement

The difference between the Maximum Z and Minimum Z of the markers (Δd) gives the Step-Height.
The film Step-Height (Δd) is directly proportional to the wavelength of the laser light and inversely
proportional to twice the refractive index of the film being etched. Thus, with this proposed method, the change in film
Step-Height is measured using relation (4),

Z = Δd = λ / (2η)    (4)

where λ is the wavelength of the laser light and η is the refractive index of the etched layer.
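As a purely illustrative evaluation of relation (4), with an assumed HeNe wavelength of 632.8 nm and an assumed film refractive index of 1.46 (neither value is stated in the paper), the height corresponding to one interference order works out as follows.

    wavelength_nm = 632.8        # assumed HeNe laser wavelength
    eta = 1.46                   # assumed refractive index of the etched oxide layer
    delta_d_nm = wavelength_nm / (2.0 * eta)
    print(round(delta_d_nm, 1))  # about 216.7 nm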

III. Experimental 

Two procedures were used to fabricate the micro-cantilever arrays. Firstly, Silicon <100> substrates were
coated with silicon dioxide using a thermal oxidation procedure as described previously [4]. The
oxide growth rate was about 1.5 nm/min with a gas flow rate of 18 sccm. After the oxidation
and patterning, the residual silicon was removed by an anisotropic etchant [5]. Secondly, micro-cantilever
arrays were deposited by RF magnetron sputtering from a gold target at room temperature.
The sputtering chamber was first pumped down to 180 mTorr. Then the deposition of chrome
was carried out under an Ar atmosphere at about 180 mTorr with a gas flow rate of 18 sccm.
During the deposition process, a continuous film of gold was also deposited onto a silicon substrate
under the same conditions for the convenience of measuring the film thickness.



243 | 



Vol. 2, Issue 1, pp. 241-248 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

The sputtered chrome-gold layer has an affinity for thiophenol molecules [7]. Considering this fact, we
dip-coated the piezo-resistive micro-cantilevers in a 1 micromole thiophenol in ethanol solution
for 3 hours and then rinsed them with ethanol for 2 minutes, after which the surface becomes functionalized.
The surface morphology of the micro-cantilever arrays was investigated by scanning electron
microscopy (SEM, JEOL 2000). Grazing incidence LASER diffraction (GILD), which avoids the
effect of the substrate on the pattern, was used to study the image measurement of the microstructure. The
mechanical properties at a temperature of 300 K were measured by Atomic Force Microscopy.

Image measurement of the significant parameters of the surface functionalized micromachined micro-cantilevers,
such as Length, Width and Step-Height, was obtained using the SEEBREZ® optical multi-sensing
system with laser probe and Taylor-Hobson's Form Talysurf® 3-D surface profiler machine
with Telemap Gold 3-D profile software. A coni-spherical stylus with a base radius of 2
micrometers was used for the contact-mode measurements.

The co-ordinate measurements were done with the SEEBREZ® optical multi-sensing system with laser
probe. This system has an auto-focus facility. After the sample was prepared for measurement, the
origin of the wafer co-ordinates was set, and the maximum z-coordinate was fixed with a
laser-beam marker. The measurement of Length and Width is relatively simple because the two
points placed are in-plane. However, for the measurement of Step-Height the two points are
not in-plane, so we fix one point (Maximum Z) using the marker and place the other point
where the height needs to be measured (Minimum Z). The difference between the Maximum Z and
Minimum Z of the markers (Δd) gives the Step-Height. The change in the film Step-Height (Δd)
is directly proportional to the wavelength of the laser light and inversely proportional to twice the
refractive index of the film being etched. Thus, with this proposed method, the change in film Step-Height
is measured using relation (5),

Z = Δd = λ / (2η)    (5)

where λ is the wavelength of the laser light and η is the refractive index of the etched layer. We have
measured Length, Width and Step-Height using the proposed method, which is relatively easy and
accurate. The x, y and z co-ordinates of any part of the structure are measured accurately.

IV. Results And Discussion 

Figure 2 shows the proposed two-point high-resolution and accurate optical method for image
measurement of the micro-cantilever array. Figure 3 shows the SEM picture of the micro-cantilever
array with a thickness of 500 nm.




Figure 3. Scanning Electron Microscopy micrograph of micro-cantilever on silicon <100> surface.



244 | 



Vol. 2, Issue 1, pp. 241-248 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

The micro-cantilevers show a fairly wide parameter distribution, with a mean parameter deviation of
approximately 10 nm. Further analysis of the SEM images shows that there is some etch-product remnant at
the bottom of the trapezoidal etch-pit; however, the side-walls of the etch-pit are smooth. Figure 4
shows Atomic Force Microscopy of the sample. It shows that the film with micro-cantilevers is
free-standing in a trapezoidal micro-cavity. Further analysis shows that the crystallite size, calculated
with the Scherrer formula, lies in the range of 5-10 nm.




Figure 4. Atomic Force Microscopy micrograph of surface functionalized micro-cantilever on silicon <100>
surface

To get the etch-depth information, the micro machined sample was kept on the base of a Taylor- 
Hobson's- Form Talysurf® 3-D surface profiler machine. The Coni-spherical Stylus was reset to 
original position after deciding the frame of x-y travel. The machine starts building etch-profile 
slowly as the coni-spherical stylus moves over the sample. Three-Dimensional Profile generated using 
Taylor Hobson's Talysurf Profiler machine of the sample are shown in Fig. 5, with the contact mode 
using coni-spherical stylus moving parallel to the film plane. 




Figure 5. 3-D Surface Profiles (Silhouettes) of Micro cantilever. 




The image generated using stylus-based profilometry is in conformity with the Scanning Electron
Microscopy. Figures 6 and 7 depict the width and length measurements with the SEEBREZ
machine, with high accuracy and repeatable precision.




Figure 6. Width measurement of Micro cantilever.

Cursor 1: X = 0.0715 mm, Y = 0.507 mm, Z = 13.6 µm
Cursor 2: X = 0.157 mm, Y = 0.507 mm, Z = 16.3 µm
Horizontal distance: 0.0848 mm
Height difference: 2.72 µm
Oblique distance: 0.0848 mm

Figure 7. Length measurement of Micro cantilever.

A two-dimensional profile to recover the depth information is depicted in Figure 8.






(Etch-profile plot of the micro-cantilever over a 0.7 mm scan length, with the profile annotated at 21.5 µm.)



Figure 8. Etch-profile of Micro cantilever. 
Table 1 depicts the designed and measured dimensions using the proposed method. 

Table 1: Dimensions of Micro cantilever: Designed and Measured Using the Proposed Method

Sr. No.    Dimension      Designed            Measured after micro-machining
1          Length         200 micrometers     184 ± 0.01 micrometers
2          Width          60 micrometers      50 ± 0.01 micrometers
3          Step-height    200 nanometres      180 ± 0.01 nanometres



V. Conclusion 

In summary, micro-cantilever arrays have been successfully fabricated on silicon <100> substrate
using a bulk micromachining technique and deposited with a chrome-gold layer using the RF magnetron
sputtering method. These micro-cantilever arrays were surface functionalized using 1 micro-mole
thiophenol in ethanol solution for 3 hours, after rinsing with ethanol solution for 2 minutes. The
micro-cantilever surface thus becomes functionalized for mass sensing. The Length, Width and Step-height
measurements of the micro-cantilevers are obtained with the proposed high-resolution and accurate two-point
optical non-contact method.

The etch profile is very important in assessing the etch uniformity, side-wall smoothness and etch depth.
The etch profile also indicates the shape, the slope and the etch depth of the micro cavity in which the micro-cantilevers
are free-standing. It is obvious, from inspection of the etch profile of the trapezoidal micro
cavity, that the side-walls of this anisotropically etched trapezoidal micro cavity are smooth, since the
profile is not jagged. However, the bottom of the anisotropically etched micro cavity is not smooth,
since the profile line is jagged, containing hills and dales; surface tension in inks or paints
can cause either a rounding or a dimpling of the "step", and the stresses caused by the curing process can
cause distortion in the substrate. The curvature of the substrate might well be sufficient to prevent the
use of the simple least-squares line fit. The step and substrate areas are then treated as line segments,
allowing the curvature of the substrate to be removed and resulting in a straight-line representation of the
substrate. The step heights are calculated from this line in the areas adjacent to each step.
Inspection of the etch profile and the Scanning Electron Micrograph confirms that the side-walls are
smooth and that at the bottom of the trapezoidal micro-cavity there is some etch product remnant. With
this substantial conclusion, we propose a high-resolution accurate method for exact measurement of




Length, Width and Etch-Depth of the micro-machined micro-cantilever. Further, this method can be
extended to the measurement of significant parameters of other out-of-plane MEMS structures.
The principal disadvantage of using undersampled data is, of course, a reduced signal-to-noise ratio,
not only because fewer data are acquired but also because of aliasing of noise at frequencies higher
than the sample rate. However, this reduction in signal-to-noise ratio may in many circumstances be
offset by the ability to average several scans in a shorter period and with less computer storage
than is required by conventional methods. The rapid scans also reduce the sensitivity of the instrument
to certain environmental effects, such as mechanical drift due to temperature and vibration.

Acknowledgements 

The author acknowledges the Microelectronics Group, Nanoelectronics Centre, Department of 
Electrical Engineering, Department of Chemistry, Department of Physics, Suman Mashruwala 
Micromachining Laboratory - Department of Mechanical Engineering, Indian Institute of Technology 
Bombay, Powai, Mumbai and Precision Engineering Division, Bhabha Atomic Research Centre, 
Trombay, Mumbai, INDIA. 

References 

[1] P. R. Apte, U. D. Vaishnav, S. G. Lokhare, V. R. Palkar, S. M. Pattalwar, "Micromechanical components with novel properties," Proc. of SPIE, 3321, 287-297 (1996).
[2] Marc J. Madou, Fundamentals of Microfabrication: The Science of Miniaturization, CRC Press (2002).
[3] Xinlin Wang, "Femtosecond laser direct fabrication of metallic cantilevers for micro-corrosion-fatigue test," J. Micromech. Microeng. 17, 1307 (2007).
[4] Bruce Deal, "The oxidation of silicon in dry oxygen, wet oxygen and steam," J. Electrochem. Soc., Vol. 110, issue 6, pp. 527-533 (1963).
[5] Elin Steinsland, Terje Finstad and Anders Hanneborg, "Etch rates of (100), (111) and (110) single-crystal silicon in TMAH measured in situ by laser reflectance interferometry," Sensors and Actuators A: Physical, Vol. 86, issues 1-2, pp. 73-80 (2000).
[6] Z. Sun and A. Weckenmann, "Reflective properties of typical microstructures under white light interferometer," Meas. Sci. Technol., Vol. 22, Number 08, 103 (2011).
[7] John A. Seelenbinder, Chris W. Brown and Daniel W. Urish, "Self-assembled monolayers of thiophenol on gold as a novel substrate for surface-enhanced infrared absorption," Applied Spectroscopy, Vol. 54, Issue 3, pp. 366-370 (2000).
[8] F. Remacle and E. S. Kryachko, "Thiophenol and thiophenol radical and their complexes with gold clusters Au5 and Au6," Journal of Molecular Structure, Volume 708, Issues 1-3, 1 December 2004, pp. 165-173, ISSN 0022-2860, 10.1016/j.molstruc.2004.02.056.
Authors 

A. S. Kurhekar was born in India in 1966. He received the Bachelor degree from Amaravati
University, India, in 1988 and the Master in Engineering degree from Dr. B. A. M. University,
India, in 1993, both in Electronics Engineering. He is currently pursuing the Ph.D. degree with
the Department of Electrical Engineering, Indian Institute of Technology, Bombay, India. His research
interests include MEMS layout, design, simulation and fabrication.

P. R. Apte was born in India, in Year 1947. He received the Bachelor degree from the Indore University 

of India, in Year 1968 and the Master of Technology degree from Indian Institute of Technology, 

Kanpur, India, in Year 1993, both in Electronics engineering. He was conferred Ph.D. by University of 

Mumbai in 1988. He is currently a Professor with the Department of Electrical Engineering, Indian 

Institute of Technology, Bombay, India. He was a member of the team that made the first TTL IC in 

India in 1972. Since then, He has more than 20 years experience in MOS/Bipolar IC design and 

fabrication. He has worked at Stanford IC labs for 2 years (1977-78) as a visiting research associate. To 

his credit, there are 79 publications in the Silicon Technology, which includes Journal and International 

conference papers. His current research interest includes MEMS - Design, Layout, Mask-making, Fabrication, Packaging, 

Testing, Reliability etc. 







Experimental Investigation on Four Stroke 

Ceramic Heater Surface Ignition C.I. Engine Using 

Different Blends of Ethyl Alcohol 

R. Rama Udaya Marthandan 1, N. Sivakumar 2, B. Durga Prasad 3

1 Research Scholar, Dept. of Mech. Engg., Sathyabama University, India
2 Asst. Professor, Sun College of Engineering and Technology, Kanyakumari Dist., India
3 Associate Professor, JNTU, Ananthapur, A.P., India



Abstract 

In this paper an experimental investigation of the performance of a surface ignition ceramic heater four stroke
CI engine fuelled with pure diesel (B0D100E0) and ethanol-diesel blends containing 10%, 20%, 25% and 30%
by volume of ethanol is presented. An n-butanol (B) additive is used to improve the solubility of ethanol (E) in diesel (D);
it acts as a bridging agent through molecular compatibility and bonding to produce a homogeneous
blend. The ethanol-diesel fuel blend affects stability, viscosity, lubricity, corrosiveness and safety. The tests are
carried out on a 10 HP ceramic heater surface ignition single cylinder diesel engine under steady state
operating conditions. The engine is run at speeds of 1250 rpm and 1500 rpm. The relevant parameters
such as brake thermal efficiency (BTE), brake specific fuel consumption (BSFC) and emissions are calculated for
pure diesel and the ethanol-diesel blends B5D85E10, B5D75E20, B5D70E25 and B5D65E30. The Partially
Stabilized Zirconia (PSZ) ceramic heater reduces the NOx emissions by 220 ppm; under half load the
B5D85E10 blend gives minimum CO emissions and unburned HC emissions of 24 ppm from the engine
and improves the engine output behaviour by 2%.

KEYWORDS: ethanol, n-butanol, emissions, ceramic heater.

I. Introduction 

Ethanol is one of the possible fuels for diesel replacement in CI engines [1]. It can be made from raw
materials such as sugarcane, sorghum, corn, barley, cassava, sugar beets, etc. A biomass-based
renewable fuel, ethanol has cleaner burning characteristics and a high octane rating. The application
of ethanol as a supplementary compression-ignition fuel may reduce environmental pollution,
strengthen the agricultural economy, create job opportunities, reduce diesel fuel requirements and thus
contribute to conserving a major commercial energy source [2].

A surface ignition ceramic heater [3] CI engine is able to operate at higher temperatures, enabling
more complete combustion of the fuel and thus increasing combustion efficiency. This should increase
engine performance, decrease fuel consumption and reduce pollution [4]. The ceramic heater provides
instant heat within seconds of being switched on, which helps save fuel and reduce emissions. It is mounted
through the engine head, where it heats up and warms the air moved over its surface, owing to its inherent
self-regulating characteristics. A ceramic heater for diesel combustion represents a simple, low-cost
and easy approach to improving diesel engine performance [5].

In S.I. engines, a premixed fuel-air vapour is drawn in during the suction stroke and a single high-intensity
spark passes across the electrode, producing a core of flame from which the combustion spreads to the
envelope of mixture surrounding it at a fast rate. The above two methods evidently show that the fuel
properties suited to the first method will not be suitable for the second, and hence, if we need an
engine with multi-fuel capability, the nature of combustion should be very different from the above




methods. This is where the concept of surface ignition comes into active consideration. Surface 
ignition indicates the beginning of combustion from a hot surface. It will be interesting to know that 
almost all fuels exhibit this property to varying degrees, the alcohols being highly susceptible to this 
kind of combustion [8]. 

II. Experimental Work 

A stationary four-stroke surface ignition [8] ceramic heater [9] CI engine that can be run at different speeds is selected
for the experiment. The major specifications of the engine are given in Table 1 and the properties of the fuels
used are given in Table 2. The engine is connected to a dynamometer, air stabilizing tank, diesel and
ethanol blend consumption measuring device, exhaust gas analyzer, etc. [6]. A ceramic heater is fixed
inside the cylinder and connected to a 12 V DC battery to heat the combustion chamber. Diesel fuel
and ethanol-diesel blends with additive, namely B0D100E0, B5D85E10, B5D75E20, B5D70E25 and
B5D65E30, are tested. The ethanol and additive are obtained from the local market. The engine is run
under no-load conditions and its speed is adjusted to 1250 rpm and 1500 rpm by adjusting the screw
provided with the fuel injector pump. The engine is run until it gains uniform speed, after which it is
gradually loaded. The experiments are conducted at six power levels for each load condition. The
engine is run for at least 7 minutes before data are collected. Each experiment is repeated 5 times
and the average value is taken. Observations are made during the tests for the determination of
various engine parameters such as brake specific fuel consumption, brake thermal efficiency and exhaust
emissions [7].

Heat transfer affects engine performance, efficiency and emissions. For a given mass of fuel within the
cylinder, higher heat transfer to the combustion chamber walls lowers the average combustion
gas temperature and pressure and reduces the work per cycle transferred to the piston. Thus specific
power and efficiency are affected by the magnitude of engine heat transfer [8]. Advances in
engine technology through the introduction of a ceramic heater increase the engine output efficiency and reduce the
emission parameters [10].

In the ceramic heater C.I. engine, the injection pressure and rate of injection can also offset the adverse
effect of the ceramic heater, as shown in Figure 1. In this new system, the decrease in premixed
combustion due to the decrease in ignition delay increases the brake specific fuel consumption (BSFC).
A Partially Stabilized Zirconia (PSZ) ceramic heater is fitted inside the cylinder because of its very
high fracture toughness among ceramics; it has one of the highest maximum service temperatures
(2000°C) of all ceramics and it retains some of its mechanical strength close to its
melting point (2750°C). The PSZ ceramic heater is used in the diesel engine because of two very notable
properties: high temperature capability and low thermal conductivity. None of the
other ceramics possesses a thermal conductivity as low as that of zirconia. This means that an engine using
a zirconia ceramic heater retains much of the heat generated in the combustion chamber instead
of losing it to the surroundings [9].



Table 1 Tested engine specifications

Engine type            4-stroke single cylinder engine
Make                   Kirloskar
Power                  10.0 kW
Bore x Stroke (mm)     102 x 110
Cubic capacity (cc)    898
Compression ratio      18:1
Cooling system         Water cooled
Lubrication system     Force feed
Attachment             Ceramic heater (12 V, DC)




Table 2 Properties of blending stocks

Properties                  Diesel    Ethanol    n-Butanol
Boiling point (°C)          180       78         117
Flash point (°C)            65        10         35
Density (g/ml at 20°C)      0.829     0.789      0.81
Oxygenate (wt%)             0.84      35         7.5
Carbonate (wt%)             -         52         25
Hydrogen (wt%)              87        13         74
Viscosity (cS at 40°C)      13        1.2        10.3
Cetane number               48        6          40



Figure 1 shows the experimental set-up of the work. A dynamometer is used to measure the engine
output power. The exhaust gas analyzer is used to measure the CO, HC and NOx emissions
from the engine. A fuel consumption meter is used to measure the brake specific fuel consumption
of the engine, and a Data Acquisition System (DAS) is used to calculate all required output parameters.



Figure 1 Experimental Setup: 1. Flywheel, 2. Dynamometer, 3. R.P.M. measuring device, 4. Air stabilizing tank, 5. Digital air flow meter, 6. Air filter, 7. Ceramic heater, 8. Injector, together with the fuel tank, fuel measuring device and PC-based data acquisition system.



III. Results and Discussions 

The experimental tests were carried out on the surface ignition ceramic heater four stroke CI engine 
using pure diesel and ethanol- diesel blends with n-butanol additive at different speeds. The relevant 
parameters such as engine torque and fuel consumption of the engine were recorded and the brake 
specific fuel consumption, brake thermal efficiency were also calculated at 1250 rpm and 1500 rpm. 



251 | 



Vol. 2, Issue 1, pp. 249-257 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

The engine emissions of CO, unburned HC and NOx were analyzed using the exhaust gas analyzer.
The results were obtained through the data acquisition system and are shown below. The fuel
consumption varies with the percentage of ethanol present in the blend: the more ethanol added to the
diesel, the higher the fuel consumption.
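For reference, BSFC and BTE are obtained from the measured fuel consumption and brake power roughly as in the sketch below; the 40 MJ/kg heating value and the sample numbers are assumptions for illustration, not values from this work.

    def bsfc_g_per_kwh(fuel_mass_g, time_h, brake_power_kw):
        # Brake specific fuel consumption: fuel mass flow rate divided by brake power.
        return (fuel_mass_g / time_h) / brake_power_kw

    def brake_thermal_efficiency(brake_power_kw, fuel_mass_g, time_h, lhv_mj_per_kg):
        # BTE = brake power / rate of fuel energy input.
        fuel_power_kw = (fuel_mass_g / 1000.0 / (time_h * 3600.0)) * lhv_mj_per_kg * 1000.0
        return brake_power_kw / fuel_power_kw

    # Assumed example: 5 kW brake power, 100 g of blend burned in 3 minutes, LHV 40 MJ/kg.
    print(bsfc_g_per_kwh(100, 3 / 60, 5.0))               # 400 g/kWh
    print(brake_thermal_efficiency(5.0, 100, 3 / 60, 40.0))  # ~0.225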



(Plot of brake specific fuel consumption versus power in kW at 1250 rpm for B0D100E0, B5D85E10, B5D75E20, B5D70E25 and B5D65E30.)

Figure 2 BSFC of the engine for 1250 rpm



(Plot of brake specific fuel consumption versus power in kW at 1500 rpm for the same five fuels.)

Figure 3 BSFC of the engine for 1500 rpm

Figures 2 and 3 show the BSFC of the engine. When the engine runs at 1250 rpm over different engine
loads, the BSFC for the B5D65E30 blend increases by 4%, for the B5D85E10 blend it decreases by 1.2%
at maximum engine load, and for the B5D75E20 blend it varies within about 2.5% of that of diesel.
The results show a trend of increasing fuel consumption with increasing percentage of ethanol in the blends.

When the engine runs at 1500 rpm, the BSFC for the B5D65E30 blend increases at all engine
load conditions. The B5D70E25 blend gives 3.25% less BSFC, next to pure diesel fuel. However,
the BSFC is high at minimum power for all fuel ratios. This increase in fuel consumption is due to the
lower heating value of ethanol compared with that of pure diesel [4].






(Plot of brake thermal efficiency versus power in kW at 1250 rpm for the five fuels.)

Figure 4 BTE of the engine for 1250 rpm

Figures 4 and 5 show the results for the brake thermal efficiency (BTE) of the engine. When the engine
runs at 1250 rpm, the BTE for the B5D70E25 blend increases by 2.5%, and at
average load the BTE for the B5D65E30 blend increases by 3.4%. These results show that the
differences in brake thermal efficiency between the blends and diesel are relatively small at
1500 rpm. When the engine runs at 1500 rpm the brake thermal efficiency increases for
the B5D75E20 and B5D70E25 blends by 5% at high load and 2.5% at low load. The exhaust
emissions are measured in terms of carbon monoxide (CO), hydrocarbons (HC) and oxides of
nitrogen (NOx). The results for diesel fuel as well as the ethanol-diesel blends are given
below. The oxygen content of the blended fuels helps to increase the oxygen-to-fuel ratio in the
fuel-rich regions; the resulting more complete combustion leads to a reduction of CO in the exhaust.
As the percentage of ethanol in the blends increases, the NOx emission is reduced. This is because the
air-fuel ratio in the case of the ethanol-diesel blends is lower compared with diesel alone, and the high latent
heat of vaporization of ethanol lowers the in-cylinder temperature, resulting in lower NOx emissions [7].



(Plot of brake thermal efficiency versus power in kW at 1500 rpm for the five fuels.)

Figure 5 BTE of the engine for 1500 rpm






(Plot of CO emissions versus power in kW at 1250 rpm for the five fuels.)

Figure 6 CO emissions for 1250 rpm



(Plot of CO emissions versus power in kW at 1500 rpm for the five fuels.)

Figure 7 CO emissions for 1500 rpm

Figures 6 and 7 show the CO emissions of the engine. At minimum load, the CO emissions with the blends are higher than with pure diesel, and a higher percentage of ethanol gives more CO up to half load. For engine loads above half load, the CO emissions become lower than with diesel for all blends and all speeds, by 0.01% to 0.07%. Up to half load, the variation in CO emissions between pure diesel and B5D85E10 is only 0.05% at 1500 rpm. At average load the CO emission is the same for 1500 rpm and differs by 0.01% to 0.02% at 1250 rpm. As the percentage of ethanol in the blends is increased, the CO emission is reduced; the emission is reduced with the 10%, 20%, 25% and 30% ethanol-diesel blends compared with diesel alone, because ethanol contains less carbon than diesel. With the same fuel dispersion pattern as diesel, the oxygen content of the blended fuels helps to increase the oxygen-to-fuel ratio in the fuel-rich regions. This results in more complete combustion, which reduces the CO in the exhaust of the ceramic heater engine.

The reduction of CO emissions at full load is due to more complete combustion, a consequence of the oxygen contained in ethanol. When the engine works above half load, the temperature in the cylinder is high, which makes the chemical reaction of the fuel with oxygen easier, and the combustion becomes more complete at both speeds.

The level of unburned hydrocarbons (HC) in the exhaust gases is generally specified in terms of the total hydrocarbon concentration expressed in parts per million (ppm) of carbon atoms.






[Figure: HC emissions versus power (kW) at 1250 rpm for B0D100E0, B5D85E10, B5D75E20, B5D70E25 and B5D65E30]

Figure 8 HC emissions for 1250 rpm



[Figure: HC emissions versus power (kW) at 1500 rpm for B0D100E0, B5D85E10, B5D75E20, B5D70E25 and B5D65E30]

Figure 9 HC emissions for 1500 rpm

Figures 8 and 9 show the HC emissions of the engine. A higher percentage of ethanol gives more HC emissions at all speeds. For the B5D85E10 blend the HC is reduced by 5 to 10 ppm relative to pure diesel at 1250 rpm and by 7 to 12.5 ppm at 1500 rpm. This is because the high temperature in the ceramic heater engine cylinder makes it easier for the fuel to react with oxygen when the engine runs at top load and high speed. Figure 9 shows that the unburned HC emissions from the engine are higher for all the blended fuels when the engine runs at 1500 rpm.




Figure 10 NOx emissions for 1250 rpm






[Figure: NOx emissions (ppm) versus power (kW) at 1500 rpm]

Figure 11 NOx emissions for 1500 rpm

Figures 10 and 11 show the NOx emissions of the engine. When the engine runs at 1250 rpm the NOx emission is minimum for pure diesel, and it is higher at 1500 rpm. The NOx emission is higher for the B5D85E10 blend at 1500 rpm by 20%. The B5D70E25 blend gives an average NOx level over all working conditions. The NOx emissions from the engine fuelled with the blends are higher than those of diesel, and the NOx emissions increase for all blends and speeds as the load is increased. At low load the NOx is minimum because the fuel-air mixture, with its spread in composition about stoichiometric, burns; during the mixing-controlled combustion phase the burning mixture is likely to be closer to stoichiometric with the help of the ceramic heater. When the engine runs at 1500 rpm the NOx is reduced by 40% for the B5D75E20 blend. This is because the air-fuel ratio of the ethanol-diesel blends is lower than that of diesel alone, and the latent heat of vaporization of ethanol is minimum at the same temperature, resulting in minimum NOx emissions [7].

IV. Conclusions 

An experimental investigation was conducted on blends of ethanol and diesel fuel in a ceramic heater surface-ignition single-cylinder CI engine. The tested blends contained 10% to 30% ethanol by volume together with 5% of the additive n-butanol. The engine was operated with each blend at different power levels and at speeds of 1250 rpm and 1500 rpm.

The experiment showed that n-butanol is a good additive for blending diesel with ethanol.

Using the ceramic heater improved engine performance by up to 2%, controlled the emissions and reduced unburned HC by 7.5 ppm.

The brake specific fuel consumption is slightly increased, by 62 g/kWh for B5D65E30 and 58 g/kWh for B5D70E25 at 1250 rpm, and by 60 g/kWh for B5D65E30 at 1500 rpm.

The brake thermal efficiency is increased for the B5D75E20 blend by 2% and for B5D65E30 by 2.5% at 1250 rpm. When the engine runs at 1500 rpm the brake thermal efficiency is increased for B5D75E20 by 5% at high power and for B5D70E25 by 2.5% at low power.

Higher percentages of ethanol give more CO emission, by a maximum of 70.34%. At half load the CO emission is average. For engine loads above half load, the CO emission becomes lower than with diesel for all blends at 1250 rpm. At 1500 rpm the CO emission is the same for all blends at half load, and is increased by 5% at low and high load.

For the B5D85E10 blend the HC emission is reduced by 20 ppm. When the engine runs at 1500 rpm, the HC emission becomes lower as the load increases; the emission is lower for B0D100E0 and B5D85E10 by 24 ppm and 10 ppm respectively.

NOx emission is reduced for B0D100E0 and B5D70E25 by 100 ppm at 1250 rpm. When the engine runs at 1500 rpm the NOx is reduced for B5D65E30 by 220 ppm up to half load.




References 

[1]. Alan C. Hansen, Qin Zhang, Peter W. L. Lyne, "Ethanol-diesel fuel blends - a review", Bioresource Technology 96 (2005) 277-285.
[2]. E. A. Ajav, Bachchan Singh, T. K. Bhattacharya, "Experimental study of some performance parameters of a constant speed stationary diesel engine using ethanol-diesel blends as fuel", Biomass and Bioenergy 17 (1999) 357-365.
[3]. Daniel Ng, National Aeronautics and Space Administration, Lewis Research Center, Cleveland, Ohio 44135, "Temperature measurement of a miniature ceramic heater in the presence of an extended interfering background radiation source using a multiwavelength pyrometer".
[4]. Chonglin Song, Zhuang Zhao, Gang Lv, Jinou Song, Lidong Liu, Ruifen Zhao, "Carbonyl compound emissions from a heavy-duty diesel engine fueled with diesel fuel and ethanol-diesel blend", Chemosphere 79 (2010) 1033-1039.
[5]. Douglas J. Ball, Glenn E. Tripp (Delphi Automotive), Louis S. Socha, Achim Heibel, Medha Kulkarni, Phillip A. Weber, Douglas G. Linden, "A comparison of emissions and flow restriction of thin wall ceramic substrates for low emission vehicles", 199-01-0271, Society of Automotive Engineers, Inc.
[6]. Hwanam Kim, Byungchul Choi, "Effect of ethanol-diesel blend fuels on emission and particle size distribution in a common-rail direct injection diesel engine with warm-up catalytic converter", Renewable Energy 33 (2008) 2222-2228.
[7]. Jincheng Huang, Yaodong Wang, Shuangding Li, Anthony P. Roskilly, Hongdong Yu, Huifen Li, "Experimental investigation on the performance and emissions of a diesel engine fuelled with ethanol-diesel blends", Applied Thermal Engineering 29 (2009) 2484-2490.
[8]. John B. Heywood, Internal Combustion Engine Fundamentals, McGraw Hill Book Company, New Delhi, 1988.
[9]. http://global.kyocera.com/prdct/fc/product/pdf/heaters.pdf
[10]. P. Satge de Caro, Z. Mouloungui, G. Vaitilingom, J. Ch. Berge, "Interest of combining an additive with diesel-ethanol blends for use in diesel engines", Fuel 80 (2001) 565-574, Elsevier.

Authors 

R. Rama Udaya Marthandan is working as a Professor and Head of the Department of Mechanical Engineering at Sun College of Engineering and Technology, Kanyakumari District, INDIA. He received the M.E. degree in Mechanical Engineering from Annamalai University in 2000. He is a life member of ASME and ISTE. His research interests include alternative fuels and IC engines.

N. Sivakumar is working as an Assistant Professor at Sun College of Engineering and Technology, Kanyakumari District, INDIA. He received the M.E. degree in Mechanical Engineering from Anna University, Chennai in 2004. He is a life member of ISTE. His research interests include finite element analysis.

B. Durgaprasad is working as an Associate Professor at JN Technical University, Ananthapur, Andhra Pradesh, INDIA. He received his Ph.D. in the area of surface ignition IC engines from JNTU in 2000.








Performance Verification of DC-DC Buck 

Converter using Sliding Mode Controller for 

Comparison with the Existing Controllers - A 

Theoretical Approach 

Shelgaonkar (Bindu) Arti Kamalakar, N. R. Kulkarni
Modern College of Engineering, Pune, Maharashtra.



Abstract 

In recent electronic applications the variable DC power supply is derived with light weight and small size by using a 100 kHz switching frequency. When the frequency is this high, the load experiences a practically uninterrupted DC voltage. According to the needs of the application, a buck converter is considered for analysis. DC-DC converters are nonlinear and time-variant systems and do not lend themselves to the application of linear control theory. The performance of the buck converter has been studied and is undertaken for theoretical verification, graphical representation and Matlab simulation. From the linear controllers, PI and PID are considered, and the nonlinear sliding mode controller is taken as the other control method. The paper highlights the nonlinear aspects of the buck converter, the nonlinear sliding mode controller and the hybrid SMC PID controller, and also focuses on the benefits of nonlinear control.

KEYWORDS: SMC (sliding mode control), PI and PID control.

I. Introduction 

A DC-DC converter converts a DC voltage from a high level to a low level, or vice versa, depending on the type of converter used in the system. The buck converter is one of the most important of these circuits: it converts a high DC voltage to a lower one. In a buck converter, high-speed switching devices are used, and better power conversion efficiency in steady state can be achieved. In this paper the performance of the buck converter is analyzed. The circuit may contain nonlinearities such as delay and hysteresis, because of which the output voltage is not constant. To settle the output voltage within a minimum settling time and with less overshoot, different types of controllers are considered: the linear controllers PI and PID, and the nonlinear sliding mode controller (SMC). The paper compares the performance of the DC-DC buck converter using PI, PID, SMC and SMC PID controllers. The performance of the buck converter has been analyzed in many papers; among them, papers [1][2] have been studied and are undertaken for theoretical verification, graphical representation and Matlab simulation.

II. Simulated model of buck converter 

The simulated model of the buck converter in Matlab is shown in Figure 2.1. It consists of a 24 V DC input supply, a GTO (gate turn-off thyristor) as the switch, and a PWM (pulse width modulation) generator providing the switching pulses to the GTO. The inductor is 69 µH [1] and the capacitor is 220 µF [1], with a load resistance of 13 Ω [1]. The desired output of this converter is 12 V DC.
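For orientation, the nominal operating point of this circuit can be checked with the ideal continuous-conduction-mode buck relations (standard textbook expressions, not taken from the paper; the script below is only a sketch):

Vin = 24; Vo = 12; L = 69e-6; C = 220e-6; R = 13; fs = 100e3;
D   = Vo/Vin;                 % ideal duty ratio = 0.5
Io  = Vo/R;                   % nominal load current, about 0.92 A
dIL = (Vin - Vo)*D/(L*fs);    % inductor current ripple, about 0.87 A peak-to-peak
dVo = dIL/(8*C*fs);           % output voltage ripple, well under 10 mV

These values are consistent with the 12 V target and the small ripple seen in the simulation; the transient behaviour (the 14.12 V output discussed next) is what the controllers are introduced to correct.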






[Figure: Matlab/Simulink model of the buck converter - 24 V DC source, GTO switch driven by a pulse generator, freewheeling diode, L = 69e-6 H, C = 220e-6 F, R = 13 ohms, with scopes and displays for load current and load voltage (14.12 V shown)]

Figure No 2.1 Buck converter in Matlab Simulink. 

The uncompensated circuit has a settling time of 2 msec and its output voltage is 14.12 V, whereas it is required to settle at 12 V. To compensate for these transients in the buck converter, different types of controllers can be used.

III. Control Methods 

Figure 3.1 shows a block diagram of the methods that can be used to control DC-DC converters and the disturbances that influence the behavior of the converter and its stability. The feedback signal may be the output voltage, the inductor current, or both, and the feedback control can be either analog or digital. Of these control methods, PI and PID are linear, while SMC and SMC PID are nonlinear. A comparison between the linear and nonlinear control methods is given below.

[Figure: DC-DC converter control block diagram - disturbances (input voltage variation, non-linear component variation, output power variation); input voltage and switching signal feed the converter; the feedback signal (output voltage, inductor current, or both) is processed by the PI, PID, sliding mode control (SMC) or SMC PID control method]



Figure No. 3.1 Types of controller. 




3.1 PI control method 

The settling time of the PI-compensated buck converter circuit is 11 msec, with an initial overshoot of 27 V in the output voltage and 43 A in the inductor current. After the 11 msec settling time the output voltage is at 12 V and the inductor current is at 1.738 A.
Load Voltage:- 




Figure No. 3.1.1 Load voltage of buck converter in Matlab/Simulink model. 
Inductor current: - 







Figure No. 3.1.2 Inductor current from simulation.

3.2. Effect of variation of load resistance on buck converter with PI control 

When the buck converter is considered with PI control, it has a settling time of 11 msec and the output voltage is 12 V. When the circuit was tested under load variation from short circuit to open circuit, it was found that as the load resistance increases the load current decreases.



[Figure: load resistance variation for the buck converter with PI control - output voltage stays at 12 V while the inductor current falls as the load resistance is increased from 10 to 200 ohms]



Figure No.3.2.1 Bar graph for the variation of load resistance. 




3.3. Effect of variation of line voltage on buck converter with PI control 

When the circuit was tested under line voltage variation from 20 V to 34 V, it was found that as the line voltage increases the inductor current increases, while the settling time remains almost constant for the PI controller.



[Figure: line variation for the buck converter with PI control - inductor current (A) and settling time (msec) for input voltages of 20, 24, 28, 30 and 34 V]



Figure No.3.3.1 Bar graph for the line variation. 

3.4. PID control method 

PID controllers are dominant and popular and have been widely used since the 1940s, because the desired system response can be obtained and a wide class of systems can be controlled. The basic AC modeling approach, a common averaging technique, is used for PID modeling. After the circuit is modeled, the PID controller is designed with the help of Matlab in a simple way to obtain an overall system with good performance. A Simulink model of the converter is built and the controller obtained is added to the model.
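As an illustration of this design flow (a sketch only, assuming the Control System Toolbox; the averaged model below is the standard ideal duty-cycle-to-output transfer function of a buck converter, and the PID gains are hypothetical, not the gains used in the paper):

Vin = 24; L = 69e-6; C = 220e-6; R = 13;
Gvd  = tf(Vin, [L*C, L/R, 1]);     % averaged small-signal buck model Vin/(LC s^2 + (L/R) s + 1)
Cpid = pid(0.05, 100, 1e-5);       % illustrative Kp, Ki, Kd values
T    = feedback(Cpid*Gvd, 1);      % closed-loop duty-to-output transfer function
step(0.5*T); stepinfo(0.5*T)       % response to the 0.5 duty step needed for 12 V

The same closed loop is what the Simulink model of Figure 3.4.2 implements, with the switching converter in place of the averaged model.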



[Figure: block diagram of the PID control loop - reference voltage compared with the output voltage, PID compensator, PWM operator at the switching frequency, and the buck converter]



Figure No 3.4.1 The block diagram of controller includes PID control. 






[Figure: Matlab/Simulink model of the buck converter with PID control - 24 V source, GTO switch, freewheeling diode, L = 69e-6 H, C = 220e-6 F, R = 13 ohms, with the PID controller and PWM pulse generator in the feedback path and scopes for load current and load voltage]



Figure No. 3.4.2 Buck converter with PID control Matlab model 

3.4.1. Inductor current waveform 

When the buck converter is considered with the PID controller, it has been observed that the circuit has a settling time of 2.5 msec. The output voltage attains the steady-state value of 12 V, which is the expected output for this application. The settling time of the PID-controlled buck converter is 2.5 msec, with a transient voltage of 16 V and a transient current of 28 A, which are lower than with the PI controller.

3.5 Effect of variation of load resistance on buck converter with PID control 

When the PID-controlled buck converter is subjected to load variation in the range of 10 Ω to 13 Ω, the settling time and inductor current remain almost the same. The load regulation found for this circuit is 29.82%.



[Figure: load resistance variation for the buck converter with PID control - output voltage (V) and inductor current (A) for load resistances from 10 to 200 ohms; the output voltage stays at 12 V while the inductor current falls]



Figure No.3.4.1.1 Bar graph for variation of load resistance in PID control circuit. 




3.6 Effect of variation of line voltage on buck converter with PID control 



[Figure: line variation for the buck converter with PID control - inductor current (A) and settling time (msec) versus input voltage (V)]



Figure No. 3.4.1.2 Bar graph line variation. 

Figure 3.4.1.2 shows the line variation for the PID-controlled buck converter circuit. As the input voltage increases, the inductor current increases and the settling time also increases; the settling time remains of the order of milliseconds for this circuit. From this variation we can say that this controller can be used in the range of 20 V to 28 V with the same output voltage, settling time and inductor current.

IV. SMC Control Method 

Of all the control methods above, sliding mode control is the only nonlinear one, and its performance is studied for comparison with the linear methods. SMC can be implemented for switch-mode power supplies. The controller block diagram of the SMC is shown in Figure 4.1.



[Figure: SMC controller block diagram - the output voltage is compared with the reference voltage, the error is integrated and amplified by a gain, the inductor current is subtracted in an inner current feedback loop, and a relay (hysteresis) produces the switch drive signal]



Figure No. 4.1 The simulation controller block diagram SMC. 

4.1 Selection of various parameters for the circuit

The control topology consists of a linear and a non-linear part. The non-linear parameter can be selected, while the designer is left to tune the linear part and obtain the optimum values for the application. The output of the integrator is amplified by a gain and the result is subtracted from the inductor current loop; the difference is passed through a hysteresis block. One major drawback of this model is the lack of a standard procedure for selecting the gain. The hysteresis parameter can be selected by measuring the peak-to-peak inductor current, which sets the limits for the hysteresis band.
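The switching law implied by Figure 4.1 can be sketched in a few lines of Matlab (a sketch only: the gain K, hysteresis half-width h and sample time Ts are hypothetical tuning parameters, chosen by the designer as described above):

function [u, z] = smc_step(vref, vo, iL, z, u, K, h, Ts)
% One control step of the sliding-mode law of Figure 4.1 (illustrative).
z = z + (vref - vo)*Ts;          % integral of the output voltage error
s = K*z - iL;                    % sliding function: amplified integral minus inductor current
if s > h                         % relay with a hysteresis band of width 2h
    u = 1;                       % switch ON
elseif s < -h
    u = 0;                       % switch OFF
end                              % inside the band the previous switch state is kept
end

Called once per sample with the measured output voltage and inductor current, this reproduces the integrator-gain-relay path of the block diagram.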



Table 2: Main Circuit Parameters

Parameter                       Symbol   Value
Input voltage                   Vs       24 V
Output voltage                  Vo       12 V
Capacitor                       C        220 µF
Inductor                        L        69 µH
Load resistance                 RL       13 Ω
Nominal switching frequency     fs       100 kHz
Switch control signal           u        0 (switch off), 1 (switch on)




4.2. Buck converter with sliding mode control simulated circuit diagram 

When the buck converter is considered with the SMC controller, it has been observed that the circuit has a settling time of 20 msec. The output voltage attains the steady-state value of 12 V, which is the expected output for this application. Under load variation of the SMC circuit from 0 to ∞, it was found that as the load resistance increases the load current decreases and the settling time increases continuously.



[Figure: Matlab/Simulink model of the buck converter with SMC - buck converter power stage (L = 69e-6 H, R = 13 ohms) with the relay, integrator and gain of the SMC in the feedback path; the displays show a load voltage of about 12.25 V and a load current of about 1.609 A]



Figure No. 4.2.1 Simulation diagram for buck converter with SMC 
4.2.1 Effect of variation of load resistance on buck converter with SMC control 



[Figure: load resistance variation for the buck converter with SMC control - output voltage (V) and inductor current (A) for load resistances from 10 ohms upwards; the output voltage stays near 12 V while the inductor current falls]



Figure No.4.2.1.1 Bar graph for load resistance variation 

The bar graph above shows the effect of load variation on the buck converter with the SMC controller. As the resistance value increases, the inductor current decreases. For infinite (open-circuit) resistance the voltage is 23.69 V and the inductor




current is 1.16e-10 A, but in the range of 10 Ω to 13 Ω the inductor current and load voltage remain almost constant.

4.2.2 Effect of variation of line voltage on buck converter with SMC control



[Figure: line variation for the buck converter with SMC control - inductor current (A) and settling time (msec) for input voltages of 20, 24, 28, 30 and 34 V]



Figure No.4.2.2.1 Bar graph for line variation 

V. Matlab Simulation Model of Buck Converter with SMC PID Controller



[Figure: Matlab/Simulink model of the buck converter with the SMC PID controller - buck converter power stage (GTO switch, freewheeling diode, L = 69e-6 H, R = 13 ohms) with the relay, integrator, gain and PID blocks in the feedback path; the load voltage display shows 11.99 V]



Figure No.5.1 Buck converter with SMC PID controller. 

The figure above shows the simulated model of the buck converter with the SMC PID controller. In this model the SMC and PID controllers are combined to obtain the advantages of both control methods. From the performance comparison of SMC PID with the other controllers we can say that this circuit has a large settling time but very little or no overshoot in the output voltage. Whenever this settling time is acceptable and more accuracy is required, the SMC PID model can be chosen.




5.1 Effect of variation of load resistance on buck converter with SMC PID control

Figure 5.1.1 shows the effect of load variation on the buck converter with the SMC PID controller. As the resistance value increases, the inductor current decreases. For infinite (open-circuit) resistance the voltage is 23.69 V and the inductor current is 1.06e-12 A, but in the range of 10 Ω to 15 Ω the inductor current and load voltage remain almost constant.



[Figure: load resistance variation for the buck converter with SMC PID control - output voltage (V) and inductor current (A) for load resistances of 10, 11, 14, 20, 50, 60, 80 and 100 ohms]



Figure No.5.1.1 Bar graph for load resistance variation 
5.2 Effect of variation of line voltage on buck converter with SMC PID control 



[Figure: line variation for the buck converter with SMC PID control - inductor current (A) and settling time (msec) versus input voltage (V)]



Figure No.5.2.1 Bar graph for line variation 



Performance comparison

Table 1 summarizes the performance characteristics of the buck converter with the PI, PID, SMC and SMC PID controllers quantitatively. Based on the data tabulated in Table 1, PID has the fastest settling time of 2.5 msec while SMC has the slowest settling time of 20 msec; an extra 17.5 msec is required by the SMC controller to reach the steady-state voltage.






[Table 1: quantitative comparison of the buck converter with the PI, PID, SMC and SMC PID controllers - rise time, peak time, peak value and settling time of the output voltage and inductor current for each controller]



VI. Comparison Graph for Rise Time, Delay Time and Settling Time for All Existing Controllers




Figure No. 6.1 Comparative graph for all existing controllers



6.1 Comparative graph for peak overshoot, regulation, output voltage and inductor current for all existing controllers

From the comparison we can say that, for the same output voltage and inductor current, the peak overshoot is maximum for PI control and there is no overshoot for the SMC PID control method. From the performance analysis of the uncompensated buck converter we can say that, because of disturbances and nonlinearities, the output voltage of the converter is 14.12 V instead of 12 V.



[Figure: comparative bar graph of peak overshoot of voltage, regulation, output voltage and inductor current for all existing controllers]




Figure No.6.1.1. Comparative graph for all existing controllers. 

VII. Conclusion 

Since SMC does not operate at a constant switching frequency, and DC-DC converters have a highly nonlinear and time-varying nature, SMC is a suitable technique for controlling this kind of converter and was therefore selected for the performance analysis. The waveforms of the simulated output




voltage and current were obtained, studied and compared with the waveforms from the other controllers for performance comparison. By studying the reference papers in detail, the waveforms were found to be in close agreement with the theoretical waveforms. Some concluding points follow. From the performance comparison of SMC with PI and PID it was found that SMC has a large settling time; so when more voltage accuracy is required and a large settling time is acceptable, the SMC or SMC PID control method can be chosen, whereas when lower cost, lower accuracy and less complexity are acceptable, the PI or PID control method can be used. When the buck converter is considered with PI control, the output voltage attains 12 V within 6.5 msec.

Acknowledgement 

We wish to acknowledge the support given by the Principal, Modern College of Engineering, Pune, for carrying out the present research work, and Prof. Mrs. N. R. Kulkarni, Head of the Department of Electrical Engineering, for constant encouragement.

References 

[1]. M. Ahmed, M. Kuisma, P. Silventoinen, "Implementing Simple Procedure for Controlling Switch Mode Power Supply Using Sliding Mode Control as a Control Technique", XIII-th International Symposium on Electrical Apparatus and Technologies (SIELA), May 2003, pp. 9-14, Vol. 1.
[2]. Hongmei Li and Xiao Ye, "Sliding-Mode PID Control of DC-DC Converter", 5th IEEE Conference on Industrial Electronics and Applications.
[3]. V. I. Utkin, Sliding Modes and Their Application in Variable Structure Systems, MIR Publishers, Moscow, 1978.
[4]. R. Venkataramanan, A. Sabanovic, S. Cuk, "Sliding-mode control of DC-to-DC converters", IECON Conf. Proc., 1985, pp. 251-258.
[5]. G. Spiazzi, P. Mattavelli, L. Rossetto, L. Malesani, "Application of Sliding Mode Control to Switch-Mode Power Supplies", Journal of Circuits, Systems and Computers (JCSC), Vol. 5, No. 3, September 1995, pp. 337-354.
[6]. Siew-Chong Tan, Member, IEEE, Y. M. Lai, Member, IEEE, and Chi K. Tse, Fellow, IEEE, "Indirect Sliding Mode Control of Power Converters".

Biography 

Shelgaonkar (Bindu) Arti Kamalakar was born in Aurangabad, India, in 1978. She received the Bachelor's degree in electrical engineering from Dr. BAMU, Aurangabad, in 1999, and is pursuing the Master's degree (since 2008) at the University of Pune, both in control system engineering.

N. R. Kulkarni received the Bachelor's degree in electrical engineering from WCE, Sangli, in 1985, the M.E. (Electrical) in Control Systems from COEP, Pune, in 1998, and the Ph.D. (Electrical) in 2011. Her areas of interest are control systems, electrical machines, non-conventional energy, nonlinear systems and sliding mode control.







Performance Evaluation of DS-CDMA System using 

MATLAB 

Athar Ravish Khan 

Department of Electronics & Telecommunication 

Jawaharlal Darda Institute of Engineering and Technology, Yavatmal, Maharashtra, India 



Abstract 

The author evaluated the performance of synchronous DS-CDMA systems over a multipath fading channel and an AWGN channel. The synchronous DS-CDMA system is well known for eliminating the effects of multiple access interference (MAI), which limits the capacity and degrades the BER performance of the system. This paper investigates the bit error rate (BER) performance of a synchronous DS-CDMA system over AWGN and Rayleigh channels as affected by the number of users and by different types of spreading codes. The simulation results provide a comparative study of the different DS-CDMA system parameters and show the possibility of applying this system to the wideband channel. The MATLAB functions and MATLAB program segments used for the simulation of the DS-CDMA system are explained.

KEYWORDS: CDMA system, QPSK, BER, Rayleigh channel, AWGN channel, MATLAB program segment, Gold sequence, M-sequence.

I. Introduction 

Direct-sequence code-division multiple access (DS-CDMA) is currently the subject of much research, as it is a promising multiple access technique for third and fourth generation mobile communication systems. Code-division multiple access (CDMA) is a technique whereby many users simultaneously access a communication channel. The users of the system are identified at the base station by their unique spreading codes. The signal transmitted by any user consists of the user's data, which modulates its spreading code, which in turn modulates a carrier. An example of such a modulation scheme is quadrature phase shift keying (QPSK). In this paper, we introduce the Rayleigh and AWGN channels and investigate the bit error rate (BER) performance of a synchronous DS-CDMA system over these channels. In the DS-CDMA system, the narrowband message signal is multiplied by a large-bandwidth signal, which is called the spreading of the signal. The spreading signal is generated by convolving an M-sequence or Gold sequence code with a chip waveform whose duration is much smaller than the symbol duration. All users in the system use the same carrier frequency and may transmit simultaneously. The receiver performs a correlation operation to detect the message addressed to a given user, and the signals from the other users appear as noise due to de-correlation. The synchronous DS-CDMA system is presented for eliminating the effects of multiple access interference (MAI), which limits the capacity and degrades the BER performance of the system. MAI refers to the interference between different direct-sequence users. With an increasing number of users, the MAI grows to be significant and the DS-CDMA system becomes interference limited. The spreading M and Gold sequences in a DS-CDMA system need to have good cross-correlation characteristics as well as good autocorrelation characteristics [P. Alexander et al.], [E. Dinan et al.]. The goal is to reduce the fading effect by supplying the receiver with several replicas of the same



information signal transmitted over independently fading paths. The remainder of the paper is organized as follows. The next section presents the channel modelling. Section 3 deals with the modulation and demodulation scheme used in the system. Section 4 presents the proposed transmitter and receiver models for the simulation. The MATLAB functions, program segments and flow of the program are explained in Sections 5 and 6 respectively, and the paper ends with the simulation results and conclusion.

II. Channel Model 

2.1. Rayleigh fading channel Model: 

Rayleigh fading is a statistical model for the effect of a propagation environment on a radio signal, such as that used by wireless devices. Rayleigh fading models assume that the magnitude of a signal that has passed through such a transmission medium will vary randomly, or fade, according to a Rayleigh distribution - the radial component of the sum of two uncorrelated Gaussian random variables [C. Trabelsi et al.]. Rayleigh fading is viewed as a reasonable model for tropospheric and ionospheric signal propagation as well as for the effect of heavily built-up urban environments on radio signals. Rayleigh fading is most applicable when there is no dominant propagation along a line of sight between the transmitter and receiver. It is a reasonable model when there are many objects in the environment that scatter the radio signal before it arrives at the receiver; if there is sufficiently much scatter, the channel impulse response will be well modelled as a Gaussian process irrespective of the distribution of the individual components. If there is no dominant component to the scatter, then such a process will have zero mean and phase evenly distributed between 0 and 2π radians. The envelope of the channel response will therefore be Rayleigh distributed [Theodore S. Rappaport].
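A minimal numerical illustration of this model (a sketch only; the simulation itself uses the sefade.m routine described later, not this code):

N = 1e5;
h = (randn(1,N) + 1j*randn(1,N))/sqrt(2);   % two uncorrelated Gaussian components, unit mean power
r = abs(h);                                 % envelope: Rayleigh distributed
histogram(r, 100, 'Normalization', 'pdf');  % shape matches the Rayleigh pdf 2*r.*exp(-r.^2)

The phase angle(h) is uniform over 0 to 2π, as stated above.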
2.2 AWGN channel Model

In the additive white Gaussian noise channel model, as the name indicates, Gaussian noise is added directly to the signal. In this model the scattering and fading of the information signal are not considered [Theodore S. Rappaport].
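A sketch of how such a channel is applied to unit-energy QPSK symbols at a given Eb/N0 (illustrative only; the simulation adds noise through the comb2.m routine described later):

EbN0dB = 8; ml = 2;                                  % 2 bits per QPSK symbol
EsN0   = 10^(EbN0dB/10) * ml;                        % symbol-energy-to-noise ratio
Ns     = 1000;
s      = ((2*randi([0 1],1,Ns)-1) + 1j*(2*randi([0 1],1,Ns)-1))/sqrt(2);  % unit-energy QPSK
n      = sqrt(1/(2*EsN0)) * (randn(1,Ns) + 1j*randn(1,Ns));               % complex AWGN
r      = s + n;                                      % received samples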

III. Modulator and Demodulator 

A QPSK signal is generated from two BPSK signals. To distinguish the two signals, two orthogonal carrier signals are used: one is cos(2πf_c t) and the other is sin(2πf_c t). A channel in which cos(2πf_c t) is used as the carrier is generally called the in-phase channel, or Ich, and a channel in which sin(2πf_c t) is used as the carrier is generally called the quadrature-phase channel, or Qch. Therefore, d_i(t) and d_q(t) are the data in Ich and Qch, respectively. Modulation schemes that use Ich and Qch are called quadrature modulation schemes. The mathematical analysis shows that the QPSK signal can be written as [X. Wang et al.]

s_n(t) = sqrt(2E_s/T_s) cos(2πf_c t + (2n - 1)π/4),  n = 1, 2, 3, 4        (1)

This yields the four phases π/4, 3π/4, 5π/4 and 7π/4 as needed, and results in a two-dimensional signal space with unit basis functions. The in-phase basis function is

φ1(t) = sqrt(2/T_s) cos(2πf_c t)        (2)

The first basis function is used as the in-phase component of the signal and the second (the corresponding sine term) as the quadrature component. An illustration of the major components of the transmitter and receiver structure is shown below.
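The mapping itself can be sketched in a few lines (illustrative only; the simulation uses the qpskmod.m routine from the library referenced in Section 5):

bits = randi([0 1], 1, 2*200);      % bits for 200 QPSK symbols
di   = 2*bits(1:2:end) - 1;         % polar NRZ mapping of the Ich (odd-indexed) bits
dq   = 2*bits(2:2:end) - 1;         % polar NRZ mapping of the Qch (even-indexed) bits
sym  = (di + 1j*dq)/sqrt(2);        % unit-energy QPSK symbols d_i + j d_q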










[Figure: QPSK modulator - the binary bit stream (e.g. 11000110) is demultiplexed into two rails, each passed through an NRZ encoder, multiplied by the orthogonal carriers cos(2πf_c t) and sin(2πf_c t), and summed to form the QPSK signal]

Figure 1 QPSK Modulator

The binary data stream is split into its in-phase and quadrature-phase components. These are then separately modulated onto two orthogonal basis functions; in this implementation, two sinusoids are used. The two signals are then superimposed, and the resulting signal is the QPSK signal. Note the use of polar non-return-to-zero encoding. These encoders can be placed before the binary data source, but have been placed after it to illustrate the conceptual difference between the digital and analog signals involved in digital modulation. In the receiver structure for QPSK, the matched filters can be replaced with correlators. Each detection device uses a reference threshold value to determine whether a 1 or a 0 is detected, as shown in Figure 2.
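The decision stage can be sketched as follows (illustrative only; the simulation uses qpskdemod.m, and the example received symbols r here stand for the matched-filter outputs):

Ns = 200;
r  = ((2*randi([0 1],1,Ns)-1) + 1j*(2*randi([0 1],1,Ns)-1))/sqrt(2) ...
     + 0.2*(randn(1,Ns) + 1j*randn(1,Ns));   % example noisy QPSK symbols
bi = real(r) > 0;                            % threshold decision on the Ich rail
bq = imag(r) > 0;                            % threshold decision on the Qch rail
rxbits = zeros(1, 2*Ns);
rxbits(1:2:end) = bi;                        % multiplex the two rails back into one bit stream
rxbits(2:2:end) = bq;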







[Figure: QPSK demodulator - the received QPSK signal is fed to matched filters for the two orthogonal basis functions, sampled at intervals of Ts, passed through threshold decision devices, and the two recovered rails are multiplexed back into the binary bit stream]

Figure 2 QPSK Demodulator

IV. Proposed System Model 

4.1 Proposed Transmitter Model: 

The randomly generated data in the system are transmitted with the help of the proposed transmitter model, shown in Figure 3 below.




Figure 3 DS-CDMA transmitter




First, the data generator generates the data randomly; the generated data are passed to the mapping circuit. The mapping circuit, which consists of a QPSK modulator, converts this serial random data into two parallel data streams of even and odd samples, i.e. Ich (in-phase) and Qch (quadrature-phase) [X. Wang et al.]. Ich and Qch are then spread individually by convolution with M-sequence or Gold sequence codes. The spread data are given to the oversampler circuit, which converts the unipolar data into bipolar form; the oversampled data are then convolved with the filter coefficients of the T-filter. The two individual data streams are summed and passed through a band pass filter (BPF), and the result is transmitted over the channel.
4.2 Proposed Receiver Model:

The randomly generated data transmitted through the channel are recovered with the proposed receiver model shown in Figure 4 below.



[Figure: DS-CDMA receiver - band pass filter, convolution with the receive filter coefficients, despreading with the user code, and QPSK demapping]

Figure 4 DS-CDMA receiver

At the receiver, the received signal passes through a band pass filter (BPF), where spurious signals are eliminated. The signal is then divided into two streams and convolved with the receive filter coefficients, by which the inter-symbol interference (ISI) in the signal is reduced. The signal is despread using the synchronized codes. The two despread streams are then fed to the demapping circuit, which consists of a QPSK demodulator; the demodulator converts the two parallel data streams into a single serial data stream. Thus the transmitted data are recovered at the receiver.

V. MATLAB Simulations 

5.1 DS-CDMA System: 

This section shows the procedure for obtaining the BER of a synchronous DS-CDMA system. In synchronous DS-CDMA, the users employ their own sequences to spread the information data. At each user's terminal, the information data are modulated by the first modulation scheme. Then the modulated data bits are spread by a code sequence, such as an M-sequence or a Gold sequence. The spread data of all the users are transmitted to the base station at the same time. The base station detects the information data of each user by correlating the received signal with the code sequence allocated to that user. In the simulation, QPSK is used as the modulation scheme. The parameters used for the simulation are defined as follows [Hiroshi Harada et al.]:

sr = 2560000.0;    % symbol rate
ml = 2;            % number of modulation levels
br = sr * ml;      % bit rate
nd = 200;          % number of symbols
ebn0 = [0:20];     % Eb/No range in dB
irfn = 21;         % number of filter taps
IPOINT = 8;        % number of oversamples
alfs = 0.5;        % roll-off factor



272 | 



Vol. 2, Issue 1, pp. 269-281 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

The filter coefficients are then defined. The MATLAB function hrollfcoef is used to evaluate the transmit and receive filter coefficients based on the above parameters:

[xh] = hrollfcoef(irfn,IPOINT,sr,alfs,1);    % T (transmit) filter coefficients
[xh2] = hrollfcoef(irfn,IPOINT,sr,alfs,0);   % R (receive) filter coefficients
Next, the parameters for the spreading sequences, namely the M-sequence and Gold sequence, are set. The variable seq (1 or 2) selects the code sequence, and the number of registers used to generate an M-sequence is set. In synchronous DS-CDMA, the number of code sequences that can be allocated to different users is equal to the code length; therefore, the length of the code sequence must be larger than the number of users. To generate a code sequence, we must specify the number of registers, the positions of the feedback taps, and the initial values of the registers. To generate a Gold sequence and an orthogonal Gold sequence, two M-sequences are needed. Therefore, the following parameters are used. By using these parameters, a spreading code is generated and stored in the variable code.

user = 3;           % number of users
seq = 1;            % 1: M-sequence  2: Gold
stage = 3;          % number of stages
ptap1 = [1 3];      % positions of feedback taps for the 1st M-sequence
ptap2 = [2 3];      % positions of feedback taps for the 2nd M-sequence
regi1 = [1 1 1];    % initial register values for the 1st M-sequence
regi2 = [1 1 1];    % initial register values for the 2nd M-sequence

Here, code is a matrix whose size is the number of users by the length of the code sequence. An M-sequence is generated by the MATLAB function mseq.m, and a Gold sequence is generated by the MATLAB function goldseq.m. An orthogonal Gold sequence can be generated by adding one bit to the top or bottom of a Gold sequence. Because the generated code sequence consists of 0 and 1, the program converts it into a sequence consisting of -1 and 1.

switch seq
case 1   % M-sequence
    code = mseq(stage,ptap1,regi1,user);
case 2   % Gold sequence
    m1 = mseq(stage,ptap1,regi1);
    m2 = mseq(stage,ptap2,regi2);
    code = goldseq(m1,m2,user);
end
code = code * 2 - 1;
clen = length(code);
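For readers without the mseq.m/goldseq.m library, the M-sequence generation can be illustrated with a plain LFSR loop (a sketch only; the tap and output conventions of the actual mseq.m routine may differ):

stage = 3; ptap = [1 3]; regi = [1 1 1];     % same parameters as above
N   = 2^stage - 1;                           % m-sequence length (7 chips)
reg = regi; m = zeros(1,N);
for k = 1:N
    m(k) = reg(end);                         % output taken from the last register stage
    fb   = mod(sum(reg(ptap)), 2);           % feedback: XOR of the tapped stages
    reg  = [fb reg(1:end-1)];                % shift and insert the feedback bit
end
m = 2*m - 1;                                 % map {0,1} to {-1,+1}, as done for code above

With these taps and initial state the loop returns the length-7 chip pattern 1 1 1 0 1 0 0 (before the bipolar mapping), which has the balance property expected of an M-sequence.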

When rfade is 0, the simulation evaluates the BER performance in an AWGN channel. When rfade is 1, the simulation evaluates the BER performance in a Rayleigh fading environment [C. Trabelsi et al.].

rfade = 1;                       % Rayleigh fading  0: none  1: considered
itau = [0,8];                    % delay time
dlvl1 = [0.0,40.0];              % attenuation level
n0 = [6,7];                      % number of waves to generate fading
th1 = [0.0,0.0];                 % initial phase of delayed wave
itnd1 = [3001,4004];             % set fading counter
now1 = 2;                        % number of direct wave + delayed waves
tstp = 1 / sr / IPOINT / clen;   % time resolution
fd = 160;                        % Doppler frequency [Hz]



273 | 



Vol. 2, Issue 1, pp. 269-281 



International Journal of Advances in Engineering & Technology, Jan 2012. 

©IJAET ISSN: 2231-1963 

flat = 1;                        % flat Rayleigh environment
itndel = nd * IPOINT * clen * 30; % number of fading counter samples to skip

Then, the number of simulation loops is set. The variables that count the number of transmitted data 
bits and the number of errors are initialized. 

nloop = 10; % simulation number of times 

noe = 0; 
nod = 0; 

The transmitted data in the in-phase and quadrature-phase channels, modulated by QPSK, are multiplied by the code sequence used to spread the transmitted data. The spread data are then oversampled, filtered by a roll-off filter and transmitted over the communication channel. Here, the MATLAB functions qpskmod.m, compoversamp2.m and compconv2.m are used for modulation, oversampling and filtering respectively; the T-filter parameter xh is passed to the compconv2 function.

data = rand(user,nd*ml) > 0.5;
[ich, qch] = qpskmod(data,user,nd,ml);          % QPSK modulation
[ich1,qch1] = spread(ich,qch,code);             % spreading
[ich2,qch2] = compoversamp2(ich1,qch1,IPOINT);  % oversampling
[ich3,qch3] = compconv2(ich2,qch2,xh);          % transmit filter

The above program segment implements the transmitter section of the DS-CDMA system; during this process ich1 and qch1 are transformed into ich3 and qch3. The signals transmitted by the users are then synthesized by the following if-else statement, producing ich4 and qch4 depending on the number of users.

if user == 1              % transmission depending on the number of users
    ich4 = ich3;
    qch4 = qch3;
else
    ich4 = sum(ich3);
    qch4 = sum(qch3);
end

The synthesized signal is then passed through a Rayleigh fading channel, as shown in the program segment below. In reality, the transmitted signal of each user is contaminated by its own distinct Rayleigh fading; in this simulation, however, the synthesized signal is contaminated by a single Rayleigh fading process. The function sefade.m is used to apply the Rayleigh fading.
if rfade == 0
    ich5 = ich4;
    qch5 = qch4;
else                      % fading channel
    [ich5,qch5] = sefade(ich4,qch4,itau,dlvl1,th1,n0,itnd1,now1, ...
                         length(ich4),tstp,fd,flat);
    itnd1 = itnd1 + itndel;
end

At the receiver, AWGN is added to the received data, as shown in the simulation of the QPSK transmission in Program Segment (5). The contaminated signal is then filtered using the root roll-off filter. The program segment below calculates the attenuation, adds AWGN to the signal to give ich6 and qch6, and transforms the signal into ich8 and qch8 using the filter coefficients xh2.

spow = sum(rot90(ich3.^2 + qch3.^2)) / nd;                % attenuation calculation
attn = sqrt(0.5 * spow * sr / br * 10^(-ebn0(i)/10));
snrlnr = 10.^(ebn0(i)/10);
attnNEW = sum(attn)/400;
[ich6,qch6] = comb2(ich5,qch5,attn);                       % add white Gaussian noise (AWGN)
[ich7,qch7] = compconv2(ich6,qch6,xh2);                    % receive filter
sampl = irfn * IPOINT + 1;
ich8 = ich7(:,sampl:IPOINT:IPOINT*nd*clen+sampl-1);
qch8 = qch7(:,sampl:IPOINT:IPOINT*nd*clen+sampl-1);

The resampled data are the synthesized data of all the users. By correlating the synthesized data with the spreading code used at the transmitter, the transmitted data of all the users are detected. The correlation (despreading) is performed by:

[ich9,qch9] = despread(ich8,qch8,code);    % despreading

The correlated data are demodulated by QPSK [Fumiyuki Adachi]. Then the total number of errors for all the users is counted and, finally, the BER is calculated.

demodata = qpskdemod(ich9,qch9,user,nd,ml);   % QPSK demodulation
noe2 = sum(sum(abs(data-demodata)));
nod2 = user * nd * ml;
noe = noe + noe2;
nod = nod + nod2;

VI. Simulation Flowchart 

In order to simulate the system, the following steps are performed:

• Initialize the common variables
• Initialize the filter coefficients
• Select the switch for the M-sequence or Gold sequence
• Generate the spreading codes
• Initialize the fading by using the variable rfade
• Define the variables for the signal-to-noise ratio and the number of simulation runs; since the data are random, the BER must be averaged over the simulation runs
• Simulate the system using the proposed transmitter and receiver for the different types of channel and codes
The theoretical value of the BER for the AWGN and Rayleigh channels can be calculated from

BER_theoretical(AWGN) = (1/2) erfc( sqrt(Eb/N0) )                                  (3)

BER_theoretical(Rayleigh) = (1/2) [ 1 - sqrt( (Eb/N0) / (1 + Eb/N0) ) ]            (4)
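These two curves are what the simulation results below are compared against; they can be generated directly (a sketch, using MATLAB's built-in erfc):

EbN0dB = 0:20;
gamma  = 10.^(EbN0dB/10);                        % Eb/N0 as a linear ratio
berAWGN = 0.5*erfc(sqrt(gamma));                 % Equation (3)
berRay  = 0.5*(1 - sqrt(gamma./(1+gamma)));      % Equation (4)
semilogy(EbN0dB, berAWGN, EbN0dB, berRay); grid on;
xlabel('Eb/No in dB'); ylabel('BER');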






[Figure: simulation flowchart - preparation of common variables, filter initialization (filter coefficients), spreading-code initialization (M-sequence: code = mseq(stage,ptap1,regi1,user); Gold: code = goldseq(m1,m2,user)), generation of the bipolar spreading code (code = code*2-1; clen = length(code)), fading initialization, the simulation loop (nloop, noe, nod), bit error rate calculation, theoretical BER calculation, and printing of the results]

VII. Simulation Results Obtained 




Figure 6 Performance of DS-CDMA System in AWGN Environment with M-Sequence



[Figure: BER vs Eb/No on the AWGN channel with Gold codes - theoretical BER for QPSK on AWGN and simulated BER for 1, 3 and 7 users]

Figure 7 Performance of DS-CDMA System in AWGN Environment with Gold Sequence




Figure 8 Performance of DS-CDMA System in Rayleigh Environment with Gold Sequence






[Figure: BER vs Eb/No on the Rayleigh channel with M-sequences - theoretical BER for QPSK on Rayleigh and simulated BER for 1, 3 and 7 users]

Figure 9 Performance of DS-CDMA System in Rayleigh Environment with M-Sequence



[Figure: BER vs Eb/No on the Rayleigh channel, M-sequence versus Gold code, 3 users - theoretical BER for QPSK on Rayleigh and simulated BER with the M-sequence and with the Gold code]

Figure 10 Performance of DS-CDMA System in Rayleigh Environment with M and Gold Sequences



[Figure: BER vs Eb/No on the AWGN channel, M-sequence versus Gold code, 3 users - theoretical BER for QPSK on AWGN and simulated BER with the M-sequence and with the Gold code]

Figure 11 Performance of DS-CDMA System in AWGN Environment with M and Gold Sequences






[Figure: BER vs Eb/No with M-sequences on the AWGN and Rayleigh channels, 3 users]

Figure 12 Performance of DS-CDMA System in AWGN and Rayleigh Environments with M-Sequence



[Figure: BER vs Eb/No with Gold codes on the AWGN and Rayleigh channels, 3 users - theoretical BER for QPSK on AWGN and Rayleigh, and simulated BER on AWGN and Rayleigh with the Gold code]

Figure 13 Performance of DS-CDMA System in AWGN and Rayleigh Environments with Gold Sequence

VIII. Results and Conclusion 

In the AWGN environment, when the Gold sequence or M-sequence is used, the simulated BER for the minimum number of users closely approaches the theoretical BER. In the Rayleigh environment, with either the Gold or the M-sequence, the simulated and theoretical BER are the same at low SNR; as the SNR increases, the simulated BER becomes higher than the theoretical BER. When the M-sequence and Gold sequence are compared in the Rayleigh environment, the simulated and theoretical BER are initially the same, but as the SNR increases the simulated BER rises rapidly above the theoretical value. When the M-sequence and Gold sequence are compared in the AWGN environment with a single user, the simulated BER is initially the same as the theoretical value, and with increasing SNR the simulated value rises above the theoretical BER. When either sequence is used over both the AWGN and Rayleigh channels, the theoretical and simulated BER are initially nearly the same; as the SNR increases, the simulated BER in the AWGN case rises rapidly above the theoretical value, while in the Rayleigh case the simulated value approaches the theoretical one.




Acknowledgments 

The authors would like to thank, first, GOD, and all the friends who gave any help related to this work. Finally, the greatest thanks go to our families and to our country, INDIA, which gave birth to us.

References 

[1] Dr. Mike Fitton, "Principles of Digital Modulation", Telecommunications Research Lab, Toshiba Research Europe Limited.
[2] P. Alexander, A. Grant and M. C. Reed, "Iterative Detection of Code-Division Multiple Access with Error Control Coding", European Trans.
[3] Hiroshi Harada and Ramjee Prasad, Simulation and Software Radio for Mobile Communication.
[4] X. Wang and H. V. Poor, Wireless Communication Systems: Advanced Techniques for Signal Reception.
[5] J. Proakis, Digital Communications, McGraw-Hill.
[6] Sklar B., "A Structured Overview of Digital Communications - A Tutorial Review - Part I", IEEE Communications Magazine, August 2003.
[7] Sklar B., "A Structured Overview of Digital Communications - A Tutorial Review - Part II", IEEE Communications Magazine, October 2003.
[8] E. Dinan and B. Jabbari, "Spreading Codes for Direct Sequence CDMA and Wideband CDMA Cellular Networks", IEEE Communications Magazine.
[9] Shimon Moshavi, Bellcore, "Multi-user Detection for DS-CDMA Communications", IEEE Communications Magazine.
[10] Hamed D. Al-Sharari, "Performance of Wideband Mobile Channel on Synchronous DS-CDMA", College of Engineering, Aljouf University, Sakaka, Aljouf, P.O. Box 2014, Kingdom of Saudi Arabia.
[11] Theodore S. Rappaport, Wireless Communications: Principles and Practice.
[12] Wang Xiaoying, "Study Spread Spectrum in Matlab", School of Electrical & Electronic Engineering, Nanyang Technological University, Nanyang Drive, Singapore 639798.
[13] Zoran Zvonar and David Brady, "On Multiuser Detection in Synchronous CDMA Flat Rayleigh Fading Channels", Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115.
[14] C. Trabelsi and A. Yongacoglu, "Bit-error-rate performance for asynchronous DS-CDMA over multipath fading channels", IEE Proc.-Commun., Vol. 142, No. 5, October 1995.
[15] Fumiyuki Adachi, "Bit Error Rate Analysis of DS-CDMA with Joint Frequency-Domain Equalization and Antenna Diversity Combining", IEICE Trans. Commun., Vol. E87-B, No. 10, October 2004.

Athar Ravish Khan was born in Maharashtra, INDIA, in 1979. He received the B.E. degree in electronics and telecommunication engineering and the M.E. degree in digital electronics from SGBA University, Amravati, Maharashtra, India, in 2000 and 2011 respectively. In 2000 he joined B.N. College of Engineering, Pusad, and worked as a lecturer. In 2006 he joined J.D. Institute of Engineering and Technology, Yavatmal, Maharashtra, INDIA, as a lecturer, and in March 2011 he became an honorary Assistant Professor there. He is pursuing the Ph.D. degree under the supervision of Prof. Dr. Sanjay M. Gulhane. His current research interests include digital signal processing, neural networks and wireless communications, with specific emphasis on UWB in underground mines and tunnels.







Recent Philosophies of AGC of a Hydro-Thermal 
System in Deregulated Environment 

L. Shanmukha Rao 1, N. Venkata Ramana 2
1 Associate Professor, E.E.E. Department, Dhanekula Institute of Engineering & Technology, Ganguru, Vijayawada, AP, India.
2 Professor, E.E.E. Department, JNTU Jagityal, AP, India.



Abstract 

In a restructured power system, the engineering aspects of planning and operation have to be reformulated, although the essential ideas remain the same. With the emergence of the distinct identities of GENCOs, TRANSCOs, DISCOs and the ISO, many of the ancillary services of the vertically integrated utility will have a different role to play and hence have to be modified differently. Among these ancillary services is the automatic generation control (AGC). An attempt is made in this paper to present a critical literature review and an up-to-date and exhaustive bibliography on the AGC of a hydro-thermal system in a deregulated environment. Various control aspects concerning the AGC problem have been highlighted.

KEYWORDS: Automatic generation control, Hydro-thermal system, Deregulation.

I. Introduction 

In a modern power system network there are a number of generating utilities interconnected through tie-lines. In order to achieve integrated operation of a power system, an electric energy system must be maintained at a desired operating level characterized by nominal frequency, voltage profile and load flow configuration.

A modern power system normally consists of a number of subsystems interconnected through tie-lines. For each subsystem the requirements usually include matching system generation to system load and regulating system frequency [5]. This is basically known as the load-frequency control problem or the automatic generation control (AGC) problem. It is desirable to achieve better frequency constancy than is obtained by the speed governing system alone. In an interconnected power system, it is also
desirable to maintain the tie line flow at a given level irrespective of the load change in any area. To 
accomplish this, it becomes necessary to manipulate the operation of main steam valves or hydro 
gates in accordance with a suitable control strategy, which in turn controls the real power output of 
the generators. The control of the real power output of electric generators in this way is termed "Automatic Generation Control (AGC)". This paper discusses a critical literature review of AGC schemes for a hydro-thermal system in a deregulated environment.

II. Automatic Generation Control 

Power system loads are sensitive to frequency, and following system frequency changes the aggregate load changes accordingly. When a generating unit is tripped or additional load is added to the system, the power mismatch is initially compensated by an extraction of kinetic energy from the system's inertial storage, which causes a system frequency drop [19].

As the frequency decreases, the power consumed by loads also decreases. Equilibrium for a large system can be obtained when the frequency-sensitive reduction of loads balances the power output of the tripped unit, or that delivered to the additional load, resulting in a new frequency. This effect can stop the frequency decline in less than a couple of seconds. However, if the mismatch causes the frequency to deviate beyond the governor dead band of the generating units, their outputs will be increased by governor action. For such mismatches, equilibrium is obtained when the reduction in power consumed by the loads plus the increased generation due to governor action compensates the mismatch. Such equilibrium is normally obtained within a dozen seconds of the frequency incident. Governor droop is the percentage change in frequency that would cause unit generation to change by 100% of its capability. Typical speed droops for active generators are in the range of about 4%. With this level of frequency sensitivity, and at the expense of some frequency deviation, generation adjustment by governors provides ample opportunity for follow-up manual control of units.
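As a rough numerical illustration of the droop relationship described above (the numbers and variable names below are assumptions chosen for illustration, not values from the paper), a unit on a 4% droop responds to a 0.1 Hz dip as follows:

```python
# Hypothetical primary (governor droop) response of a single unit.
f_nom = 50.0        # nominal system frequency in Hz (assumed)
R = 0.04            # 4% droop: per-unit frequency change for 100% output change
p_rating = 200.0    # unit rating in MW (assumed)

delta_f = -0.1      # frequency deviation in Hz after a sudden load increase

# Per-unit frequency deviation divided by the droop gives the per-unit change
# in output commanded by the governor (opposite in sign to the deviation).
delta_p_mw = -(delta_f / f_nom) / R * p_rating
print(f"Governor raises unit output by {delta_p_mw:.1f} MW")   # 10.0 MW
```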

This automatic adjustment of generation by free governor action is known as primary frequency regulation. The objectives of the follow-up control, especially under normal changes of load, are to return frequency to schedule, to minimize production cost, and to operate the system at an adequate level of security.

Automatic generation control is a closed-loop control system that partially replaces this manual control. This form of generation control has become essential to the real-time operation and control of interconnected power systems and operates in widely varying power system control environments, ranging from autonomous systems to strongly interconnected systems with hierarchical multi-level control. The purpose of AGC is to replace portions of the manual control.

As it automatically responds to normal load changes, AGC reduces the response time to about a minute. Mainly due to delays associated with the physically limited response rates of energy conversion, further reduction in the response time of AGC is neither possible nor desired.
Neither follow-up manual control nor AGC is able or expected to play any role in limiting the magnitude of the first frequency swing, which occurs within seconds after the loss of a block of generation or load in the system. For conditions where the change of generation due to governor action and the change of load due to its sensitivity to frequency are not enough to arrest a runaway frequency, over- and under-frequency relays are among the last resorts for shedding loads to prevent system collapse or tripping generating units to prevent their damage.
The main aims behind the design of AGC are: 

a) The steady state frequency error following a step load perturbation should be zero. 

b) The steady state change in the tie flow following a step load change in an area must 
be zero. 

c) An automatic generation controller providing a slow monotonic type of generation 
responses should be preferred in order to reduce wear and tear of the equipment. 

The objectives of AGC may, therefore be summarized as follows: 

1. Each area regulates its own load fluctuations. 

2. Each area assists the other areas, which cannot control their own load fluctuations. 

3. Each area contributes to the control of the system frequency, so that the operating costs are minimized.

4. The deviations in frequency and tie-line power flow error are driven to zero in the steady state.

5. When load changes are small, the system must be permitted to come back to the steady state (by natural damping) so that, for economic reasons, the mechanical power does not change for small disturbances.

The problem of AGC can be subdivided into fast primary control and slow secondary control modes. 
The fast primary control (governing mechanism) mode tries to minimize the frequency deviations and 
has a time constant of the order of seconds. However, primary control does not guarantee zero steady-state error. The slow secondary control channel (supplementary control), with time constants of the
order of minutes, regulates the generation to satisfy certain loading requirements and contractual tie- 
line loading agreements. The overall performance of the AGC in any power system depends on the 
proper design of both primary and secondary control loops. 
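A minimal sketch of how the slow secondary (supplementary) loop is commonly realised, assuming the standard tie-line-bias formulation ACE = ΔP_tie + B·Δf and a pure integral controller; the function and variable names below are illustrative assumptions, not taken from the papers reviewed here:

```python
def supplementary_control(delta_f, delta_p_tie, B, Ki, ace_integral, dt):
    """One step of tie-line-bias secondary control for a single area.

    delta_f      : area frequency deviation (Hz)
    delta_p_tie  : tie-line power deviation from schedule (pu MW)
    B            : frequency bias setting (pu MW per Hz)
    Ki           : integral gain of the supplementary controller
    ace_integral : accumulated integral of ACE so far
    dt           : time step (s)
    Returns the updated integral and the change in generation set-point.
    """
    ace = delta_p_tie + B * delta_f      # area control error
    ace_integral += ace * dt             # slow integral action
    delta_p_ref = -Ki * ace_integral     # raise/lower signal sent to the units
    return ace_integral, delta_p_ref
```

The primary (governor) loop would act within seconds inside the plant models, while this integral action drives the steady-state ACE to zero over minutes.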

The traditional power system industry has a "vertically integrated utility" (VIU) [9], [10], [14], [15], [32] structure and is treated as a single utility company which monopolizes generation, transmission and distribution in a certain geographic region. Interconnection between networks and interaction between companies are usually voluntary, to improve system reliability and performance. In the restructured or deregulated environment, vertically integrated utilities no longer exist. The first step in deregulation has been to separate the generation of power from the transmission and distribution, thus putting all the generation on the same footing as independent power producers (IPPs). So in the new scenario the utilities no longer own generation, transmission and distribution; instead, there are three different entities, viz., GENCOs (generation companies), TRANSCOs (transmission companies) and DISCOs (distribution companies).

III. Deregulation 

Analysis of the electrical industry begins with the reorganization of its three components: Generation, Transmission and Distribution.



Fig. 1: Schematic diagram of the power system: the utility (generation, transmission, distribution) delivers energy (kWh) to the end-use customer, who receives a bill from the utility.

Once electricity is generated [3], [7], whether by burning fossil fuels, harnessing wind, solar or hydro energy, or through nuclear fission, it is sent through high-voltage, high-capacity transmission lines to the local regions in which the electricity will be consumed. When electricity arrives in the region in which it is to be consumed, it is transformed to a lower voltage and sent through local distribution wires to end-use consumers. In general, all three of these vertically related sectors have typically been tied together within a utility, which has been either investor-owned, state-regulated or owned by the municipality. For many years each sector was thought of as a natural monopoly.
Electric deregulation, also known as electric restructuring, is the process by which the traditional monopoly structure for generating and delivering power to retail consumers is opened to competition by legislative or regulatory initiative. Addressed at the state level, electricity deregulation is in its early stages and is already beginning to deliver benefits for consumers, the economy and the future reliability of energy sources.

In the transmission and distribution sectors, effective competition would require rival firms to duplicate one another's wire networks, which would be inefficient. If wires owned by different companies were allowed to interconnect to form a single network, the flow on one line would affect the capacity of other lines in the system to carry power. The commodity that is opened to competition is called electricity generation or supply. Competitive suppliers offer it for sale, and customers can choose their competitive supplier. A customer's electricity bill will show a generation charge that represents the fee for the use of a certain amount of electricity. Other elements of the bill include amounts owed to the utility (now known as the distribution company) for delivering the power to consumers through poles and wires. This delivery function is not opened to competition.

Deregulation presents a chance to do a better job at keeping costs down and making sure consumers have the kind of choice that best suits their needs. Historically, deregulation or restructuring has the potential to produce gains in three broad sectors of the electric utility industry: operations, investment and consumption.

Concordia and Kirchmayer [1]-[3] have analyzed the AGC problem of two equal-area thermal, hydro and hydro-thermal systems. They have studied the AGC of a hydro-thermal system considering a non-reheat turbine and a mechanical governor in the hydro system, neglecting generation rate constraints. Their conclusions from simulation studies show that for minimum interaction between control areas the frequency bias (B) must be set equal to the area frequency response characteristic β. Although they have extensively studied the effect of variation of several parameters on the dynamic performance of the system, no explicit method has been suggested by them for the optimization of controllers. Concordia [3] has given the basic concepts of load-frequency control of an interconnected power system. He has discussed the effect of frequency bias and the governor-turbine model of a thermal system.
Cohn [7]-[8] has discussed mainly the selection of the frequency bias setting for a large multi-area power system. His investigations reveal that for minimum interaction between control areas, ideally the frequency bias setting (B) of a control area should match the combined generation and load frequency response, i.e. the area frequency response characteristic (β) of the area. However, Cohn has not addressed the problem of optimum gain settings and structures of the supplementary controllers from the point of view of the dynamic behavior of the system.
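To make the bias-setting rule concrete, a small hypothetical calculation (numbers assumed, not taken from the cited references): with load-damping constant D and aggregate droop R, the area frequency response characteristic is β = D + 1/R, and the rule discussed above sets B = β.

```python
# Hypothetical per-unit figures for one control area.
D = 0.8          # load damping: pu load change per pu frequency change (assumed)
R = 0.05         # aggregate speed regulation (droop) of the area (assumed)

beta = D + 1.0 / R    # area frequency response characteristic
B = beta              # bias setting recommended for minimum area interaction
print(B)              # 20.8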

Nanda and Kaul [9], [10] have extensively studied the AGC problem of a two-area reheat thermal system, using both the parameter-plane and ISE techniques for optimization of the integral gain setting and for investigating the degree of stability of the system. They have studied the effect of GRC, area capacity and the speed regulation parameter on the optimum controller setting and system dynamic responses. The effect of variation of significant parameters on the optimum controller setting and cost function has been brought out through sensitivity analysis neglecting GRC. However, they have not addressed the problems pertaining to correction of time error and inadvertent interchange accumulations.
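The ISE technique referred to above typically minimises an integral-of-squared-error cost over the simulated responses, for example J = ∫(Δf1² + Δf2² + ΔP_tie²) dt. A hedged sketch of evaluating such a cost is given below; the array names and the equal weighting are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def ise_cost(t, d_f1, d_f2, d_ptie):
    """Integral of squared error over simulated two-area AGC responses.

    t, d_f1, d_f2, d_ptie are equal-length arrays holding the time axis and
    the frequency / tie-line power deviations (unity weights assumed).
    """
    err2 = np.asarray(d_f1) ** 2 + np.asarray(d_f2) ** 2 + np.asarray(d_ptie) ** 2
    # Trapezoidal integration written out explicitly.
    return float(np.sum(0.5 * (err2[:-1] + err2[1:]) * np.diff(t)))

# The integral gain would then be chosen by sweeping Ki and keeping the value
# that gives the smallest ise_cost over the simulated step response.
```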

The IEEE committee report on "Power Plant Responses" [13] shows that in practice the GRC for reheat thermal systems varies between 2.5% and 12% per minute, and the permissible rate of generation for hydro plants is relatively much higher (typical generation rate constraint (GRC) values being 270% per minute for raising generation and 360% per minute for lowering generation), as compared to reheat-type thermal units having a GRC of the order of 3% per minute. Ref. [13] provides the transfer function models for steam and hydro turbines for AGC.

Nanda et al. [14], [15], [32] have investigated the AGC problem of an interconnected hydro-thermal system in both continuous and discrete modes, with and without GRC. They are possibly the first to consider GRC in investigating the AGC problem of a hydro-thermal system with conventional integral controllers. They have found the optimum integral controller settings and their sensitivity to GRC, the speed regulation parameter R, base load condition, etc. They have also studied the AGC problem of a hydro-thermal system considering GRC, where their main contribution is to explore the best value of the speed regulation parameter. They have considered a mechanical governor for the hydro turbine.
V. Donde, M. A. Pai and I. A. Hiskens [5] present AGC of a two-area non-reheat thermal system in a deregulated power system. The concepts of the DISCO participation matrix (DPM) and area participation factor (APF) to represent bilateral contracts are introduced. However, they have not dealt with a reheat turbine, GRC or a hydro-thermal system in their work.

Bekhouche [25] has compared load frequency control before and after deregulation. Before deregulation, ancillary services, including AGC, were provided by a single utility company called a control area that owned both transmission and generation systems. After deregulation, the power system structure has changed, allowing specialized companies for generation, transmission and distribution, and an independent system operator.

Richard D. Christie and Anjan Bose [20] have dealt with LFC (load frequency control) issues in deregulated power systems. Their work identifies the technical issues associated with load frequency control and also identifies technical solutions, such as standards and algorithms, needed for operation in the new restructured power system.

Meliopoulos, Cokkinides and Bakirtzis [23] have given the concept that in a deregulated environment, independent generators and utility generators may or may not participate in the load frequency control of the system. For the purpose of evaluating the performance of such a system, a flexible method has been developed and implemented. They proposed a method in which it is assumed that load frequency control is performed by the ISO (Independent System Operator) based on parameters defined by the participating generating units. The participating units comprise utility generators and independent power producers. The utilities define the units which will be under load-frequency control, while the independent power producers may or may not participate in the load frequency control. For all the units which participate in the load-frequency control, the generator owner defines (a) generation limits, (b) rate of change and (c) economic participation factor. This information is transmitted to the ISO. This scheme allows the utilities to economically dispatch their own systems, while at the same time permitting the ISO to control the interconnected system operation. In the paper it has been shown that if the percentage of units participating in this control action is very small, system performance deteriorates to a point that is unacceptable. It is therefore recommended that minimum participation requirements be established.

J. Kumar, Kah-Hoe Ng and G. Sheble [21], [22] have presented an AGC simulator model for price-based operation in a deregulated power system. They have suggested the modifications required in the conventional AGC to study load following in price-based market operations. A framework for price-based operation is developed to assist in understanding AGC operation in the new business environment. The modified AGC scheme includes the contract data and measurements, which are continuous, regular and quiescent and hence greatly improve the control signals to unit dispatch and controllers. The proposed simulator is generic enough to simulate all possible types of load-following contracts (bilateral and poolco). The proposed scheme includes ACE as a part of the control error signal and thus also satisfies the NERC performance criteria. The new framework requires the establishment of standards for the electronic communication of contract data as well as measurements. They have highlighted salient differences between automatic generation control in a vertically integrated electric industry (conventional scenario) and a horizontally integrated electric industry (restructured scenario). However, they have not addressed the aspects pertaining to a reheat turbine, GRC and a hydro-thermal system.

Donde, Pai and Hiskens [24] present AGC of a two-area non-reheat thermal system in a deregulated power system. In a restructured power system, the engineering aspects of planning and operation have to be reformulated although the essential ideas remain the same. With the emergence of the distinct identities of GENCOs, TRANSCOs and DISCOs, many of the ancillary services of a vertically integrated utility will have a different role to play and hence have to be modeled differently. Among these ancillary services is the automatic generation control (AGC). In the new scenario, a DISCO can contract individually with a GENCO for power, and these transactions are done under the supervision of the ISO or the RTO. In this paper, the two-area dynamic model is formulated, focusing specifically on the dynamics, trajectory sensitivities and parameter optimization. The concept of a DISCO participation matrix (DPM) is proposed, which helps the visualization and implementation of the contracts. The information flow of the contracts is superimposed on the traditional AGC system, and the simulations reveal some interesting patterns. The trajectory sensitivities are helpful in studying the effects of parameters as well as in optimization of the ACE parameters, viz. the tie-line bias and frequency bias parameters. The concepts of the DISCO participation matrix (DPM) and area participation factor (APF) to represent bilateral contracts are introduced.
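As an illustration of how a DISCO participation matrix encodes bilateral contracts (a made-up 2-GENCO by 2-DISCO example; the numbers are hypothetical and not taken from the cited work), each column gives the fractions of one DISCO's demand contracted from the GENCOs:

```python
import numpy as np

# Hypothetical DISCO participation matrix: rows = GENCOs, columns = DISCOs.
# dpm[i, j] is the fraction of DISCO j's demand contracted from GENCO i,
# so every column sums to 1.0.
dpm = np.array([[0.6, 0.3],
                [0.4, 0.7]])

disco_demand = np.array([0.1, 0.2])   # pu MW demanded by each DISCO (assumed)

# Scheduled (contracted) generation of each GENCO under these contracts.
genco_schedule = dpm @ disco_demand
print(genco_schedule)                 # approximately [0.12 0.18] pu MW
```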

IV. Conclusion 

The literature survey shows that most of the earlier work in the area of automatic generation control in deregulated power systems pertains to interconnected thermal systems, and no attention has been devoted to hydro-thermal systems involving thermal and hydro subsystems of widely different characteristics. The paper presents a critical review of AGC of hydro-thermal systems in a deregulated environment. Particular attention has been paid to categorizing the various AGC strategies in the literature and highlighting their salient features. The authors have made a sincere attempt to present the most comprehensive set of references for AGC. It is anticipated that this document will serve as a valuable resource for future workers in this important area of research.

References 

[1] C. Concordia, L. K. Kirchmayer, "Tie-Line Power & Frequency Control of Electric Power Systems", 

AIEE Trans., vol. 72, part III, 1953, pp. 562-572. 

[2] C. Concordia, L. K. Kirchmayer, "Tie-Line Power & Frequency Control of Electric Power Systems- 

Part II, AIEE Trans., vol. 73, part III-A, 1954, pp. 133-141. 

[3] L. K. Kirchmayer, "Economic Control of Interconnected Systems", John Wiley, New York, 1959. 

[4] O. I. Elgerd, C. E. Fosha, "Optimum Megawatt Frequency Control of Multi-area Electric Energy 

Systems", IEEE Trans, on Power Apparatus and Systems, vol. PAS-89, No.4, Apr. 1970, pp. 556-563. 

[5] C. E. Fosha, O. I. Elgerd, "The Megawatt Frequency Control problem: A New Approach via Optimal 

Control Theory", IEEE Trans, on Power Apparatus and Systems, vol. PAS-89, No.4, Apr. 1970, pp. 
563-574. 




[6] Nathan Cohn, "Some Aspects of Tie-Line Bias Control on Interconnected Power Systems", AIEE 

Trans., vol. 75, Feb. 1957, pp. 1415-1436. 
[7] Nathan Cohn, "Control of Generation & Power Flow on an Interconnected Power Systems", John 

Wiley, New York, 2ndEdition, July 1971. 
[8] IEEE Committee Report, "IEEE Standard Definition of Terms for Automatic Generation Control of 

Electric Power Systems", IEEE Trans. Power Apparatus and Systems, vol. PAS-89, Jul. 1970, pp. 

1358-1364. 
[9] J. Nanda, B. L. Kaul, "Automatic generation Control of an Interconnected Power System", IEE Proc, 

vol. 125, No.5, May 1978, pp. 385-391. 
[10] J. Nanda, M. L. Kothari, P. S. Satsangi, "Automatic Generation Control of an Interconnected Hydro-

thermal System in Continuous and Discrete modes considering Generation Rate Constraints", IEE 

Proc, vol. 130, pt. D, No.l, Jan. 1983, pp 17-27. 
[11] D.G. Ramey, J. W. Skooglund, "Detailed Hydro governor representation for System stability Studies", 

IEEE Trans, on Power Apparatus and Systems, vol. PAS-89, No. Jan. 1970, pp. 106-112. 
[12] Power plant responses," IEEE Committee report, IEEE Trans. Power Apparatus & Systems, vol. PAS- 

86, Mar. 1967, pp. 384-395. 
[13] IEEE Committee Report, "Dynamic Models for steam and Hydro Turbines in Power System Studies", 

IEEE Trans. Power Apparatus & Systems, Nov./Dec. 1973, pp. 1904-1915.
[14] M. L. Kothari, B. L. Kaul, J. Nanda, "Automatic Generation Control of Hydro-Thermal System", 

Journals of Institute of Engineers (India), pt. EL-2, vol. 61, Oct. 1980, pp. 85-91. 
[15] M. L. Kothari, J. Nanda, P. S. Satsangi, "Automatic Generation Control of Hydro-Thermal System 

considering Generation Rate Constraint", Journals of Institute of Engineers (India), pt. EL, vol. 63, 

June 1983, pp. 289-297. 
[16] M. Leum, "The Development and Field Experience of a Transistor Electric Governor for Hydro 

Turbines," IEEE Trans. Power Apparatus & Systems, vol. PAS-85, Apr. 1966, pp. 393-402. 
[17] F. R. Schleif, A. B. Wilbor, "The Co-ordination of Hydraulic Turbine Governors for Power System 

Operation," IEEE Trans. Power Apparatus and Systems, vol. PAS-85, No.7, Jul. 1966, pp. 750-758. 
[18] L. Hari, M. L. Kothari, J. Nanda, "Optimum Selection of Speed Regulation Parameter for Automatic 

Generation Control in Discrete Mode considering Generation Rates Constraint", IEEE Proc, vol. 138, 

No.5, Sept 1991, pp. 401-406. 
[19] P. Kundur, "Power System Stability & Control," McGraw-Hill, New York, 1994, pp. 418-448. 

[20] Richard D. Christie, Anjan Bose, "Load Frequency Control Issues in Power System Operations after 

Deregulation", IEEE Transactions on Power Systems, Vol. 11, No. 3, August 1996, pp. 1191-1196.
[21] J. Kumar, Kah-Hoe Ng and G. Sheble, "AGC Simulator for Price-Based Operation: Part I", IEEE 

Transactions on Power Systems, Vol. 12, No. 2, May 1997, pp. 527-532.
[22] J. Kumar, Kah-Hoe Ng and G. Sheble, "AGC Simulator for Price-Based Operation: Part II", IEEE 

Transactions on Power Systems, Vol.12, No.2, May 1997, pp 533-538. 
[23] A. P. Sakis Meliopoulos, G. J. Cokkinides and A. G. Bakirtzis, "Load-Frequency Control Service in a

Deregulated Environment", Decision Support Systems 24(1999) 243-250. 
[24] V. Donde, M. A. Pai and I. A. Hiskens, "Simulation and Optimization in an AGC System after 

Deregulation", IEEE Transactions on Power Systems, Vol. 16, No. 3, August 2001, pp 481-488. 
[25] N. Bekhouche, "Automatic Generation Control Before and After Deregulation", IEEE, 2002, pp. 321-323.
[26] H. L. Zeynelgil, A. Demiroren, N. S. Sengor, "Application of ANN Technique to AGC for Multi-area System", Electric Power and Energy Systems, July 2001, pp. 345-354.
[27] A. Demiroren, H. L. Zeynelgil, N. S. Sengor, "Application of ANN Technique to Load Frequency Control for Three-area Power System", IEEE PTC, Sept. 2001.
[28] A. Demiroren, E. Yesil, "Automatic Generation Control with Fuzzy Logic Controllers in the Power System Including SMES Units", Electric Power and Energy Systems, 26 (2004), pp. 291-305.
[29] S. P. Ghoshal, "Optimizations of PID Gains by Particle Swarm Optimizations in Fuzzy Based Automatic Generation Control", Electric Power and Energy Systems, April 2004, pp. 203-212.
[30] S. P. Ghoshal, "Application of GA/GA-SA Based Fuzzy Automatic Generation Control of a Multi-area Thermal Generating System", Electric Power Systems Research, 70 (2004), pp. 115-127.
[31] H. Shayeghi, H. A. Shayanfar and O. P. Malik, "Robust Decentralized Neural Networks Based LFC in a Deregulated Power System", Electric Power Systems Research, 19 Apr. 2006.
[32] Manoranjan Parida and J. Nanda, "Automatic Generation Control of a Hydro-Thermal System in Deregulated Environment", Proceedings of the Eighth International Conference on Electrical Machines and Systems, vol. 2, September 2005, pp. 942-947.









Authors 

L. Shanmukha Rao received the Bachelor's degree in Electrical and Electronics Engineering from Kakatiya University, Warangal, A.P., in 2006 and the Master's degree in Electrical Power Engineering from JNTUH, Hyderabad, in 2006. He is currently pursuing the Ph.D. degree with the Department of Electrical Engineering, JNTUH, Hyderabad. His research interests include power system operation and control. He is currently Associate Professor at Dhanekula Institute of Engineering & Technology, Ganguru, Vijayawada, AP, India.

N. Venkata Ramana received his M.Tech. from S.V. University, India, in 1991 and his Ph.D. in Electrical Engineering from Jawaharlal Nehru Technological University (J.N.T.U.), Hyderabad, India, in January 2005. His main research interests include power system modeling and control. He has authored 2 books on power systems, published 14 research papers in national and international journals and attended 10 international conferences. He is currently Professor at the J.N.T.U. College of Engineering, Jagityal, Karimnagar District, A.P., India.







Dynamic Routing Scheme in All-Optical Network 
using Resource Adaptive Routing Scheme 

S. Suryanarayana 1 , K.Ravindra 2 , K. Chennakesava Reddy 3 
1 Dept. of ECE, CMR Institute of Technology, JNT University Hyderabad, India 
2 Dept. of ECE, Mallareddy Institute of Tech & Science, JNT University Hyderabad, India 
3 Dept. of EEE, TKR College of Engg &Tech, JNT University Hyderabad, India 



Abstract 

With the increasing demand for high data transfer rates, communication systems are undergoing rapid development. To support high-data-rate services, communication increasingly relies on high-bandwidth architectures such as optical networks. In an optical network the communication medium is entirely optical, and data are transferred from node to node towards the destination via optical routers. Although these networks offer high bandwidth, they suffer heavy traffic congestion under non-uniform traffic, which degrades the quality of service. In this paper we present an adaptive methodology for developing a routing scheme in an optical network based on a queue-based mechanism at the wavelength routers, aimed at a comparatively higher quality of service.

KEYWORDS: All-optical network, resource adaptive routing, dynamic routing scheme, throughput, overhead.

I. Introduction 

Over the past ten to twenty years the usage of Internet services has been increasing drastically year by year, so communication systems have to be developed to meet the growing demand for data transfer. To provide quality of service for high-data-rate services, optical networks are the upcoming solution. Optical wavelength-division-multiplexing (WDM) networks provide large bandwidth and are promising networks for the future Internet. Wavelength-routed WDM systems that utilize optical cross-connects are capable of switching data in the optical domain. In such systems, end-to-end all-optical
light paths can be established and no optical-to electronic and electronic-to-optical conversions are 
necessary at intermediate nodes. Such networks are referred to as all-optical networks. Wavelength 
routed networks without wavelength conversion are also known as wavelength-selective (WS) 
networks [11]. In such a network, a connection can only be established if the same wavelength is 
available on all links between the source and the destination. This is the wavelength-continuity 
constraint. Wavelength routed networks with wavelength conversion are also known as wavelength- 
interchangeable (WI) networks [11]. In such a system, each router is equipped with wavelength 
converters so that a light path can be setup with different wavelengths on different links along the 
path. To establish a light path in a WDM network, it is necessary to determine the route over which 
the light path should be established and the wavelength to be used on all the links along the route. 
This problem is called the routing and wavelength assignment (RWA) problem. Routing and 
wavelength assignment requires that no two light paths on a given link may share the same 
wavelength. In addition, in WS networks, light paths must satisfy the wavelength continuity 
constraint, that is, the same wavelength must be used on all the links along the path. The RWA 
problem can be classified into two types: the static RWA problem and the dynamic RWA Problem. 
In the static RWA problem, the set of connections is known in advance, the problem is to set up light 
paths for the connections while minimizing network resources such as the number of wavelengths and 




the number of fibers. Alternatively, one may attempt to set up as many light paths as possible for a 
given number of wavelengths. 

Dynamic RWA tries to perform routing and wavelength assignment for connections that arrive 
dynamically. The objective of dynamic RWA is to minimize the blocking probability. The routing and 
wavelength assignment problem has been studied extensively. 

A summary of the research in this area can be found in [16]. This problem is typically partitioned into 
two sub-problems: the routing sub-problem and the wavelength selection sub-problem [2, 3, 5, 6, 7, 
10, 14]. For the routing sub-problem, there are basically three approaches, fixed routing [6], alternate 
routing [6, 7, 14], and dynamic adaptive routing [9, 10]. Among the routing schemes, dynamic 
adaptive routing, which is studied in this paper, offers the best performance. A large number of 
wavelength selection schemes have been proposed: random-fit [5], first-fit [5], least-used[15], most- 
used[15], min-product[8], least-loaded [11], max-sum[2, 15], and relative capacity loss[17]. 
The schemes can roughly be classified into three types. The first type, including random-fit and least-used, tries to balance the load among different wavelengths. The schemes in this category usually perform poorly in comparison to other types of RWA schemes. The second type, including first-fit, most-used, min-product, and least-loaded, tries to pack the wavelength usage. These schemes are simple and effective when the network state information is precise. The third type, including max-sum and relative capacity loss, considers the RWA problem from a global point of view. These schemes deliver better performance but are more computationally intensive than the other types of schemes. In this study, we investigate the impact of route overhead information on the performance of the routing and wavelength assignment algorithms.
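A minimal sketch of first-fit wavelength assignment under the wavelength-continuity constraint described above; the data structures and names are illustrative assumptions, not the authors' implementation:

```python
def first_fit_wavelength(path_links, free, num_wavelengths):
    """Pick the lowest-indexed wavelength that is free on every link of the path.

    path_links      : list of link identifiers along the chosen route
    free            : dict mapping each link to the set of wavelengths still free on it
    num_wavelengths : number of wavelengths per fiber
    Returns the chosen wavelength, or None if the call must be blocked.
    """
    for w in range(num_wavelengths):                       # first-fit search order
        if all(w in free[link] for link in path_links):    # wavelength-continuity check
            for link in path_links:                        # reserve it on every hop
                free[link].remove(w)
            return w
    return None                                            # no common free wavelength


# Tiny usage example: a 3-link route with 4 wavelengths per fiber.
free = {('A', 'B'): {0, 1, 2, 3}, ('B', 'C'): {1, 2}, ('C', 'D'): {1, 3}}
print(first_fit_wavelength([('A', 'B'), ('B', 'C'), ('C', 'D')], free, 4))  # prints 1
```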

II. Dynamic Routing Scheme 

In this paper we outline an approach for providing quality of service based on a route queue mechanism. It is observed that the congestion probabilities at the link points are very high and that a large amount of computation is carried out at each router to provide optimal routing. Since the overhead in the route is basically due to packet blockage and queuing, it is a prime requirement to reduce this overhead to achieve high-quality services. To achieve this objective, in this paper we propose a Markovian approach for a distributed optical network.

A queuing system consists of one or more routers that provide service of some sort to arriving nodes. Nodes that arrive to find all routers busy generally join one or more queues (lines) in front of the routers, hence the name queuing systems. There are several everyday examples that can be described as queuing systems [7], such as bank-teller service, computer systems, manufacturing systems, maintenance systems, communications systems and so on. A queuing system is characterized by three components: the arrival process, the service mechanism and the queue discipline.

2.1. Arrival Process 

Arrivals may originate from one or several sources referred to as the calling population. The calling population can be limited or 'unlimited'. An example of a limited calling population may be that of a fixed number of machines that fail randomly. The arrival process consists of describing how nodes arrive to the system. If A_i is the inter-arrival time between the arrivals of the (i-1)th and ith nodes, we shall denote the mean (or expected) inter-arrival time by E(A) and call λ = 1/E(A) the arrival rate (frequency).

2.2. Service Mechanism 

The service mechanism of a queuing system is specified by the number of routers (denoted by s), each server having its own queue or a common queue, and the probability distribution of the customers' service times. Let S_i be the service time of the ith customer; we shall denote the mean service time of a customer by E(S) and μ = 1/E(S) the service rate of a server.

2.3. Queue Discipline 

Discipline of a queuing system means the rule that a server uses to choose the next customer from the 
queue (if any) when the server completes the service of the current customer. Commonly used queue 
disciplines are: 




FIFO - Nodes are served on a first-in, first-out basis. LIFO - Nodes are served in a last-in, first-out manner. Priority - Nodes are served in order of their importance on the basis of their service requirements.

2.4. Measures of Performance for Queuing Systems: 

There are many possible measures of performance for queuing systems. Only some of these will be 

discussed here. 

Let D_i be the delay in queue of the ith customer, W_i = D_i + S_i be the waiting time in the system of the ith customer, Q(t) be the number of nodes in queue at time t, and L(t) = Q(t) + (number of nodes being served at time t) be the number of nodes in the system at time t.
Then the measures



d = lim_{n→∞} (1/n) Σ_{i=1}^{n} D_i    and    w = lim_{n→∞} (1/n) Σ_{i=1}^{n} W_i        (1)

(if they exist) are called the steady-state average delay and the steady-state average waiting time in the system. Similarly, the measures

Q = lim_{T→∞} (1/T) ∫_0^T Q(t) dt    and    L = lim_{T→∞} (1/T) ∫_0^T L(t) dt        (2)

(if they exist) are called the steady-state time-average number in queue and the steady-state time-average number in the system. Among the most general and useful results of a queuing system are the conservation equations:

Q = λ·d    and    L = λ·w        (3)

These equations hold for every queuing system for which d and w exist. Another equation of 
considerable practical value is given by, 

w = d + E(S) (4) 

Other performance measures are: the probability that any delay will occur; the probability that the total delay will be greater than some pre-determined value; the probability that all service facilities will be idle; the expected idle time of the total facility; and the probability of turn-aways due to insufficient waiting accommodation.
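As a quick numerical check of the conservation equations (3) and of equation (4) above; the figures are hypothetical, chosen only to show how the quantities relate:

```python
# Hypothetical steady-state figures for a single queuing node.
lam = 5.0        # arrival rate, packets per second
d = 0.3          # average delay in queue, seconds
ES = 0.1         # average service time E(S), seconds

w = d + ES               # average time in system        (eq. 4)
Q = lam * d              # average number in queue       (eq. 3)
L = lam * w              # average number in system      (eq. 3)
print(w, Q, L)           # 0.4 s, 1.5 packets, 2.0 packets
```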

2.5. Notation for Queues. 

Since all queues are characterized by their arrival process, service process, queue and queue discipline, a queue system is usually described in shortened form using these characteristics. The general notation is:

[A/B/s]:{d/e/f} 
Where, 

A = Probability distribution of the arrivals 
B = Probability distribution of the departures 
s = Number of routers (channels) 
d = The capacity of the queue(s) 
e = The size of the calling population 
f = Queue ranking rule (Ordering of the queue) 

Some special notation has been developed for various probability distributions describing the arrivals and departures. Some examples are:
M = Arrival or departure distribution that is a Poisson process 
E = Erlang distribution 
G = General distribution 
GI = General independent distribution 




Thus, for example, the [M/M/1]:{infinity/infinity/FCFS} system is one where the arrivals and departures follow a Poisson distribution, with a single server, infinite queue length, an infinite calling population and the FCFS queue discipline. This is the simplest queue system that can be studied mathematically. This queue system is also simply referred to as the M/M/1 queue.

III. System Design 

The common characteristic of all Markovian systems is that all interesting distributions, namely the distribution of the inter-arrival times and the distribution of the service times, are exponential distributions and thus exhibit the Markov (memoryless) property. From this property we have two important conclusions:

1. The state of the system can be summarized in a single variable, namely the number of nodes in the system. (If the service time distribution is not memoryless, this is no longer true, since not only the number of nodes in the system is needed, but also the remaining service time of the customer in service.)

2. Markovian systems can be directly mapped to a continuous-time Markov chain (CTMC) which can then be solved.

3.1. The M/M/1-Queue 

The M/M/1 queue has i.i.d. inter-arrival times, which are exponentially distributed with parameter λ, and i.i.d. service times, which are exponentially distributed with parameter μ. The system has only a single server and uses the FIFO service discipline. The waiting line is of infinite size. It is easy to find the underlying Markov chain: as the system state we use the number of nodes in the system. The M/M/1 system is a pure birth-death system, where at any point in time at most one event occurs, an event being either the arrival of a new customer or the completion of a customer's service. What makes the M/M/1 system really simple is that the arrival rate and the service rate are not state-dependent.

Steady-State Probabilities: 
We denote the steady-state probability that the system is in state k (k ∈ N) by p_k, which is defined by

p_k := lim_{t→∞} p_k(t)

where p_k(t) denotes the (time-dependent) probability that there are k nodes in the system at time t. The steady-state probability p_k does not depend on t. We focus on a fixed state k and look at the flows into the state and out of the state. The state k can be reached from state k-1 and from state k+1 with the respective rates λ·p_{k-1}(t) (the system is with probability p_{k-1}(t) in state k-1 at time t and goes with rate λ from the predecessor state k-1 to state k) and μ·p_{k+1}(t) (the same from state k+1). The total flow into state k is then simply λ·p_{k-1}(t) + μ·p_{k+1}(t). The state k is left with rate λ·p_k(t) to state k+1 and with rate μ·p_k(t) to state k-1 (for k = 0 there is only a flow coming from or going to state 1). The total flow out of that state is then given by λ·p_k(t) + μ·p_k(t). The total rate of change of the probability of state k is then given by the difference of the flow into that state and the flow out of that state:

dp_k(t)/dt = λ·p_{k-1}(t) + μ·p_{k+1}(t) - (λ + μ)·p_k(t)        (5)

Furthermore, since the p_k are probabilities, the normalization condition

Σ_{k=0}^{∞} p_k = 1        (7)

must hold.
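Solving these balance equations in steady state (dp_k/dt = 0) for the M/M/1 queue gives the well-known geometric distribution p_k = (1 - ρ)ρ^k with ρ = λ/μ < 1. A small sketch of this standard textbook result (not code from the paper):

```python
def mm1_state_probs(lam, mu, k_max):
    """Steady-state probabilities p_0 .. p_{k_max} of an M/M/1 queue."""
    rho = lam / mu
    assert rho < 1.0, "the queue is only stable if lambda < mu"
    return [(1.0 - rho) * rho ** k for k in range(k_max + 1)]

probs = mm1_state_probs(lam=4.0, mu=5.0, k_max=200)
mean_in_system = sum(k * p for k, p in enumerate(probs))
print(round(probs[0], 3), round(mean_in_system, 2))   # 0.2 and about 4.0 = rho/(1-rho)
```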

3.2. M/M/m-Queue 

The M/M/m queue (m > 1) has the same inter-arrival time and service time distributions as the M/M/1 queue; however, there are m routers in the system and the waiting line is infinitely long. As in the M/M/1 case, a complete description of the system state is given by the number of nodes in the system (due to the memoryless property). The M/M/m system is also a pure birth-death system.

3.3. M/M/1/K-Queue 

The M/M/1/K queue has exponential inter-arrival time and service time distributions, with respective parameters λ and μ. The nodes are served in FIFO order; there is a single server, but the system can only hold up to K nodes. If a new customer arrives when there are already K nodes in the system, the new customer is considered lost, i.e. it drops from the system and never comes back. This is often referred to as blocking. This behaviour is necessary, since otherwise (e.g. if the customer waits outside until there is a free place) the arrival process would no longer be Markovian. As in the M/M/1 case, a complete description of the system state is given by the number of nodes in the system (due to the memoryless property). The M/M/1/K system is also a pure birth-death system. This system is better suited to approximate "real systems" (such as routers), since buffer space is always finite.
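For the finite-buffer M/M/1/K system, the probability that an arriving node is blocked equals the steady-state probability p_K of finding the buffer full. A hedged sketch of the standard closed-form expression (a textbook result, not code from the paper; the example numbers are assumptions):

```python
def mm1k_blocking(lam, mu, K):
    """Blocking probability of an M/M/1/K queue (probability an arrival is lost)."""
    rho = lam / mu
    if rho == 1.0:
        return 1.0 / (K + 1)
    return (1.0 - rho) * rho ** K / (1.0 - rho ** (K + 1))

# Example: a router buffer of K = 10 packets at 80% offered load.
print(round(mm1k_blocking(lam=4.0, mu=5.0, K=10), 4))   # about 0.0235
```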

IV. Result Observation 

For the evaluation of the suggested approach, a distributed optical network environment has been developed.




Fig 1: optical network architecture considered 

The above figure illustrates how the assigned nodes are established as a network. By applying the routing method we obtain all the possible links between the nodes. Whenever a node has to deliver packets from the source to the destination, it has to follow the shortest and most reliable path; this is determined by the routing methodology. It is observed from the figure that the reliable path is chosen and the data are passed through that path (represented by red dotted lines). After calculation of the reliable path, the data packets travel to the destination in the communication phase; the setup phase finds all the possible paths shown in the figure.

Figure 2 plots the data queue against the number of data packets. It is observed that the delay performance of the proposed system is better than that of the conventional system. In the proposed model, queuing methods are used to reduce the delay; as the number of data packets increases, queue management remains good in the proposed scheme.

Fig. 2: Delay performance (data queue vs. number of data packets, proposed vs. conventional)














Fig. 3: Route delay plot (route delay vs. offered load, proposed vs. conventional)
Usually, when the offered load increases, the route delay increases as well, since higher load means more traffic and therefore more congestion. Queuing models are used to overcome the congestion in the network due to heavy traffic. The above plot shows how the route delay varies as the offered load is increased; for the proposed method the route delay is lower than for the conventional method.



Fig. 4: Route overhead (route overhead vs. communication time, proposed vs. conventional)
As route delay grows, the route overhead increases, which can lead to failures in data packet arrival and the chance of packet loss. By applying the queuing model this problem is mitigated: it is observed that, even as the communication time increases, the route overhead is lower in the proposed methodology than in the conventional method.

Fig. 5: Throughput plot (throughput vs. cycles, proposed vs. conventional)




For any system, throughput is the main parameter of interest. The above plot shows that the routing system used without any queuing model has lower throughput than the reliable model we have proposed; the throughput of the proposed scheme is comparatively high. Under a similar observation, a different network topology is simulated, with the following results:




Fig. 6: Link probability

Fig. 7: Route delay plot (route delay vs. offered load, proposed vs. conventional)



Fig. 8: Route overhead (route overhead vs. communication time, proposed vs. conventional)






Fig. 9: Throughput plot (throughput vs. cycles, proposed vs. conventional)



V. Conclusion 



In this paper the authors emphasize the importance of communication systems and, given their increasing usage, the need for developments in the communication area. The authors identify some problems in optical networks and propose a new methodology for developing a routing scheme in an optical network. The paper thus gives a clear idea of a different approach to improving the quality parameters based on an adaptive routing mechanism. The concept of route overhead due to queues at the link points is considered. A Markovian model is developed to obtain optimal routing in the optical network so as to achieve quality of service in a distributed optical network. The quality metrics obtained for the proposed approach are observed to be higher than those of the conventional routing scheme, and these improvements are demonstrated with the simulation results.

References 

[1]. G. Apostolopoulos, R. Guerin, S. Kamat, and S. Tripathi, "Improving QoS Routing Performance Under 

Inaccurate Link State Information." Proceedings of the 16th International Tele traffic Congress, June 7-11, 
1999. 
[2]. R. A. Barry and S. Subramaniam, "The MAX-SUM Wavelength Assignment Algorithm for WDM Ring 

Networks,", OFC'97, 1997. 
[3]. K. Chan and T.P. Yun, "Analysis of Least Congested Path Routing in WDM Light wave Networks," 

IEEE INFOCOM'94, vol. 2, pages 962-969, 1994. 
[4]. C. Chen and S. Banerjee, "A New Model for Optimal Routing and Wavelength assignment in 

Wavelength Division Multiplexed Optical Networks." In Proc. IEEE INFOCOM'96, 1996, pages 164-171. 
[5]. I. Chlamtac, A. Ganz, and G. Karmi, "Purely Optical Networks for Terabit Communication," IEEE 

INFOCOM'89, pages 887-896, 1989. 
[6]. A. Girard, Routing and dimensioning in circuits witched networks. Addison- Wesley, 1990. 

[7]. H. Harai, M. Murata, and H. Miyahara, "Performance of Alternate Routing Methods in All-Optical 

Switching Networks," IEEE INFOCOM'97, vol. 2, pages 516-524, 1997. 
[8]. G. Jeong and E. Ayanoglu, "Effects of Wavelength-Interchanging and Wavelength Selective Cross- 

Connects in Multiwavelength All-Optical Networks," IEEE INFOCOM'96, vol. 1, pages 156-163, March 
1996. 
[9]. J. P. Jue and G. Xiao, "An Adaptive Light path Establishment Scheme for Wavelength-Routed Optical 

Networks", IEEE ICCCN, 2000. 
[10]. L. Li and A. K. Somani, "Dynamic Wavelength routing Using Congestion and Neighborhood 

Information," IEEE/ ACM Transactions on Networking, 1999. 
[11]. E. Karasan and E. Ayanoglu, "Effects of Wavelength Routing and Selection Algorithms on 
Wavelength Conversion Gain in WDM Optical Networks," IEEE/ ACM Transactions on Networking, vol. 
6, no. 2, pages 186-196, April 1998. 




[12]. Urmila Bhanja, Sudipta Mahapatra, Rajashri Roy, "A Novel Solution to the Dynamic Routing and Wavelength Assignment Problem in Transparent Optical Networks", International Journal of Computer Networks & Communications, Vol. 2, No. 2, March 2010.
[13]. Virendra Singh Shekhawat, Dinesh Kumar Tyagi, V.K. Chaubey, "Weight Based Edge Disjoint Path 

Routing and Wavelength Assignment (WEDP-RWA) Algorithm for WDM Networks", IEEE,2008. 
[14]. S. Ramamurthy and B. Mukherjee, "Fixed- Alternate Routing and Wavelength Conversion in 

Wavelength-Routed Optical Networks," IEEE GLOBECOM'98,vol. 4, pages 2295-2302, 1998. 
[15]. S. Subramaniam and R. A. Barry, "Wavelength Assignment in Fixed Routing WDM Networks," IEEE 

ICC'97, pages 406-410, June 1997. 
[16]. H. Zang, J. P. Jue, B. Mukherjee, "A Review of Routing and Wavelength Assignment Approaches for 

Wavelength- Routed Optical WDM Networks", Optical Networks Magazine, Vol. 1, No. 1, January 

2000.pp 47-60. 
[17]. X. Zhang and C. Qiao, "Wavelength Assignment for Dynamic Traffic in Multi-fiber WDM Networks," 

IEEE ICCCN, pages 479-485, Oct. 1998. 
[18]. Chan, K. and Yum, T.P., "Analysis of least congested path routing in WDM lightwave networks." 

INFOCOM '94. Networking for Global Communications, 13 th Proceedings IEEE, 1994. pp. 962-969. 
[19]. Banerjee, D. and Mukherjee, B. "A Practical Approach for Routing and Wavelength Assignment in 

Large Wavelength-Routed Optical Networks." IEEE Journal on Selec. Areas in Comm., vol 14, No. 5, June 

1996. 
[20]. Dorigo, M. and Gambardella, L.M., "Ant-Colony System: A Cooperative Learning Approach to the 

Travelling Salesman Problem." IEEE Transactions on Evolutionary Computation, pp. 53-66. 
[21]. Dijkstra, E. "A note on two problems in connexion with graphs." Numerische Mathematik, 1959. vol. 

1, pp. 269-271. 
[22]. Karasan, E. and Ayanoglu, E., "Effects of Wavelength Routing and Selection Algorithms on Wavelength Conversion Gain in WDM Optical Networks." IEEE Trans. Networking, vol. 6, pp. 186-196, April 1998.
Dutton, H.J., Understanding Optical Communications, Prentice Hall, 1999.
[23]. Hui, Z., Jue, J., and Mukherjee, B. "A Review of Routing and Wavelength Assignment Approaches for 

Wavelength- Routed Optical WDM Networks," Optical Networks, January 2000. 
[24]. Zhang, X. and Qiao, C. "Wavelength Assignment for Dynamic Traffic in Multi-fiber WDM Networks," 

ICCCN '98, pp. 479-585, 1998. 
[25]. Stern, T.E. and Bala, K., "Multiwavelength Optical Networks." Addison- Wesley, 1999. 

Authors Biography 

S. SURYANARAYANA is working as Professor in the ECE Department, CMR Institute of Technology, Hyderabad, Andhra Pradesh, INDIA. He received the Bachelor's degree in Electronics & Communication Engineering from JNTU College of Engineering in 1991 and the M.E. (Microwaves) from Birla Institute of Technology (BIT), Ranchi. He is pursuing a Ph.D. (Optical Communications) under the guidance of Dr. K. Ravindra and Dr. K. Chenna Kesava Reddy. His research interests are optical communication, networking, switching & routing and electromagnetic waves. He has published 12 papers in international/national journals and conferences. He is an IEEE Member and a life member of ISTE.



K. RAVINDRA is currently working as Principal of Malla Reddy Institute of Technology & Science, Secunderabad. He received his B.Tech. degree in Electronics and Communication Engineering from Andhra University, his M.E. with Honors from the University of Roorkee (presently IIT Roorkee) and his Ph.D. in the field of Mobile Communications from Osmania University. He is a recipient of the GOLD MEDAL from Roorkee University for standing first in the Master's degree. He has engineering teaching experience of over 25 years in various capacities. Dr. Ravindra served as Senior Scientist in the "Research and Training Unit for Navigational Electronics", Osmania University, from December 2000 to December 2002.

Dr. Ravindra has 36 technical papers to his credit published in various international and national journals and conferences. He has co-authored 2 technical reports in the field of GPS. He is the resource person for video lectures produced by SONET, Andhra Pradesh. He is presently guiding 4 research scholars in the fields of optical & mobile communications, cellular and next-generation networks, etc. He is a life member of ISTE, IETE and SEMCE(I).






K. CHENNA KESAVA REDDY is currently working as Principal of TKR College of Engineering & Technology, Hyderabad. He received his B.E. (Electrical Engineering) in 1973 and M.Tech. (Electronic Instrumentation) from the Regional Engineering College, Warangal, in 1976. He obtained his Ph.D. (Power Electronics) from JNT University, Hyderabad, in 2001. He has guided a large number of graduate and postgraduate projects. He has worked in various capacities in JNTU.






Enhanced Bandwidth Utilization in WLAN for 

Multimedia Data 

Z. A. Jaffery 1, Moinuddin 2, Munish Kumar 3

1 Associate Professor, Jamia Millia Islamia, New Delhi, India

2 Professor, Jamia Millia Islamia, New Delhi, India

3 Sr. Lecturer, C-DAC, Noida, India



Abstract 

Deployment of wireless local area networks (WLANs) is growing consistently and demands support for multimedia applications with acceptable quality of service (QoS), which is attracting the interest of researchers globally. Under optimum QoS, only a limited number of VoIP calls can be supported by a WLAN: the Distributed Coordination Function (DCF) and Point Coordination Function (PCF), the two MAC protocols specified in the IEEE 802.11 standard, have upper bounds of 12 calls in DCF mode and 20 calls in PCF mode [1, 2, 3, 4]. In this paper we propose a medium access mechanism in which audio data is transmitted in PCF mode and best-effort traffic in DCF mode. In the proposed mechanism, the polling list is dynamically updated so that only those stations are polled which have voice packets ready to transmit. We also propose a multi-queued MAC architecture for the access point and consider voice traffic in CBR mode. The simulation results show that the maximum number of VoIP calls supported by 802.11b is 26 and 14 when the inter-arrival time for voice packets is 20 ms and 14 ms, respectively.

KEYWORDS: Medium Access Mechanism, Multimedia Data, QoS, Bandwidth.

I. Introduction 

In future generations of WLANs, IEEE 802.11 networks will influence the daily life of people. The 802.11 technology provides flexible and cheap wireless access capability. Deployment of an 802.11 WLAN is very easy in hospitals, stock markets, campuses, airports, offices and many other work places. Multimedia applications are increasing very fast in number as well as in size, and the demand for voice and broadband video services through WLAN connections is growing day by day. Real-time multimedia applications require strict QoS support such as guaranteed bandwidth and bounded delay/jitter [6,7,8,17]. The paper is organized as follows: Section 2 describes the functioning of the DCF and PCF modes, Section 3 discusses past work in this direction, Section 4 focuses on the limited QoS support in PCF, Section 5 describes the proposed approach, and Section 6 discusses the simulation environment and results.

II. BACKGROUND 

In this section, we present an overview of the IEEE 802.11 standard, which provides two different channel access mechanisms, namely the Distributed Coordination Function (DCF) and the Point Coordination Function (PCF). Our scheme introduces enhancements in the PCF access scheme.

2.1 Distributed Coordination Function (DCF) 

Most wireless LANs in the Industrial, Scientific and Medical (ISM) band use CSMA/CA (Carrier Sense Multiple Access/Collision Avoidance) as the channel access mechanism. The basic principles of CSMA are listen-before-talk and contention. It is an asynchronous, connectionless message passing mechanism that delivers best-effort service; no bandwidth or latency guarantees are given. CSMA is fundamentally different from the channel access mechanisms used by




cellular phone systems (i.e. TDMA). 

CSMA/CA is derived from the channel access mechanism CSMA/CD (Collision Detection) employed by Ethernet. However, collisions waste valuable transmission capacity, so rather than the collision detection (CD) used in Ethernet, CSMA/CA uses collision avoidance (CA). On a wire, the transceiver has the ability to listen while transmitting and hence to detect collisions, since on a wire all transmissions have approximately the same strength. But even if a radio node could listen on the channel while transmitting, the strength of its own transmission would mask all other signals on the air. Thus, the protocol cannot directly detect collisions as Ethernet does and only tries to avoid them.

The 802.11 standard defines the Distributed Coordination Function (DCF) as its fundamental access method, and it is based on CSMA/CA. DCF allows multiple independent stations to interact without central control. Figure 1 illustrates the basic access method used in the DCF protocol. If a station finds the channel idle for at least a DIFS period, it sends the first frame from its transmission queue. If the channel is busy, the station waits till the end of the current transmission and then starts the contention: it selects a random slot time, the so-called back-off time, from a Contention Window (CW) and waits for DIFS plus its back-off time.

Figure 1: DCF (CSMA/CA) basic access method (after the busy medium and a DIFS deferral, a back-off slot is selected from the contention window and decremented as long as the medium is idle)



The backoff time is calculated as

T_backoff = Rand(0, CW) * T_slot,

where T_slot is a time slot specific to the physical layer and Rand() is a uniformly distributed random function [2].

The back-off time is computed to initialize the back-off timer, and this timer is decreased only when the medium is idle; when the medium is sensed to be busy, the timer is frozen. When its back-off timer expires and the channel is still idle, the node sends the frame. Thus, the node that has chosen the shortest back-off time wins and transmits its frame. The other nodes simply wait for the next contention (after waiting for the end of this packet transmission). Because the contention period is derived from a random number chosen with a uniform distribution, and this is done for every frame, each station is given an equal chance to access the channel.
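As an illustration of the back-off rule just described, the following minimal C sketch (not taken from the standard or from this paper; the idle-sensing stub and the CW value are assumptions made only for demonstration) draws Rand(0, CW) * T_slot and decrements it only while the medium is sensed idle:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define SLOT_TIME_US 20   /* slot time (us) for 802.11b, as in Table 1 */

    /* Stub for carrier sensing: here the medium is reported idle 70% of the time. */
    static int medium_is_idle(void)
    {
        return (rand() % 10) < 7;
    }

    /* Draw a back-off time: Rand(0, CW) * T_slot, following the DCF rule above. */
    static int draw_backoff_us(int cw)
    {
        return (rand() % (cw + 1)) * SLOT_TIME_US;
    }

    /* Decrement the back-off only while the medium is idle; freeze it otherwise. */
    static int dcf_backoff(int cw)
    {
        int backoff_us = draw_backoff_us(cw);
        int sensed_slots = 0;

        while (backoff_us > 0) {
            if (medium_is_idle())
                backoff_us -= SLOT_TIME_US;   /* one idle slot elapses */
            /* else: timer frozen while the medium is busy */
            sensed_slots++;
        }
        return sensed_slots;                  /* back-off done, frame may be sent */
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        int cw = 7;                           /* CWmin from Table 1 */
        printf("Back-off finished after %d sensed slots (CW=%d)\n",
               dcf_backoff(cw), cw);
        return 0;
    }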

2.2 Point Coordination Function (PCF) [10,11,15]

Periods of contention-free service arbitrated by the Point Coordinator (PC) alternate with the standard DCF-based access (the contention period). The duration of the contention-free period can be configured. 802.11 describes the contention-free period as providing near-isochronous service, because the contention-free period will not always start at the expected time.

The contention-free service uses a centralized access control method. Access to the medium is restricted by the Point Coordinator, a specialized function implemented in access points. Associated stations can transmit data only when they are allowed to do so by the point coordinator. Contention-free access under the PCF resembles token-based networking protocols, with the point coordinator's polling taking the place of a token. Although access is under the control of a central entity, all transmissions must still be acknowledged. Figure 2 illustrates the PCF access method.
When the PCF is used, time on the medium is divided into contention-free period (CFP) and the 
contention period (CP). Access to the medium during the CFP is controlled by the PCF, while access 




to the medium in the CP is controlled by the DCF [12,13,14]. In order to be fair to contending traffic, the contention period must be long enough for the transfer of at least one maximum-size frame and its associated acknowledgement. Alternating periods of contention-free service and contention-based service repeat at regular intervals, called the contention-free repetition interval (also known as the super frame).

At the beginning of the CFP, the PC (which resides in the AP) transmits a management frame called the beacon. One of the fields carried in the beacon is the maximum duration of the CFP, CFPMaxDuration. The PC generates beacons at regular beacon frame intervals, so every station knows when the next beacon frame will arrive; this time is called the target beacon transmission time (TBTT). All stations receiving the beacon set the NAV to the maximum duration to lock out DCF-based access to the wireless medium. The access point maintains a polling list of associated stations and polls the stations in this list. Since time in the CFP is precious, acknowledgements, polling and data transfer may be combined to improve efficiency (as shown in Figure 2).



Figure 2: PCF Access Scheme (a contention-free repetition interval consisting of a contention-free period, with beacon, CF-Poll + data + CF-ACK exchanges separated by SIFS/PIFS and released by CF-End, followed by a contention period)

All CFP transmissions are separated by short inter-frame spaces (SIFS), and the PC waits only a short time (PIFS) if there is no response from a polled station; both intervals are shorter than the DCF inter-frame space (DIFS), which prevents interference from DCF traffic.

III. Related Work 

In the past many researchers have given analytical, simulation-based or experimental results on transmitting voice over IEEE 802.11 WLANs. The MAC protocols defined in the IEEE 802.11 standards [1,2,19] are DCF and PCF. The performance of transmitting voice over WLAN has been evaluated in DCF mode as well as in PCF mode. In PCF mode, various polling algorithms have been used to transmit voice, in the form of interactive human speech, over the WLAN.

In [3] Ali Zahedi and Kevin Pahlavan analytically obtained the capacity of an IEEE 802.11 WLAN with voice and data services in DCF mode. Precisely, the question they answered is how many network telephone calls can be carried over a WLAN with a predefined amount of data traffic, or what the maximum data traffic per user is for a given number of voice users. Priority is given to voice by assigning the UDP protocol for voice and the TCP protocol for data. They found that in 1 Mbps bandwidth a maximum of 18 voice users are supported with an upper bound on delay of 100 ms, and that decreasing the upper bound on delay to 50 ms reduces the maximum number of voice users to 14. Data traffic is assumed to be less than 10 Kbps.

Sachin Garg and Martin Kappes found an upper bound on the number of simultaneous VoIP calls that can be placed in a single cell of an 802.11b network in [4]. They performed an experiment in which multiple wireless PCs running Windows 2000 were associated with the same 802.11b AP, which was connected to a 100 Mbps Ethernet. The setup was used to make full-duplex VoIP calls between a wireless PC and a wired PC using IP phones. For each call the ITU G.711 codec was used, where frames are sent every 10 ms. Each call results in two RTP streams, from wired to wireless and vice versa. The number of VoIP connections with acceptable voice quality is tested by successively establishing new calls in addition to the ongoing calls. The quality of the connections was monitored




by measuring loss, jitter and round-trip time with a commercially available tool. For the first five calls, the quality of all the calls was acceptable: loss (0%), round-trip time (around 5 ms) and jitter (around 7 ms) were all in acceptable ranges for good VoIP quality. When the sixth call was placed, except for an increase in the round-trip time for some of the connections, the quality of all six simultaneous connections was still acceptable. As soon as the seventh call was placed, all seven wired-to-wireless streams started suffering approximately 16% loss and the call quality became unacceptable for all calls in this direction, while all wireless-to-wired streams still exhibited acceptable quality. In addition to this experiment they obtained an upper bound on simultaneous calls analytically. They found that when a G.711 codec with 20 ms audio payload is used, an 802.11b cell can support only 3 to 12 simultaneous VoIP calls, the actual number depending on the effective transmission rate of the wireless station, which for 802.11b can be 1 Mbps, 2 Mbps, 5.5 Mbps or 11 Mbps.

In [6,16] a practical investigation of the IEEE 802.11b MAC layer's ability to support simultaneous voice and data applications is carried out. The DCF mechanism is modified, and the new mechanism is called Back-off Control with Prioritized Queuing (BC-PQ). BC-PQ addresses the two shortcomings of the DCF mode with respect to voice. First, it distinguishes voice packets from data packets and gives higher priority to voice traffic; this is done by allocating separate prioritized queues for voice and non-voice traffic. Secondly, in addition to priority queuing, the enhanced AP transmits voice packets using zero back-off instead of the random back-off required by the 802.11b standard. The key parameter used to quantify voice performance is packet loss.

Jing-Yuan Yeh and Chienhua Chen in [7] proposed polling schemes (RR, FIFO, Priority and Priority Effort-Limited Fair) combined with the Point Coordination Function (PCF) to improve the utilization of the wireless channel and support a certain quality of service for multimedia traffic. The polling schemes proposed in [8] are the Round-Robin scheme (RR), the First-In-First-Out scheme (FIFO), the Priority scheme and the Priority Effort-Limited Fair scheme. All the above-mentioned polling schemes are simulated with the network simulator OPNET in [7]. It is found through simulations that all these schemes perform better than the DCF mode. To achieve the maximum throughput in a BSS, the FIFO scheme is found to be the best. The Priority scheme provides a simple way to support QoS of traffic; however, this scheme can exhaust all the bandwidth of best-effort traffic. The Priority-ELF scheme achieves high utilization of the wireless channel in the case of bursty traffic.

In [9,10,15] the capability of the Point Coordination Function (PCF) to support Voice over IP (VoIP) applications is evaluated. The capability of PCF mode to support variable bit rate (VBR) VoIP traffic is investigated, where the silence suppression technique is deployed in the voice codec so that no voice packets are generated during silence periods. Simulation shows that, under the PCF, using VBR mode for the VoIP traffic may effectively reduce the end-to-end delay of VoIP. The simulation is carried out in the OPNET network simulator. The upper bound on the number of VoIP connections in CBR mode is found to be 15, and in VBR mode it is 22. Brady's model and May and Zebo's model are used for the VBR voice traffic.

E. Ziouva and T. Antonakopoulos in [12] proposed a new dynamically adaptable polling scheme for 
efficient support of voice communications over IEEE 802.11 networks. They proved analytically that 
when silence detection is used their scheme improves the capability of IEEE 802.11 wireless LANs 
for handling voice traffic efficiently. Their polling scheme is called Cyclic Shift and Station Removal 
Polling Process (CSSR). 

In [11] a distributed fair queuing scheme called Distributed Deficit Round Robin (DDRR) is proposed 
which can manage bandwidth allocation for delay sensitive traffic. The question which is answered in 
this paper is how many voice and video connections can be accommodated in an 802.11 WLAN 
satisfying the imposed QoS requirements under different scheduling schemes. 

Xiyan Ma, Cheng Du and Zhisheng Niu in [14] proposed two adaptive polling list arrangement schemes in order to decrease the average delay of voice packets by reducing the possibility of null polls. One scheme is called Dynamic Descending Array (DDA) and the other, a combination of DDA and the traditional Round-Robin scheme, is called Hybrid DDA and RR (HDR).

IV. PCF with Limited QoS Support 

Although the contention-free service is designed in 802.11 networks to provide QoS for real-time 




traffic, this service also has some limitations [8,9]. The main limitations related to PCF are described below.

• Unpredictable beacon delay - This problem is related to the uncontrolled length of the CP. Indeed, the minimum length of the CP is the time required to transmit and acknowledge one maximum-size frame. It is possible for the contention service to overrun the end of the CP due to the transmission of contending traffic. When the contention-based service runs past the TBTT, the beacon is delayed and hence the CFP is foreshortened.

• Unknown transmission time of polled stations - A station which is polled by the PC is allowed to send a single frame that may be fragmented and of arbitrary length, up to a maximum of 2304 bytes (2312 bytes with encryption). Furthermore, different modulation and coding schemes are specified in 802.11a, so the duration of the MSDU delivery that follows the poll is not under the control of the PC. This may defeat any attempt to provide QoS to the other stations that are polled during the rest of the CFP.

• No knowledge of the offered traffic at the stations - In the CFP, the access point (AP) has no knowledge of the offered traffic at the polled stations. Thus, when polling the different stations with a round-robin scheduling algorithm, the PC may waste a lot of time before polling a particular station having time-critical traffic (e.g., CBR traffic). This may affect the QoS parameters for these traffic categories. Hence, with PCF there is no efficient scheduling algorithm that has knowledge of the different traffic categories at the associated stations and uses this knowledge to meet the requirements (e.g., latency, bandwidth) of these different traffic categories.

V. Proposed Mechanism to Access Media 

5.1 Overview 

In this section we discuss our proposed access media mechanism to transmit voice over WLAN. In our scheme we also use PCF mode to transmit voice and DCF mode to transmit best-effort traffic. A lot of valuable bandwidth is wasted when a station which has nothing to transmit is polled; keeping all the stations in the polling list and polling them in round-robin manner hampers channel utilization severely. We therefore propose a dynamic polling list arrangement that polls only those stations which have data to transmit. Also, in our scheme the downlink voice traffic from the AP is combined with the CF-Poll, which significantly reduces the overhead and thus may improve channel utilization. We give the MAC architecture of a WLAN station and of the access point below; how the polling list is handled dynamically is discussed in Section 5.4.

5.2 MAC Architecture of WLAN Station 

The MAC architecture of a WLAN station in our scheme is shown in Figure 3. Each station maintains two queues at its MAC layer, one for voice traffic and the other for best-effort traffic. When a frame arrives at the MAC layer of a WLAN station from its upper layers, a classifier checks whether it is a voice frame or a best-effort data frame, and accordingly it is put in the voice queue or the best-effort queue. When a station is polled in the CFP it transmits the voice frame at the head of the voice queue. The frames in the best-effort queue are transmitted in DCF mode in the CP.
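To make the two-queue behaviour concrete, here is a minimal C sketch of such a station-side classifier; it is only an illustration under stated assumptions (the frame structure, the is_voice flag and the queue length are invented for the example, not taken from the paper):

    #include <stdio.h>

    #define QUEUE_LEN 64

    /* Simplified frame descriptor; the is_voice flag stands in for whatever
     * classification the upper layers provide (e.g., RTP/UDP voice vs. TCP data). */
    struct frame {
        int is_voice;
        int payload_bytes;
    };

    /* One FIFO per traffic class, as in the station architecture of Figure 3. */
    struct fifo {
        struct frame slots[QUEUE_LEN];
        int head, tail, count;
    };

    static int fifo_push(struct fifo *q, struct frame f)
    {
        if (q->count == QUEUE_LEN)
            return -1;                      /* queue full, frame dropped */
        q->slots[q->tail] = f;
        q->tail = (q->tail + 1) % QUEUE_LEN;
        q->count++;
        return 0;
    }

    /* Classifier: voice frames go to the voice queue (served by PCF polls),
     * everything else goes to the best-effort queue (served by DCF in the CP). */
    static void station_enqueue(struct fifo *voice_q, struct fifo *be_q, struct frame f)
    {
        if (f.is_voice)
            fifo_push(voice_q, f);
        else
            fifo_push(be_q, f);
    }

    int main(void)
    {
        struct fifo voice_q = {0}, be_q = {0};
        struct frame voip = { 1, 160 }, web = { 0, 1500 };

        station_enqueue(&voice_q, &be_q, voip);
        station_enqueue(&voice_q, &be_q, web);
        printf("voice queue: %d frame(s), best-effort queue: %d frame(s)\n",
               voice_q.count, be_q.count);
        return 0;
    }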

Figure 3: MAC Architecture of Station (multimedia traffic from the upper layers is classified into a voice queue and a best-effort queue before reaching the transmission medium)




5.3 MAC Architecture of the Access Point 

The MAC architecture of the access point in our scheme is shown in Figure 4. The AP maintains one queue for best-effort traffic and (Poll_Limit + 1) queues for voice traffic at its MAC layer. Poll_Limit is the maximum number of stations which can be kept in the polling list in our dynamic polling arrangement scheme discussed in Section 5.4. When a frame arrives at the MAC layer of the AP, a classifier checks whether it is a voice frame or a data frame. If it is a data frame, it simply enters the best-effort queue and is transmitted in the CP using the DCF method. Otherwise, if it is a voice frame, it is further checked whether the destination station is in the polling list or not. If the destination station is not in the polling list, it is checked whether the number of stations in the polling list has reached Poll_Limit: if so, the frame enters the Non_Poll queue; otherwise the destination station is added to the polling list and the frame takes the corresponding path shown in Figure 4. If the destination station is already in the polling list, the frame enters the queue of its destination station among the Poll_Limit queues.
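A hedged C sketch of this AP-side decision logic is given below; the Poll_Limit value, the array-based polling list and the counters are illustrative assumptions only, not the paper's implementation:

    #include <stdio.h>
    #include <string.h>

    #define POLL_LIMIT 8          /* illustrative maximum polling-list size */

    struct ap_mac {
        int polling_list[POLL_LIMIT];  /* station IDs currently in the polling list */
        int n_polled;
        int per_station_q[POLL_LIMIT]; /* queued voice frames per polled station */
        int non_poll_q;                /* voice frames whose stations could not be polled */
        int best_effort_q;             /* data frames sent in the CP using DCF */
    };

    static int find_in_polling_list(const struct ap_mac *ap, int sta)
    {
        for (int i = 0; i < ap->n_polled; i++)
            if (ap->polling_list[i] == sta)
                return i;
        return -1;
    }

    /* Downlink classifier of the proposed AP architecture (cf. Figure 4). */
    static void ap_enqueue(struct ap_mac *ap, int dst_sta, int is_voice)
    {
        if (!is_voice) {                       /* data frame: best-effort, DCF in CP */
            ap->best_effort_q++;
            return;
        }
        int idx = find_in_polling_list(ap, dst_sta);
        if (idx < 0) {
            if (ap->n_polled >= POLL_LIMIT) {  /* polling list full */
                ap->non_poll_q++;
                return;
            }
            idx = ap->n_polled++;              /* add station to the polling list */
            ap->polling_list[idx] = dst_sta;
        }
        ap->per_station_q[idx]++;              /* voice frame queued for that station */
    }

    int main(void)
    {
        struct ap_mac ap;
        memset(&ap, 0, sizeof(ap));
        ap_enqueue(&ap, 3, 1);   /* voice to station 3: station added to polling list */
        ap_enqueue(&ap, 3, 1);   /* second voice frame for station 3 */
        ap_enqueue(&ap, 7, 0);   /* data frame: best-effort queue */
        printf("polled stations=%d, best-effort frames=%d\n",
               ap.n_polled, ap.best_effort_q);
        return 0;
    }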

5.4 Dynamic Polling list arrangement 

The main issues in maintaining a polling list are how to add a station in the polling list and how to 
remove a station from the polling list. 

5.4.1 Adding a station in the polling list: 

The stations which want to transmit voice frames in the CFP send a Resource Reservation (RR) request in the CP during a controlled contention interval (CCI). A CCI is started when the PC sends a specific control frame. Only those stations which want to transmit voice in the CFP contend in the CCI using CSMA/CA. In case of collision between RR requests no retransmission is done; the corresponding stations can send their RRs in the next CCI. If a downlink frame arrives at the MAC layer of the AP and the destination station is not in the polling list, the destination station is added to the polling list provided the number of stations in the polling list is less than Poll_Limit.

5.4.2 Removing a station from the polling list: 

A station is removed from the polling list when its connection time is over. 



Figure 4: MAC Architecture of the Access Point with Polling List (multimedia traffic from the upper layer classified into Poll_Limit per-station voice queues, a Non_Poll queue and a best-effort queue served in DCF mode before the transmission medium)




VI. Simulation Studies on the Proposed Access Media Mechanism 

6.1 Overview 

We have simulated the proposed access media mechanism in the C programming language. The parameters used for the simulation are given in Table 1, and the simulation results are discussed in Section 6.3. The simulation is run for 85000 cycles of the contention-free repetition interval; the results of an initial warm-up period of 5000 cycles are ignored.

6.2. Maximum Number of VoIP Connections in 802.11b 

A rough calculation can be made given the 11 Mbps channel rate and the 128 kbps needed for a duplex VoIP call; however, due to network-layer protocol overhead, inter-frame spaces, beacon frames, poll frames and CF-End frames, the usable channel capacity is only around 30% of the nominal rate. Figure 5 shows the plot of average packet delay against the number of VoIP connections when the inter-arrival time is 20 ms. We see that in this case the maximum number of voice calls is 26; beyond this point the uplink average packet delay goes up abruptly. Thus, when there is no best-effort traffic, the maximum number of calls supported by 802.11b is 26.
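As a rough cross-check of this figure, using only the numbers quoted above (11 Mbps nominal rate, about 30% usable capacity, 128 kbps per duplex call), a few lines of C reproduce the estimate:

    #include <stdio.h>

    int main(void)
    {
        double channel_rate_bps = 11e6;     /* nominal 802.11b rate */
        double usable_fraction  = 0.30;     /* approximate usable share after overheads */
        double call_rate_bps    = 128e3;    /* one duplex VoIP call */

        double max_calls = channel_rate_bps * usable_fraction / call_rate_bps;
        printf("estimated maximum duplex VoIP calls: %.1f\n", max_calls); /* about 25.8 */
        return 0;
    }

The estimate of roughly 26 calls is consistent with the simulated maximum for the 20 ms inter-arrival case.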

Table 1: Simulation Parameters

PHY Layer Specifications      DSSS
Transmission Rate             11 Mbps
Beacon Interval               60 ms
Beacon Size                   106 bytes
QoS Acknowledgement           14 bytes
CF-Poll Frame Size            34 bytes
CF-End Frame Size             20 bytes
PLCP Preamble                 18 bytes
PLCP Header                   6 bytes
SIFS Time                     10 us
PIFS Time                     30 us
DIFS Time                     50 us
Slot Time                     20 us
Nccop                         5
RTP Header                    8 bytes
UDP Header                    20 bytes
IP Header                     8 bytes
MAC Header                    34 bytes
CWmin                         7
CWmax                         255
Retry limit                   5


- No Best Traffic 



I 



40 

35 
30 
25 
20 
15 
10 



- 1 — i — i — i — i — i — i — i — i — i — i — i — i — i — i — i — i — i — i — r 
5 7 9 11 13 15 17 19 21 23 

Number Of VoIP Connections 



25 



Figure 5. Number of VoIP vs. Average Packet Delay 
(Inter Arrival Time is 20 ms) 




6.3 Simulation Results

Figure 6 shows the plot of average packet delay against the number of VoIP connections when the inter-arrival time is 10 ms. We see that in this case the maximum number of voice calls is 14; beyond this point the uplink average packet delay goes up abruptly. Thus, when there is no best-effort traffic, the maximum number of calls supported by 802.11b is 14.



Figure 6. Number of VoIP Connections vs. Average Packet Delay (Inter Arrival Time is 10 ms, no best-effort traffic)



As the best-effort traffic increases up to 10%, the numbers of voice calls supported in these two cases drop to 24 and 13 respectively, as shown in Figure 7 and Figure 8.



Figure 7. Number of VoIP Connections vs. Average Packet Delay (Inter Arrival Time is 20 ms, 10% best-effort traffic)






Figure 8. Number of VoIP Connections vs. Average Packet Delay (Inter Arrival Time is 10 ms, 10% best-effort traffic)

VII. Conclusion



In this work we studied VoIP over 802.11 networks from the perspective of the number of connections that an access point can support. We found that the maximum number of full-duplex VoIP connections supported by 802.11b is 26. Our access media mechanism improves the maximum number of VoIP connections supported. This enhancement is due to efficient utilization of the available bandwidth and will therefore also support multimedia real-time applications.



References 



[1]. IEEE Std 802.11, Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specifications, 1999.

[2]. IEEE Draft Std 802.11e, Amendment: Medium Access Control (MAC) Enhancements for Quality of Service (QoS), D2.0a, Nov. 2001.

[3]. Ali Zahedi and Kevin Pahlavan, "Capacity of a Wireless LAN with Voice and Data Services," IEEE Transactions on Communications, Vol. 48, No. 7, pp. 1160-1170, July 2000.

[4]. S. Garg and M. Kappes, "Can I add a VoIP call?" IEEE International Conference on Communication, pp. 779-783, 2003.

[5]. F. Anjum, M. Elaud, D. Famolari, A. Ghosh, R. Vidyanathan, A. Dutta and P. Aggarwal, "Voice Performance in WLAN Networks - An Experimental Study," IEEE Globecom, 2003.

[6]. M. Veeraraghavan, N. Cocker and T. Moors, "Support of Voice Services in IEEE 802.11 Wireless LANs," IEEE INFOCOM'01, Vol. 1, pp. 448-497, April 2001.

[7]. Jing-Yuan Yeh and Chienhua Chen, "Support of Multimedia Services with the IEEE 802.11 MAC Protocol," IEEE International Conference on Communication, 2002.

[8]. Andreas Kopsel and Adam Wolisz, "Voice Transmission in an IEEE 802.11 WLAN Based Access Network," WoWMoM, Rome, Italy, pp. 24-33, July 2001.

[9]. D. Chen, S. Garg, M. Kappes and Kishor S. Trivedi, "Supporting VBR VoIP Traffic in IEEE 802.11 WLAN in PCF Mode," Opnet Work 2002, Washington DC, August 2002.

[10]. E. Ziouva and T. Antonakopoulos, "Efficient Voice Communications over IEEE 802.11 WLANs Using Improved PCF Procedures," The Third International Network Conference - INC 2002, July 2002.

[11]. Xiyan Ma, Cheng Du and Zhisheng Niu, "Adaptive Polling List Arrangement Scheme for Voice Transmission with PCF in Wireless LANs," 10th Asia-Pacific Conference on Communications and 5th International Symposium on Multi-Dimensional Mobile Communications, 2004.

[12]. Ravindra S. Ranasinghe, Lachlan L.H. Andrew and David Everett, "Impact of Polling Strategy on




Capacity of 802.11 Based Wireless Multimedia LANs," IEEE International Conference on Networks, Brisbane, Australia, 1999.

[13]. T. Kawata, S. Shin, Andrea G. Forte and H. Schulzrinne, "Using Dynamic PCF to Improve the Capacity for VoIP Traffic in IEEE 802.11 Networks," IEEE Wireless Communications and Networking Conference, March 2005.

[14]. D. Chen, S. Garg, M. Kappes and Kishor S. Trivedi, "Supporting VoIP Traffic in IEEE 802.11 WLAN with Enhanced MAC for Quality of Service," Opnet Work 2003, Washington DC, September 2003.

[15]. Md. Atiur Rahman Siddique and Joarder Kamruzzaman, "Performance Analysis of PCF based WLANs with Imperfect Channel and Failure Retries," GLOBECOM 2010, 2010 IEEE Global Telecommunications Conference, Miami, FL, Dec. 2010, pp. 1-6.

[16]. Suchi Upadhyay, S. K. Singh, Manoj Gupta and Ashok Kumar Nagawat, "Improvement in Performance of the VoIP over WLAN," International Journal of Computer Applications (0975-8887), Volume 12, No. 4, December 2010, pp. 12-15.

[17]. Bum Gon Choi, Sueng Jae Bae, Tae-Jin Lee and Min Young Chung, "Performance Analysis of Binary Negative Exponential Backoff Algorithm in IEEE 802.11a under Erroneous Channel Conditions," ICCS Part-II, LNCS 5593, pp. 237-249, 2009.

[18]. Suparerk Manitpornsut and Bjorn Landfeldt, "On the Performance of IEEE 802.11 QoS Mechanisms under Spectrum Competition," IWCMC'06, July 2006, Vancouver, British Columbia, Canada, pp. 719-724.

[19]. Dileep Kumar, Yeonseung Ryu and Hyuksoo Jang, "Quality of Service (QoS) of Voice over MAC Protocol 802.11 using NS-2," CommunicabilityMS'08, October 2008, Vancouver, BC, Canada, pp. 39-44.



Authors 

Z. A. Jaffery obtained his B.Tech and M.Tech in Electronics Engineering from Aligarh Muslim University, Aligarh, India, in 1987 and 1989 respectively. He obtained his PhD degree from Jamia Millia Islamia (a central Govt. of India university) in 2004. Presently he is an associate professor in the Department of Electrical Engineering, Jamia Millia Islamia, New Delhi. His research areas include the application of soft computing techniques in signal processing, communication engineering and computer networking.



Moin Uddin obtained his B.Tech and M.Tech in Electrical Engineering from Aligarh Muslim University, Aligarh, India, in 1972 and 1978 respectively. He obtained his PhD degree from the University of Roorkee in 1992. Dr. Moinuddin is a professor in the Department of Electrical Engineering, Jamia Millia Islamia, New Delhi. Presently he is on deputation as Pro-Vice Chancellor of Delhi Technological University, New Delhi. He has guided several PhD students. His research areas include computer networking, soft computing and artificial intelligence.





Munish Kumar obtained his B.E. in Computer Science & Engineering from MNREC, University of Allahabad, in 1992 and his Master of Computer Science & Engineering from Jadavpur University, Kolkata, in 1995. He is working as Assistant Professor in the School of IT, C-DAC, Noida. Presently he is pursuing his PhD from Jamia Millia Islamia. His research areas include ad-hoc networks, sensor networks, wireless networks, mobile computing and real-time applications.









Analysis and Interpretation of Land Resources 
using Remote Sensing and GIS: A Case Study 

S.S. Asadi 1, B.V.T. Vasantha Rao 2, M.V. Raju 3 and M. Anji Reddy 4

1 Assoc. Prof., Deptt. of Civil Engineering, KL University, Green Fields, Vaddeswaram, Guntur, A.P, India
2 Asstt. Prof., Deptt. of Civil Engineering, P.V.P. Siddhardha Institute of Tech., Kannure, Vijayawada
3 Asstt. Prof., Deptt. of Civil Engineering, Vignan University, Vadllamudi, Guntur, A.P, India
4 Director Foreign Relations, Jawaharlal Nehru Technological University, Hyderabad, A.P, India



Abstract 

Human activities are constantly adding industrial, domestic and agricultural wastes to the groundwater reservoirs at an alarming rate. In the last few decades, in parallel with rapidly developing technology, increase in population and urbanization, we have been witnessing alarming phenomena all over the world. Anthropogenic activities, including the generation and indiscriminate disposal of solid wastes and the extensive use of fertilizers, have resulted in increasing levels of air, water and soil pollution, changing land use patterns, a decrease in arable land and other pressing problems. The thematic maps of the study area are prepared from linearly enhanced fused data of IRS-1D PAN and LISS-III merged satellite imagery and Survey of India (SOI) toposheets on 1:50,000 scale using the visual interpretation technique, and using AutoCAD and Arc/Info GIS software, forming the spatial database.

KEYWORDS: Thematic maps, groundwater quality, remote sensing and GIS

I. Introduction 

Man needs land for domestic purposes such as cooking, cleaning utensils, gardening, washing clothes and, above all, for drinking. It is also needed for commercial, industrial and recreational purposes. Land used for such purposes should not be polluted, but should be of good quality. Urbanization and industrialization have directly or indirectly polluted most of the land resources on a global scale. Impact studies can contribute to improving urban development and environmental planning at the project and policy levels, and they also introduce analytical tools to support such planning. Remote sensing applications have been operationalized in most of the natural resource management themes, and at present the trend is towards integrated surveys to arrive at sustainable development packages. Keeping this in view, an attempt is made in the present study.

II. Study Area 

The Maripeda Mandal lies geographically between latitudes 17° 20' 00" and 17° 35' 00" and longitudes 79° 45' 00" and 80° 00' 00", and is covered in the Survey of India toposheet numbers 56 O/14 and 56 O/15. It is one of the 51 Mandals of Warangal district in Andhra Pradesh. Maripeda town is at a distance of 90 km from Warangal (district headquarters) and 120 km from Hyderabad (state capital).




2.1 Study Objectives 

> To prepare digital thematic maps, namely the base map, transport network map, geomorphology map, groundwater potential map, land use/land cover map, hydrogeomorphology map, physiographic map, wasteland map, drainage map, etc., using satellite data, collateral data and field data on the ARC/INFO GIS platform. This constitutes the spatial database.

III. Methodology 

3.1 Data collection 

The different data products required for the study include Survey of India (SOI) toposheets bearing numbers 56 O/14 and 56 O/15 on 1:50,000 scale, and fused data of IRS-1D PAN and LISS-III satellite imagery obtained from the National Remote Sensing Agency (NRSA), Hyderabad, India. Collateral data collected from related organizations comprises water quality and demographic data [2].

3.2 Database creation 

Satellite imageries are geo-referenced using ground control points with SOI toposheets as a reference and further merged to obtain a fused, high-resolution (5.8 m of PAN) and coloured (R, G, B bands of LISS-III) output in the EASI/PACE image processing software. The study area is then delineated and subsetted from the fused data based on the latitude and longitude values, and a final hard copy output is prepared for the generation of thematic maps using the visual interpretation technique, as shown in Figure 1. These thematic maps (raster data) are converted to vector format by scanning with an A0 flatbed scanner and digitizing with AutoCAD software for the generation of digital thematic maps using Arc/Info and ArcView GIS software. The GIS digital database consists of thematic maps like land use/land cover, drainage and road network, prepared using Survey of India (SOI) toposheets and fused data of IRS-1D PAN and IRS-1D LISS-III satellite imagery (Figure 2).

Figure 1: Fused satellite imagery (IRS satellite image showing the area of Maripeda Mandal, Warangal district, A.P., Kharif season)



3.2.1 Spatial Database 

Thematic maps like the base map and drainage network map are prepared from the SOI toposheets on 1:50,000 scale using AutoCAD and Arc/Info GIS software to obtain baseline data. Thematic maps of the study area were prepared using the visual interpretation technique from the fused satellite imagery (IRS-1D PAN + IRS-1D LISS-III) and SOI toposheets along with ground truth analysis. All the maps are scanned and digitized to generate a digital output (Figure 1).






HI 




i#^/^T *3a 






Land Use - ji 
Land Cover Map " y^ 
Legend: 




| | BRock/SRock/S Waste 
i U Built-up -Are a 

H Double Crop 
y : ,J,, : ,| DryTank 

| Fallow Land 

i Land for Plotting 
j ~~ | Land with Scrub 

^ Land without Scrub 
Palleru River 

| River with Water 

] Single Crop 

| Tank with Water 




f? 




^S»l^SJ^^^^^ 




I^St^jB* ,» 




m*Z/g$ej££- "■3&y s *g'^£gI?>J 


SCALE 1:50,000 
Location Map: 




Si 






<kr§^t^ 



Figure 2 .Land Use/Land Cover 



IV. Results and Discussion 

4.1 Village Map 

The map showing the geographical locations of all villages in the Mandal is called the Village Map [1]. This map is prepared by digitization of the maps of the Central Survey Office. The revenue boundaries of all the villages are plotted in this map, and all available village data are attached to it as a database using GIS. This database is very useful for knowing the present scenario of a village, and the map is used for analyzing village-wise land resources. In the study area there are 23 revenue villages, of which Maripeda is the Mandal headquarters. By preparing the village map, the features of each individual village can be easily identified.

4.2 Base map 

It consists of various features like the road network, settlements, water bodies, canals, railway track, 
vegetation etc. delineated from the toposheet. The map thus drawn is scanned and digitized to get a 
digital output. The information content of this map is used as a baseline data to finalize the physical 
features of other thematic maps. 

4.3 Transport Network Map 

In the study area all the settlements are connected either by metalled or un-metalled roads, and a State Highway connects Maripeda. A railway network does not exist in the Maripeda Mandal; the nearest railway station is Khammam, which is at a distance of 18 km south-east of Maripeda village.

4.4 Drainage 

The drainage map is prepared by using Survey of India topographic maps on 1:50,000 scale. All the streams and tanks existing in the study area are marked in this map, and the streams are further classified based on stream ordering. Only two minor rivers, namely the Palleru and the Akeru, exist [3]. The existing drainage system is dendritic. Tank bunds are also marked in the map.

4.5 Watershed characteristics: 

The watershed map is prepared in accordance with the National Watershed Atlas and River Basin Atlas of India, 1985. According to this atlas (Watershed Atlas of India, 1990), India is divided into 6 regions, of which the present study area comes under Region 4, part of basin D, catchment 1, sub-



catchments C and D. The study area falls under the sub-catchment C watersheds (4D1C6, 4D1C7) and the sub-catchment D watershed (4D1D2); within these watersheds, 71 sub-watersheds have been delineated [4].

4.6 Slope map 

Slope classes 1, 2 and 3 are observed in the study area. Most of the study area is covered by the nearly level, very gently sloping and gently sloping classes (92%). A small part of the study area (4%) comes under the moderately sloping class 4, and 2% of the study area comes under the strongly sloping class 5 (IMSD Technical Guidelines, 1995).

4.7 Land Use/Land Cover 

The land use/land cover categories such as built-up land, agriculture, forest, water body and wastelands have been identified and mapped for the study area (Figure 2). A major part of the study area is covered with single crop and double crop (93%). About 0.015% of the study area is under built-up land and 0.017% is industrial area. From the satellite data the agricultural area (96.05%) could be clearly delineated into four categories: single crop, double crop, fallow land and plantations [5]. Single crop and double crop are observed in various parts of the study area, and plantations are observed at some places. Water bodies occupy 0.18%. About 0.46% of the study area is under scrub forest and 4.21% of the area is under wasteland; under this category land with scrub (3%), land without scrub (0.24%) and barren sheet rock (0.09%) are observed (Figure 2).



DATACQLlfCTIQN 



DATA INPUT 
I 



DATA CONVERSION 



DATABASE CREATION 



j_ 



SPATIAL DATABASE 

i 



Rew Satel ite Digita Data 

I 



Leading 



_!_ 



SOI Toposheet 



Pre-Frocessing 



Georeferendng [Extraction of GCPsJ 



EnhancEment 



Georefsrencing ^transferof GCP on image] 

I 



J 



Mosaicking 



Final rectifiEd 



Data Mending 
t 



Final LISS-III & PAN merged 
output (hard copy preparation) 



Visual Image Interpretation 



Field wort; for conformation of 
doubtful areas 



Generation of thEmatic maps from ImagEry/ toposheet 
Base, Drain-age,. Landuse/land caver, Transport at: 



Scannings Digitization using AUTOCAD, Export to Arc/1 nfc for analysis, editing, 
Cleaning, Crsation of thematic and topographical digital output maps in ArcVisw 



DATA ANALYSIS 



Idsntiftcation of Managamsnt Zones 



r:comm:\ca _ ion!. 



Figure 3: Flow chart showing the methodology adopted for the present study 




4.8 Geomorphology 

The geomorphological classes observed in the study area are pediplain with moderate weathering (PPM) (42%), pediplain with shallow weathering (PPS) (31%), valley (V) (14%), pediment (PD) (8%), pediment inselberg complex (PIC) (2%), inselberg (1%), pediment (1%), and dyke and dyke ridge (0.12%).

4.9 Geology: 

The study area constitutes mainly a granitic terrain (pink-grey) exposing a variety of Archaean granitoids of the Peninsular Gneissic Complex (PGC) and schistose (older metamorphic) rocks. They are intruded by basic dykes (Proterozoic) and covered locally by the Deccan traps (Upper Cretaceous to Lower Eocene) [6]. The geological categories observed in the study area are mainly granite (98%) and basalt (2%), along with some lineaments, dolerites and pegmatites.

4.10 Soil 

The specific objectives of soil mapping are the identification, characterization and classification of the soils of the area. The soil types identified in the study area are (1) loamy-skeletal, mixed, Rhodic Paleustalfs (55%); (2) fine loamy, mixed, Fluventic Ustropepts (10%); and (3) fine, montmorillonitic, Typic Haplusterts (35%).

4.11 Groundwater potential 

The groundwater potential map is prepared based on the analysis of various themes such as geomorphology, land use/land cover, lineaments, intersection points, drainage pattern and lithological evidence, using the converging evidence concept, besides the collateral data obtained from the State Groundwater Board with necessary field checks [7]. The groundwater potential map reveals the available quantum of groundwater and is delineated into zones showing high (53%), medium (30%) and low (17%) groundwater potential.

V. Conclusions and Recommendations 

1. Through the analysis of the soil attribute data, it is clear that 39% of the study area is affected by erosion. In future this may lead to sedimentation and other consequential problems in the major water bodies of the study area. This could best be controlled by the construction of gully control bunds and extensive reforestation, or through agricultural soil conservation and management practices.

2. As the irrigation water requirement varies with different crops, the cropping pattern in the study area should be changed for optimum utilization of this resource. Crops like pulses and vegetables should be cultivated, which may result in a reduction of the water requirement as well as of the fertilizer and pesticide load in the study area.

3. The three key activities that are essential for the development of a watershed area are, 

• Irrigation management 

• Catchment management 

• Drainage basin monitoring and management 

To address these three activities, planners need physical characteristic information on comprehensive lines. Hence, the present work concentrated on the development of physical characteristics for this study area. Planners at the execution level can rely upon this kind of physical characteristic information system for various other watersheds.

4. This study has led to the above-stated findings. The study will also be useful as input baseline data for models like LWAT (Land and Water Assessment Tool) that give more precise and detailed long-term predictions on land and water resources.




References 

[1] District Census Handbook of Hyderabad, 1991 Directorate of Census Operations, Andhra 

Pradesh, Census of India. 
[2] APHA, AWWA, WPCF, 1998 Standard Methods for the Examination of Water and Wastewater. 

(20 th edition). American Public Health Association, Washington DC, New York. 
[3] Tiwari, T.N and Mishra, M, 1985 A preliminary assessment of water quality index to major 

Indian rivers. Indian Journal of Environmental Protection, 5(4), 276-279. 
[4] Mahuya Das Gupta Adak, Purohit KM, Jayita Datta, 2001 Assessment of drinking water quality 

of river Brahmani. Indian Journal of Environmental Protection, 8(3), 285-291. 
[5] Pradhan, S.K., Dipika Patnaik and Rout, S.P, 2001 Water quality index for the ground water in 

and around a phosphatic fertilizer plant. Indian Journal of Environmental Protection, Vol.21, 

355-358. 
[6] Srivastava, A.K., and Sinha, D.K, 1994 Water Quality Index for river Sai at Rae Bareli for the 

premonsoon period and after the onset of monsoon. Indian Journal of Environmental Protection, 

Vol.14, 340-345. 
[7] Kurian Joseph, 2001 An integrated approach for management of Total Dissolved Solids in reactive 

dyeing effluents. Proceedings of International Conference on Industrial Pollution and Control 

Technologies, Hyderabad. 



Authors 

A. Sivasankar is working as Assoc. Prof., Deptt. of Civil Engineering, KL University, Green Fields, Vaddeswaram, Guntur, A.P, India. He has 14 years of research experience and has supervised 2 M.Sc. and 2 M.Tech. dissertations. He was Principal Investigator of a DST-sponsored Fast Track Young Scientist Project with a cost of Rs. 15,24,000/-.






IPV6 Deployment Status, the Situation in Africa 

and Way Out 

Agbaraji E.C., Opara F.K., and Aririguzo M.I. 
Electrical Electronic Engineering Deptt., Federal University of Technology Owerri, Nigeria 



Abstract 

The number of internet-connected devices is increasing tremendously, with each device assigned a unique Internet Protocol (IP) address at a time. Hence the expected problem of IPv4 address exhaustion in the near future calls for a better and permanent solution, which is switching to IPv6. Adoption and deployment of IPv6 have recorded a fast growth rate globally, though the slow growth rate recorded in Africa is suspected to be due to poor capacity building and the limited level of IPv6 awareness campaigns in the region. It is concluded that the developmental strategies created to help in the deployment of IPv6, such as the global awareness campaign, have been effective. Also, the World IPv6 Day provides a 24-hour experiment to uncover the challenges of the transition to IPv6 and to develop measures to resolve them.

KEYWORDS: Internet, IPv4, IPv6, RIRs, AfriNIC, ISP. 

I. Introduction 

Today, most electronic devices such as mobile phones, Personal Digital Assistants (PDAs), PCs, Internet telephones, etc., used in homes and other places rely on internet technology for their various services. Internet-connected devices use the Internet Protocol (IP) address to communicate over the network, with each device assigned a unique IP address; for any device to communicate through the internet, it must be assigned an IP address. Most private and business application services (online transactions), including social activities such as Facebook, Twitter, Yahoo, etc., depend on the IP address for their functions. Thus, the tremendous growth rate in the number of internet-connected devices and the high dependence on the internet for human daily activities have brought about the expected exhaustion of the long-used IPv4 addresses.

There are two versions of IP currently in use, Internet Protocol Version Four (IPv4) and Internet Protocol Version Six (IPv6), with IPv6 adopted proactively to solve the expected future exhaustion of the first and widely used version (IPv4). The Number Resource Organization (NRO), made up of the five regional internet registries (RIRs), was set up so that the registries work together at global and regional levels to promote the transition from IPv4 to IPv6 and lay out strategies to manage the distribution of the remaining unallocated IPv4 address pool [2].

The objective of this paper is to examine possible solutions to the transition challenges, focusing on the situation in Africa, which reflects the situation in most other developing nations and regions of the world. This leads to a thorough look at the global experiment and awareness campaign on World IPv6 Day, which was set up to uncover the transition problems and develop strategies to resolve them.

The analysis carried out in this work will be limited to the African IPv6 deployment from 1984 to 
2011 to justify the IPv6 promotion campaign realized through this 24-hour global experiment carried 
out every year. 

Section two discusses the internet protocol (IP) versions, Internet Protocol version four (IPv4) and Internet Protocol version six (IPv6); the regional internet registries (RIRs) and their functions are also




presented. Section three discusses the transition from IPv4 to IPv6, the importance of the transition and the trend. Section four presents the results on the deployment status of IPv6 in Africa, the situation in most of the countries in the region and the measures to improve the situation. Section five presents the conclusions and recommendations.

II. Overview 

Internet Protocol is a set of technical rules that define how computers communicate over a network 
[6]. There are currently two versions [6]: IP version 4 (IPv4) and IP version 6 (IPv6). IPv4 was the 
first version of Internet Protocol to be widely used and still accounts for most of today's Internet 
traffic. There are just over 4 billion IPv4 addresses. While that is a lot of IP addresses, it is not enough 
to last forever. IPv6 is a newer numbering system to replace IPv4. It was deployed in 1999 and 
provides far more IP addresses [6], which are expected to meet the need in the future. All internet 
connected devices and websites have an IP address so that the internet's servers know where to send 
information to. When a website's address (or URL) is typed into a browser, the system needs to 
convert it into an IP address so that it knows which computer to connect to [9]. To do this, the system 
uses the internet's equivalent of a phonebook, known as the Domain Name System (DNS). At the 
moment, the vast majority of IP addresses in the DNS resolve to IPv4 - the current standard for 
addresses. So even if you have an IPv6-enabled machine that is connected to an IPv6-enabled 
network, you will still be connected to another computer or website using IPv4. Some websites have 
been set up to use IPv6, but generally you need to type in a special web address (such as 
http://ipv6.sanger.ac.uk, or http://ipv6.google.com) to connect using the new protocol [9]. 
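As a small illustration of this dual resolution, the standard POSIX getaddrinfo() interface can return both A (IPv4) and AAAA (IPv6) records for a host; the sketch below simply prints whatever the local DNS returns (the host name is one of the examples mentioned above, and the output depends on the local resolver):

    #include <stdio.h>
    #include <string.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        const char *host = "ipv6.google.com";    /* example host from the text */
        struct addrinfo hints, *res, *p;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_UNSPEC;             /* ask for both IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        int err = getaddrinfo(host, "80", &hints, &res);
        if (err != 0) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }
        for (p = res; p != NULL; p = p->ai_next) {
            char buf[INET6_ADDRSTRLEN];
            void *addr;
            if (p->ai_family == AF_INET)
                addr = &((struct sockaddr_in *)p->ai_addr)->sin_addr;
            else
                addr = &((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            inet_ntop(p->ai_family, addr, buf, sizeof(buf));
            printf("%s -> %s (%s)\n", host, buf,
                   p->ai_family == AF_INET ? "IPv4" : "IPv6");
        }
        freeaddrinfo(res);
        return 0;
    }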
A global experiment and awareness campaign, World IPv6 Day, has been set up to uncover the transition problems and develop strategies to resolve them. Google, Facebook, Yahoo, Akamai and Limelight Networks are some of the organizations offering their content over IPv6 for a 24-hour 'test flight' [1][14]. The goal is to motivate organizations (Internet service providers, hardware makers, operating-system vendors and web companies) to prepare their services for IPv6, ensuring a successful transition as IPv4 addresses run out [14]. On World IPv6 Day, the Sanger Institute, along with more than 300 organisations, advertised both IPv4 and IPv6 addresses in the DNS [9], allowing users with IPv6-enabled devices to connect via IPv6 without the need to use a special address. IPv4 and IPv6 will coexist on the Internet for many years [7]. Users without IPv6 connectivity will continue to access the sites using IPv4 as normal [9] for the moment, but with increasing restrictions in the future. In comparison (Table 1), the major difference between IPv4 and IPv6 is the number of IP addresses: although there are slightly more than 4 billion IPv4 addresses, there are more than 16 billion-billion IPv6 addresses [6].

Table 1: Comparing IPv6 and IPv4 [6]

                      IPv4                                       IPv6
Deployed              1981                                       1999
Address size          32-bit number                              128-bit number
Address format        Dotted decimal notation: 192.168.0.202     Hexadecimal notation: 3FFE:0400:2807:8AC9::/64
Number of addresses   2^32                                       2^128

2.1. Regional Internet Registries 

Regional Internet Registries (RIRs) are independent, not-for-profit membership organizations that support the infrastructure of the Internet through technical coordination [2]. There are five RIRs in the world today (Figure 1). Currently, the Internet Assigned Numbers Authority (IANA) allocates blocks of IP addresses and ASNs, known collectively as Internet number resources, to the RIRs, who then distribute them to their members within their own specific service regions. RIR members include Internet Service Providers (ISPs), telecommunications organizations, large corporations, governments, academic institutions, and industry stakeholders, including end users. The RIR model of open, transparent participation has proven successful at responding to the rapidly changing Internet environment. Each RIR holds one to two open meetings per year, as well as facilitating online




discussion by the community, to allow the open exchange of ideas from the technical community, the 

business sector, civil society, and government regulators. 

The five RIRs are [2]:

AFRINIC - Africa region
APNIC - Asia and Pacific region
ARIN - Canada, many Caribbean and North Atlantic islands, and the United States
LACNIC - Latin America and parts of the Caribbean
RIPE NCC - Europe, parts of Asia and the Middle East

Fig. 1: The RIRs and the general areas of responsibility (courtesy of NRO) [5]

Each RIR performs a range of critical functions including [2]:

• The reliable and stable allocation of Internet number resources (IPv4, IPv6 and AS Number 
resources) 

• The responsible storage and maintenance of this registration data. 

• The provision of an open, publicly accessible database where this data can be accessed. 

• RIRs also provide a range of technical and coordination services for the Internet community. 

2.2. IPv4 Current Status 

The IPv4 address space is a 32-bit field. There are 4,294,967,296 unique values, considered in this context as a sequence of 256 /8s, where each /8 corresponds to 16,777,216 unique address values. Adding up the special-purpose address reservations, there are the equivalent of 35.078 /8 address blocks in this category [11]. This is composed of 16 /8 blocks reserved for use in multicast scenarios, 16 /8 blocks reserved for some unspecified future use, 1 /8 block (0.0.0.0/8) for local identification, a single /8 block reserved for loopback (127.0.0.0/8), and a /8 block reserved for private use (10.0.0.0/8). Smaller address blocks are also reserved for other special uses. The remaining 220.922 /8 address blocks are available for use in the public IPv4 Internet. IANA holds a pool of unallocated addresses, while the remainder has already been allocated by IANA for further downstream assignment by the RIRs [11]. The current status of the total IPv4 address space is indicated in Figure 2. 
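
As a quick sanity check of the /8 accounting above, the short C sketch below reproduces the arithmetic using the figures quoted from [11]; the program itself is ours and purely illustrative.

/* /8 accounting: 2^32 addresses = 256 /8 blocks of 16,777,216 addresses;
   subtracting the 35.078 /8s of special-purpose reservations leaves the
   220.922 /8s quoted for the public IPv4 Internet [11]. */
#include <stdio.h>

int main(void)
{
    unsigned long long total = 1ULL << 32;        /* 4,294,967,296 addresses */
    unsigned long long per_slash8 = 1ULL << 24;   /* 16,777,216 per /8       */
    double reserved_slash8 = 35.078;              /* special-purpose blocks  */
    double available_slash8 = 256.0 - reserved_slash8;

    printf("total addresses     : %llu\n", total);
    printf("/8 blocks           : %llu\n", total / per_slash8);   /* 256     */
    printf("available /8 blocks : %.3f\n", available_slash8);     /* 220.922 */
    printf("available addresses : %.0f\n", available_slash8 * per_slash8);
    return 0;
}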

Fig. 2: IPv4 Address Pool Status [11] (bar chart of pool size in /8s: IETF reserved 35.078, allocated and available 220.922)




The current RIR address status (Table 2) shows the present situation in the various RIRs based on the amount of assigned addresses and the remaining addresses. 

Table 2: The current RIRs address status

RIR         Assigned Addresses (/8s)    Remaining Addresses (/8s)
AFRINIC     8.5854                      4.4107
APNIC       53.7944                     1.2055
ARIN        77.9392                     5.9865
LACNIC      15.3909                     4.6091
RIPE NCC    45.2077                     3.7923



Stephen Groat, et al. [18] attested to the imminence of the migration from IPv4 to IPv6 addresses. They argued that current methods of IPv6 address assignment, both stateful and stateless, use static identifiers that allow geotemporal tracking of users, monitoring and correlation of traffic over multiple sessions, and targeting for system-specific cyber-attacks. Susan Trulove [19] from Virginia Tech discussed the need for IPv6 to replace the 20-year-old Internet Protocol version 4. Mark Tink [20] discussed the readiness for the effects of IPv4 exhaustion, the dual-stack IP deployment in May 2007 and the transition from IPv4 to IPv6 addresses in Africa. John Loughney [21] carried out research on IPv4 allocation pool exhaustion and the switch to IPv6 addresses. He concluded that IPv4 addresses will run out, but that there are dynamic issues which affect this. According to him, public IPv4 addresses may be needed for transition, so earlier usage of IPv6 can help. Silvia Hagen [22], in a brief study of IPv6, found that Asian countries, especially Korea, China, Japan and India, have embraced IPv6. She further argued that the USA and Europe are planning for IPv6 deployment, but nothing was found describing what Africa has done to prepare for IPv6 deployment. 

III. The Transition 

The internet is fast running out of addresses [9], [23]. By the end of the year it is thought that almost all the available addresses for computers, smartphones and websites will have been exhausted [9]. The best solution to ensure that the web can grow to its full potential is to change the way the system reads websites' addresses by moving to the next generation of addresses, known as IPv6. However, this system has never been used at a global scale and potential problems need to be uncovered before it can become the internet's new standard. The leading internet providers and websites collaborated in a 24-hour global experiment, World IPv6 Day. The goal of this day was to tackle this pressing issue and drive forward the change needed to ensure the continued delivery of full and free access to data and resources to the research community via the web [9]. 

On Wednesday 8 June, more than 300 organizations, institutions, companies and governments underwent a trial experiment of the future language of the internet: World IPv6 Day. From 1 am Nigerian time on Wednesday to 1 am Nigerian time on Thursday morning, alongside Facebook, Google, Cisco, Yahoo, Sony, universities and many US Government departments, institutions such as the Sanger Institute opened their websites to visitors using two methods of delivery [9]: the current standard of IPv4 and the future standard of IPv6. 

This change is needed because IPv4 is about to run out of addresses [23] for all the websites, computers, smartphones and internet-enabled devices that are coming on stream. In fact, the last major batch of available IPv4 addresses (about 80 million of them) was given out in a ceremony on 3 February 2011 [9], [15], [24]. It is expected that all these addresses will have been taken by September 2011 [9]. 

The move to IPv6 is facing challenges. Although the new addressing system was designed in the 1990s and its technical foundations are now well established, not everyone is currently using equipment that can handle IPv6. It is estimated that roughly 0.05% of internet users will not be able to connect to websites that offer both IPv6 and IPv4 (a system known as 'dual stacking') [9]. IPv4 and IPv6 will coexist on the Internet for decades, creating the need for additional transition mechanisms, because the dual-stack model won't solve all of the emerging problems for network operators [25]. To uncover any problems that might occur with a switch to IPv6, and to create an event to drive forward the change, the Internet Society is coordinating a global experiment. The society, a charity dedicated to ensuring the open development, evolution and use of the internet, is using this day-long 




event to create a common focus, bringing together all stakeholders in the internet to resolve any issues [9]: from governments and universities to internet service providers, operating system suppliers and hardware manufacturers. 

3.1. IPv6 Deployment Trend 

Globally, the rate of IPv6 deployment is growing super-linearly [17] (Figure 3), showing the response towards the adoption of this promising future technology. 

Fig. 3: IPv6 deployment growth [17]
The NRO announced that the rate of new entrants into the IPv6 routing system has increased by 
approximately 300 percent over the past two years. This growth has been attributed to the active 
promotion of IPv6 by the five RIRs and their communities [2]. 

IV. Results and Discussion 

The IPv6 adoption and deployment status in Africa was drawn from 1984 to 2011, as shown in Figure 4. The results were based on the latest information on IPv6 addresses allocated in the AfriNIC region [4]. The deployment, further categorized by country (Figure 5), also revealed the poor rate of IPv6 deployment recorded in most of the countries in the region. 

Fig. 4: IPv6 Addresses allocated yearly in the AfriNIC region, 1984-2011 [4]







Fig. 5: IPv6 Address allocations by country [4] (pie chart: South Africa, Kenya, Nigeria, Mauritius, Egypt, Ghana, Uganda, Tanzania, Zambia, Sudan, Zimbabwe, Rwanda and Other)
Note that "Other" refers to all countries that have less than 2% of the total. 

South Africa recorded the highest percentage of IPv6 address allocation, followed by Kenya and Nigeria, with 26%, 12% and 9% respectively (Figure 5). According to the results, twelve countries in Africa recorded 2% or more of the IPv6 address allocation, while the greater number of remaining countries recorded less than 2% of the allocation, indicating little or no awareness of the future internet communication technology in most countries in Africa. 
The poor IPv6 adoption in Africa as reflected in the results was attributed to the following: 

• People are not aware of the additional benefits IPv6 could bring to their lives; consequently, there is little motivating them to sustain momentum towards IPv6. The main cause is the absence of an active e-strategy program on the various government and political policy agendas. 

• The continent as a whole lacks a general IPv6 consciousness-raising campaign backed by capacity-building programs. Many organizations, including AFRINIC, tried to initiate some IPv6 capacity-building programs aimed mainly at technical bodies such as the Internet Service Providers (ISPs), but it is not clear that these efforts are progressing at the expected speed. It is also not strategic to limit the consciousness-raising campaign to those specific bodies. 

• Many ISPs in Africa are not yet aware of the technical and business benefits of IPv6. Some African ISPs still rely on the simple argument that, since IPv4 is working well at present, it is not cost-effective for them to change or replace it. 

4.1. The Consequences of Delay in IPv6 Deployment 

Without a dual-stacked network or deployed protocol translation services, an individual user gaining 
Internet access for the first time from an IPv6-only ISP may not be able to access the Web sites or 
mail servers for organizations that operate IPv4-only networks [8]. There are implications to not 
adopting IPv6. These implications become evident as wide-scale deployment of IPv6 accelerates. Not 
adopting IPv6 may cause the following types of issues for the various types of Internet users [8]: 
Individual Users: Individual users may not be able to view Web sites and communicate with certain 
destinations. Many individuals use the Internet to communicate with distant friends and family, 
research medical issues, and participate in group discussions among other things. 
Enterprise Organizations: Enterprise organizations and corporations may not be able to 
communicate with certain critical government resources, clients, and potential customers. E-mail is a 
critical form of communication for most enterprise organizations today and their Web sites are vitally 
important resources for them to communicate with the public. 




Governments: Governments may lose their ability to see and communicate with the "whole 
Internet." Access to information is critical for governments. There also may be an inability for citizens 
and other Internet users to access information about the government and communicate over the 
Internet with government agencies. 

Service Providers: Organizations that provide services over the Internet may experience customer 
and/or revenue losses if they do not update their offerings to include IPv6. Customers will expect to 
be able to communicate with everyone else on the Internet and may seek out other ways to do that if 
their current service provider is not capable. 

4.2. The Way-out 

In order to facilitate the rate of IPv6 deployment, the various levels of governance (government, academic and business organizations, etc.) in African countries should include in their day-to-day policy making, as an agenda item, active e-strategy programs to help not only the imperative deployment of IPv6 but also the development of other Information and Communication Technology (ICT) facilities. AFRINIC and other related organizations in Africa should be expected to develop capacity-building programs and IPv6 awareness campaigns for the general populace, thereby motivating people and enlightening them about the technical and business benefits derived from switching to IPv6. 

V. Conclusion and Recommendation 

The Internet Protocol version 6 (IPv6) has been adopted globally to solve the coming problem of Internet Protocol version 4 (IPv4) exhaustion. IPv4 was the first Internet protocol version to be used widely and still dominates today's internet traffic. The global rate of IPv6 deployment has accelerated due to the broad adoption of the technology in most developed regions of the world. The African situation was revealed to be different, as reflected by the poor IPv6 deployment rates recorded in most countries of the region by AfriNIC over the past years. 

The various development strategies stipulated to aid the deployment of IPv6, such as the global awareness campaign of World IPv6 Day and the general capacity building set up by the RIRs, have been confirmed effective by the recent improvement in IPv6 deployment recorded globally and even in Africa. 

It was recommended that the business sector should start to support IPv6 by hosting content on IPv6- 
enabled websites, ensuring accessibility to IPv6 users. Software and hardware vendors should 
implement IPv6 support in their products urgently, to ensure they are available at production standard 
when needed. Governments should learn more about IPv6 transition issues in order to support IPv6 
deployment efforts in their countries. IPv6 requirements in government procurement policies are 
critical at this time. Finally, civil society, including organizations and end users, should request IPv6 
services from their ISPs and vendors, to build demand and ensure competitive availability of IPv6 
services in coming years. 

References 

[1]. Carolyn, D. M., (2011). World IPv6 Day: Tech Industry's most-watched event since Y2K. http://www.networkworld.com/news/2011/060711-ipv6-expect.html 
[2]. Paul, W., (2008). IPv6 Growth Increases 300 Per Cent in Two Years. http://www.ripe.net/internet-coordination/news/industry-developments/ipv6-growth-increases-300-per-cent-in-two-years 
[3]. Philemon, (2007). Africa and IPv6. https://www1.ietf.org/mailman/listinfo/ietf 
[4]. AfriNIC, (2011). Statistics - IPv6 Resources. http://www.afrinic.net/statistics/ipv6_resources.htm 
[5]. Michael, K., (2008). IPv6: What you need to know. http://techrepublic.com.com 
[6]. ARIN, (2008). Simple overview of IPv6 and the differences between it and IPv4. http://techrepublic.com.com 
[7]. ARIN, (2008). IPv4 and IPv6 coexistence - what does that mean? http://techrepublic.com.com 
[8]. ARIN, (2008). What really happens to my company Internet access if it or my ISP network doesn't transition in time? http://techrepublic.com.com 
[9]. Phil, B., (2011). 24 hours to shape the Internet's future. http://www.sanger.ac.uk/about/press/features/ipv6day.html#t_pg3 
[10]. Geoff, H., (2003). The Myth of IPv6. The Internet Protocol Journal, Volume 6, No. 2. 
[11]. Geoff, H., (2011). Current Status. http://www.potaroo.net/tools/ipv4/#r4 
[12]. Takashi, A., (2011). IPv4 Address Report 2011; Projected RIR Unallocated Address Pool Exhaustion. http://inetcore.com/project/ipv4ec 
[13]. Adiel, A., Alian, P., (2010). Analysis of the future exhaustion of the IPv4 Central pool in relation to IANA and its impact on the AfriNIC region. http://www.afrinic.net/news/ipv4_exhaustion.htm 
[14]. Michael, K., (2011). How new research aims to protect our privacy on IPv6 networks. http://www.techrepublic.com/blog/security/how-new-research-aims-to-protect-our-privacy-on-ipv6-networks/5583?tag=nl.e036 
[15]. Matthew, F., (2011). Edging Toward the End of IPv4: A New Milestone in the History of the Internet. http://iso.org/wp/ietfjournal/files/2011/03/IETFJournal79FINAL.pdf 
[16]. Geoff, H., (2003). IPv4: How long do we have? The Internet Protocol Journal, Volume 6, No. 4. 
[17]. Derek, M., (2011). What is the Status of IPv6 Outside of Penn State? https://wikispaces.psu.edu/display/ipv6/IPv6+FAQs 
[18]. Stephen, G., (2009). Dynamic Observation of IPv6 Addresses to Achieve a Moving Target Defense. Virginia Tech Intellectual Properties. 
[19]. Susan, T., (2011). Greatest Winning Network Security. Blacksburg, Virginia Tech. 
[20]. Mark, T. IPv6 Deployment Africa Online Zimbabwe. Global Transit, Kuala Lumpur, Malaysia. 
[21]. John, L., (2008). Converged Communication and IPv6. Nokia. 
[22]. Silvia, H. IPv6 Deployment in Africa. http://searchnetworking.techtarget.com/answer/IPv6-deployment-in-Africa 
[23]. Christian, J., (2011). Zero Address, One Solution Two Problems. http://iso.org/wp/ietfjournal/?p=2187#more-2187 
[24]. Olaf, K., (2011). Words from the IAB Chair. http://iso.org/wp/ietfjournal/?p=2162#more-21862 
[25]. Carolyn, D. M., (2011). IPv4, IPv6 Coexistence Changes Network Operators. http://iso.org/wp/ietfjournal/?p=2173#more-2173 

Authors 

Emmanuel Chukwudi Agbaraji completed his Bachelor of Engineering (B.Eng.) degree in Electrical Electronic Engineering in 2006 from the Federal University of Technology Owerri (FUTO). He attained Microsoft Certified Professional (MCP) status in 2010. His research interests include software engineering, data communication and internet computing. At present Mr. Agbaraji is working on his thesis for the award of a Master of Engineering in computer engineering. 

Felix Kelechi Opara is presently the Head of the Electrical Electronic Engineering department at the Federal University of Technology Owerri (FUTO). He holds a PhD from FUTO, where he also completed his Master of Science and Bachelor of Engineering degrees. Engr. Dr. Opara is a certified engineering professional. His research interests include data communication, software engineering, and protocol development. 

Arvis Ijeaku Aririguzo is a lecturer at Federal Polytechnic Nekede. She holds a Bachelor of Engineering (B.Eng.) degree in Electrical Electronic Engineering from the Federal University of Technology Owerri. She is a certified engineering professional and a Cisco Certified Network Associate (CCNA). Her research interests include data communication and internet computing. Engr. Mrs. Aririguzo is presently working on her thesis for the award of a Master of Engineering in computer engineering. 







Study and Realization of Defected Ground Structures in the Perspective of Microstrip Filters and Optimization through ANN 

Bhabani Sankar Nayak 1, Subhendu Sekhar Behera 2, Atul Shah 1 

1 B.Tech, Department of ECE, National Institute of Science & Technology, Berhampur 
2 B.Tech, Department of CSE, National Institute of Science & Technology, Berhampur 



Abstract 

Defected ground structures (DGS) have been developed to enhance different characteristics of many microwave devices. In this paper a microstrip low pass filter with a dumbbell-shaped slot defected ground structure (DGS) is designed. The response of the filter is analyzed with respect to variation in the dimensions of the DGS unit. The variation of the defect dimensions is studied with the corresponding changes in capacitance, inductance and frequency response. The defect dimensions are modeled with respect to frequency using an artificial neural network. Optimizing the convergence of Artificial Neural Network (ANN) classifiers is an important task to increase the speed and accuracy of the decision-making. The frequency response of the microstrip filter is modeled with respect to the variation in DGS dimensions using CST Microwave Studio. The dimensions are further optimized in order to achieve minimum error in the frequency response. An incremental, online back-propagation learning approach is followed in the training of the neural network because of its learning mechanism based on the calculated error and its ability to keep track of previous learning iteratively. The simulation results are compared with the results obtained through the ANN and the designs are further optimized. 

KEYWORDS: Filters, defected ground structures, ANN, CST Microwave Studio. 

I. Introduction 

Defected Ground Structures (DGS) have been developed in order to improve the characteristics of many microwave devices [1]. Most of their advantages lie in the areas of microwave filter design, microwave oscillators, microwave couplers and microwave amplifiers. DGS is motivated by the study of electromagnetic band gap structures [2] and is most easily represented by an LC equivalent circuit. Presently there are vast applications of microwave components such as filters, amplifiers, couplers and antennas in various fields like mobile radio, wireless communication, and microwave and millimeter wave communication [4]. Basically, microstrip technology consists of a transmission line made of conducting material on one side of a dielectric substrate with the ground plane on the other side. A microwave filter is a two-port network used to control the frequency response at a certain point in a microwave system by providing transmission at frequencies within the pass band of the filter and attenuation in the stop band of the filter. Defected ground structures (DGS) are recently one of the hottest topics researched in the microwave domain, having developed from photonic band gap (PBG) structures [1]. The high characteristic impedance of DGS is also used in digital systems [2]. A DGS is an etched lattice which forms one or a few PBG etched elements in the ground plane. DGS can achieve high performance which cannot be obtained by conventional technology. Because of the advantages of DGS, such as small structure size, greater transitional sharpness, broader stop band responses, high characteristic impedance and a simple model, it has been widely used in the design of microwave filters. The defects in the ground plane of the transmission lines [3], such as dumbbell, elliptical and square shapes, disturb the shield current distribution and also change the characteristics of 



transmission lines, e.g. capacitance and inductance. The series inductance due to the DGS section increases the reactance of a microstrip with the increase of frequency. Thus the rejection of a certain frequency range can be obtained. The parallel capacitance with the series inductance provides the attenuation pole location, which is the resonance frequency of the parallel LC resonator. However, as the operating frequency increases, the reactance of the capacitance decreases. Thus a band gap between the propagating frequency bands can occur. By etching DGS on the ground plane it is possible for the designer to greatly increase the equivalent inductance L and to decrease the equivalent capacitance C at the same time, and finally to raise the impedance of the microstrip line to a level of more than 200 Ω [3]. The problem arises because there is no fixed mathematical model relating the frequency response to the change in dimension of the DGS unit cell. Our main focus lies in optimizing the frequency response with the help of an ANN trained with the Back Propagation algorithm [14]. Back Propagation is the most popular neural network training algorithm for supervised learning with a weight correction rule [11]. The weight correction methodology comprises back-propagating the errors from the output layer to the hidden layer, thereby finding the optimal set of weights. It is used to a great extent in the fields of data analysis, weather forecasting, trading analysis, etc. As the learning rate has a significant effect on the results, we choose the best value through iteration. This allows the Back Propagation to be optimized. The design procedure is presented in Section 2 along with its response due to the variation of different dimensions of the DGS. The designs are implemented using CST Microwave Studio and the results are analyzed. In the third section we implement the back propagation neural network to model the frequency response with respect to the dimensions of the DGS. The application of the artificial neural network ensures an optimum design methodology for microstrip filter design, which is revealed when comparing the results with analytical methods and the results of the simulation software [14]. The designs are made using the CST Microwave Studio software [15], as are the simulations for analyzing the frequency response for every change in dimensions of the DGS and the calculation of inductance and capacitance. The ANN algorithm is implemented using C programming in the DEV C++ compiler and the results obtained for training and testing are plotted with the help of MATLAB [15]. 

II. A Study on Related Work 

There has been a lot of research on the optimization of frequency response using different soft computing algorithms. A novel approach for calculating the resonant frequency of a microstrip antenna is presented in [14] by R. K. Mishra and A. Pattnaik. In reference [4] part of the optimization is made to model the frequency response of a planar microstrip antenna with respect to the change in dimension of the DGS. There are several algorithms to optimize the training process. Back Propagation is one of the most popular neural network training algorithms for supervised learning. The weight corrections are updated with a generalized delta rule to minimize the prediction error through iterations. Similar attempts have been made to choose the dielectric constant for antenna design using a neural network model [11]. In reference [13] a new methodology for determining the input impedance of a microstrip antenna is presented. In this paper we have implemented the artificial neural network algorithm to model the frequency response of a microstrip filter with respect to the dimensions of a dumbbell-shaped DGS. The weight correction methodology comprises back-propagating the errors from the output layer to the hidden layer, thus finding the optimal set of weights [7-9]. In this paper a feed-forward network (FFN) has been considered. An FFN allows the signal to flow from the input to the output layer in the feed-forward direction [7, 9]. 

III. Design of Filter and Response Due to Defected Ground 

The low pass filter configuration having five sections of alternating high and low impedances is shown in Figure 1. The LPF was designed using the formulations depicted in [3]. The filter designed is of fifth order. The dumbbell-shaped slot DGS section is fully described by two parameters: the etched lattice dimension and the gap distance. The influences of these two parameters on the frequency characteristics of a microstrip are shown by simulations. All simulations were carried out in CST Microwave Studio. The dimensions of the DGS slot are given in Figure 2 as l, w and g respectively. 






Fig. 1: Design of the five-section stepped-impedance low pass filter with 50 Ω feed lines, and its simulated response

(where the dimensions are given by w1 = 0.293 mm, w2 = 6.352 mm, l1 = 2.917 mm, l2 = 7.1323 mm, l3 = 11.036 mm, and the corresponding inductances and capacitances are L1 = 2.05 nH, C2 = 2.1472 pF, L3 = 6.634 nH, C4 = 2.146 pF, L5 = 2.05 nH) 

When the single dumbbell shaped slot is placed at the center, it provides inductance and hence by 
placing the DGS in the structure, effective inductance increases and the cut off frequency decreases. 







Fig. 2: Design of the low pass filter with the dumbbell-shaped DGS (dimensions l, w, g) and its simulated S-parameter response

The line width is chosen to give a characteristic impedance of 50 Ω for the microstrip line used in the simulations. Three DGS unit circuits were simulated with different dimensions. In order to investigate the influence of the square lattice dimension, the etched gap, which is related to the gap capacitance, was kept constant at 0.1 mm for all three cases and the etched square area was varied. A substrate with a thickness of 0.762 mm and a dielectric constant of 3.2 is used for all simulations. We observe that employing the proposed etched lattice adds series inductance to the microstrip line. This effective series inductance introduces a cutoff characteristic at a certain frequency. As the etched area of the unit lattice is increased, the effective series inductance increases, and increasing the series inductance gives rise to a lower cutoff frequency, as seen in Table 1. There are attenuation poles in the simulation results for the etched unit lattices. These attenuation poles can be explained by the parallel capacitance with the series inductance. This capacitance depends on the etched gap below the conductor line [4]. The capacitance values are identical for all cases due to the identical gap distance. However, the attenuation pole location, which corresponds to the resonance frequency of the parallel LC circuit, also becomes lower, because as the series inductance increases, the resonance frequency of the equivalent parallel LC circuit decreases. The results are shown in Table 1. 

Table 1: Variation of length and gap in DGS

Variable (unit)       d=7      d=8      d=9
Inductance (nH)       5.24     6.39     7.56
Capacitance (pF)      0.70     0.69     0.67
Cutoff freq (GHz)     1.70     1.48     1.34
Center freq (GHz)     2.59     2.36     2.21

Variable (unit)       G=0.1    G=1      G=2
Inductance (nH)       3.42     3.58     3.70
Capacitance (pF)      0.72     0.18     0.08
Cutoff freq (GHz)     2.25     3.4      3.52
Center freq (GHz)     3.16     7.14     8.5



The lattice dimension is kept constant at 5 mm for all three cases and the etched gap distance is varied. Due to the constant lattice dimensions, we can expect that the effective series inductances are also constant for all cases. There is no change in cutoff frequency despite the variation of the gap distance. This means that the gap distance does not affect the effective series inductance of a microstrip. Variation of the effective capacitance only affects the attenuation pole location [1]. As the etched gap distance increases, the effective capacitance decreases, so the attenuation pole location moves up to a higher frequency. When the single dumbbell-shaped slot is placed at the center, it provides inductance; hence, by placing the DGS in the structure, the effective inductance increases and the cutoff frequency decreases. The response is improved in terms of sharpness because of the decrease in capacitance. The cutoff frequency of the low pass filter is 1.66 GHz and the slope is 9.65 dB/GHz. When g is reduced to 0.1 mm the effective capacitance increases, which results in a lowering of the attenuation pole location. The insertion loss reaches -50 dB. As the area of the slot is kept constant, there is no change in effective inductance and hence the cutoff frequency is constant. When the width of the etched slot is decreased, the effective inductance decreases, because of which the cutoff frequency increases. Also the response is improved in terms of insertion loss and return loss. 
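
As a rough numerical check of the attenuation-pole explanation above, the C sketch below evaluates the resonance of the equivalent parallel LC circuit, f0 = 1 / (2*pi*sqrt(L*C)), for the L and C values listed in the d = 7, 8, 9 rows of Table 1; it is an illustration of ours, not part of the CST simulations.

/* Attenuation pole as the parallel LC resonance, using Table 1 values. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double L_nH[] = {5.24, 6.39, 7.56};   /* series inductance, Table 1      */
    double C_pF[] = {0.70, 0.69, 0.67};   /* parallel capacitance, Table 1   */
    int d_mm[]    = {7, 8, 9};

    for (int i = 0; i < 3; i++) {
        double f0 = 1.0 / (2.0 * PI * sqrt(L_nH[i] * 1e-9 * C_pF[i] * 1e-12));
        printf("d = %d mm: f0 = %.2f GHz\n", d_mm[i], f0 / 1e9);
    }
    return 0;   /* prints about 2.63, 2.40, 2.24 GHz, close to the 2.59,   */
}               /* 2.36, 2.21 GHz centre frequencies listed in Table 1     */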

Table 2: Variation with respect to the change in d

S.No    D (mm)    Cutoff frequency (GHz)    Slope (dB/GHz)
1       6.3       2.4214                    7.4808
2       6.1       2.4434                    7.3361
3       5.9       2.4787                    7.13



According to the quasi-static theory of DGS depicted in [4], the electric and magnetic fields are mostly confined under the microstrip line. The return current on the ground plane is the mirror image of the current distribution on the strip line. The maximum surface current lies over the ground plane and the width of the side filament arm, which contribute to the inductance of the DGS [4]. The gap is represented by equivalent capacitances; the inductances and capacitances are derived from the physical dimensions using quasi-static expressions for microstrip crosses, lines and gaps given in [5]. The electrical equivalent model of the DGS is given below [4, 6]. It has been observed that for various changes in the dimensions of the DGS we get a different frequency, for which no mathematical model has yet been established. So, for simplification, we implement a neural network in order to model the frequency change and optimize the design. In the next section we implement the artificial neural network, using the Dev C++ compiler [15] for the training and testing of the network. This is validated with simulations made in CST Microwave Studio on the desired set of testing data sets, and the frequency response is also checked. 






Fig. 3: Equivalent circuit of the DGS (a parallel LC resonator in series with the microstrip line)

IV. Optimization Through ANN 

An artificial neural network has been implemented to solve the problem of accurately determining the frequency of the dumbbell-shaped DGS for a desired set of DGS dimensions. The inputs to the ANN model are the defect dimensions l, w and g, and the target data is the frequency. 



Fig. 4: ANN model of the DGS

The Back Propagation algorithm implemented here comprises two phases. First, a training input pattern is presented to the network input layer and propagated forward through the hidden layers to the output layer to generate the output. If this output differs from the target output presented, an error is calculated (here the Mean Square Error). This error is back-propagated through the network from the output layer to the input layer and the weights are updated [7]. 

As we were not satisfied with plain back propagation, we investigated the results with the learning rate varied from 0.1 to 1.0, with a momentum constant equal to 0.9 to speed up the learning process. The epoch size for each learning rate is 20 epochs [7, 8]. The cost function used here is the Mean Square Error (MSE). The log-sigmoid function in equation (1) is used as the transfer function associated with the neurons in the hidden and output layers to obtain normalized [0, 1] nodal outputs. 

f(x) = 1 / (1 + e^(-x))        (1) 

As we use the log-sigmoid as the transfer function, we normalize the input values to the range [0, 1]. This reduces calculation complexity. The class values for each dataset are also normalized in the range from 0 to 1. 

V. Algorithm 

Step 1: Set the learning rate λ = 0.1 and the momentum constant α = 0.9. Initialize the number of tuples according to the dataset. Initialize the set of weights randomly. 




Step 2: Set MSE_total = 0 and i = 0. 

Step 3: Present the i-th input vector X_i0, X_i1, ..., X_i(N-1) and specify the desired output d_i0. Calculate the actual output Y_i0 and MSE_i. 

Step 4: Modify the weights, starting from the output layer and moving to the input layer, using the delta rule given below: 

W_jk(t+1) = W_jk(t) + λ δ_k x_j + α (W_jk(t) - W_jk(t-1))        (2) 

where W_jk(t) is the weight from node j to node k at time t; α is the momentum constant; x_j is either the output of node j or input j; λ is the learning rate; and δ_k is an error term for node k. If node k is an output node, then 

δ_k = y_k (1 - y_k)(d_k - y_k)        (3) 

where d_k is the desired output of node k and y_k is the actual output. If node k is an internal hidden node, then 

δ_k = x_k (1 - x_k) Σ_l δ_l W_kl        (4) 

where l runs over all nodes in the layer above node k. 

Step 5: MSE_total = MSE_total + MSE_i. 

Step 6: Repeat from Step 3 if i < number of tuples. 

Step 7: MSE_total = MSE_total / number of tuples. Store MSE_total. 

Step 8: Repeat Steps 2-6 for the number of epochs. 

Step 9: λ = λ + 0.1. Repeat from Step 2, with random re-initialization of the weights, if λ <= 1.0. 
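
As an illustration only, the following C sketch (the paper's ANN was written in C under DEV C++) implements the loop above for an assumed 3-2-1 network: inputs l, w, g; the two hidden neurons reported later; and one output for the normalized cut-off frequency. The four training tuples and the exact MSE bookkeeping are placeholders of ours, not the authors' data or code.

/* Incremental back-propagation with momentum for a 3-2-1 log-sigmoid network. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define NIN 3                 /* inputs: l, w, g (normalized to [0, 1])      */
#define NHID 2                /* hidden neurons                              */
#define NTUPLES 4             /* illustrative training tuples                */
#define EPOCHS 20             /* epoch size per learning rate (Step 8)       */

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }  /* Eq. (1) */

int main(void)
{
    /* hypothetical normalized (l, w, g) -> frequency pairs                  */
    double X[NTUPLES][NIN] = { {0.9, 0.4, 0.01}, {0.8, 0.4, 0.01},
                               {0.7, 0.5, 0.10}, {0.6, 0.5, 0.20} };
    double d[NTUPLES] = { 0.33, 0.36, 0.45, 0.52 };
    double alpha = 0.9;                       /* momentum constant (Step 1)  */

    for (double lambda = 0.1; lambda <= 1.0 + 1e-9; lambda += 0.1) {  /* Step 9 */
        double Wh[NHID][NIN], Wo[NHID];               /* random initial weights */
        double dWh[NHID][NIN] = {{0}}, dWo[NHID] = {0};  /* previous updates    */
        for (int j = 0; j < NHID; j++) {
            Wo[j] = (double)rand() / RAND_MAX - 0.5;
            for (int i = 0; i < NIN; i++)
                Wh[j][i] = (double)rand() / RAND_MAX - 0.5;
        }
        double mse = 0.0;
        for (int epoch = 0; epoch < EPOCHS; epoch++) {        /* Step 8 */
            mse = 0.0;                                        /* Step 2 */
            for (int t = 0; t < NTUPLES; t++) {               /* Steps 3-6 */
                double h[NHID], y = 0.0;
                for (int j = 0; j < NHID; j++) {              /* forward pass */
                    double s = 0.0;
                    for (int i = 0; i < NIN; i++) s += Wh[j][i] * X[t][i];
                    h[j] = sigmoid(s);
                    y += Wo[j] * h[j];
                }
                y = sigmoid(y);
                mse += (d[t] - y) * (d[t] - y);

                double delta_o = y * (1.0 - y) * (d[t] - y);  /* Eq. (3) */
                for (int j = 0; j < NHID; j++) {
                    double delta_h = h[j] * (1.0 - h[j]) * delta_o * Wo[j]; /* Eq. (4) */
                    double u = lambda * delta_o * h[j] + alpha * dWo[j];    /* Eq. (2) */
                    Wo[j] += u; dWo[j] = u;
                    for (int i = 0; i < NIN; i++) {
                        double v = lambda * delta_h * X[t][i] + alpha * dWh[j][i];
                        Wh[j][i] += v; dWh[j][i] = v;
                    }
                }
            }
            mse /= NTUPLES;                                   /* Step 7 */
        }
        printf("learning rate %.1f -> MSE after %d epochs: %.6f\n",
               lambda, EPOCHS, mse);
    }
    return 0;
}

Sweeping λ in the outer loop mirrors Step 9, so the printed MSE for each learning rate can be compared directly to pick the best value, as is done in the next section.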

The ANN model is shown above with the dimensions l, g, w as inputs and the cutoff frequency, obtained from the output of the ANN, for the chosen dielectric substrate. The design is made and parametric variations are obtained for 80 observations; 60 are used for training and the remaining 20 are used for testing. The best learning rate is chosen by testing each value from 0.1 to 1.0; the learning rate of 0.1 turned out to be the best. The neural network with 2 neurons in 1 hidden layer and the best learning rate reduces the error to 0.003401 in only 20 epochs while testing the neural network. The results obtained from the ANN were checked by designing in CST, and the frequency responses matched closely with the results obtained from the ANN. An incremental back-propagation learning approach is followed in the training of the neural network [9, 10]. The training result is shown below with the least error of 0.003401. 



Fig. 5: ANN training result and regression plot (MSE versus epochs at learning rate 0.1 for the training and test cases; the marked points are MSE = 0.003401 at epoch 14 and MSE = 0.001943 at epoch 20)




After the neural network model is created, different values of l, w and g are taken, the frequency response is calculated with the help of the artificial neural network, and the results are cross-checked with the help of CST Microwave Studio. It is observed that the neural network works efficiently in determining the accurate frequency of the microwave filter with the dumbbell-shaped DGS. When we chose l = 10, g = 0.1, w = 5 (in mm), the neural network output was found to be 1.649 GHz, while the simulated result, shown below, gives a frequency response at 1.6527 GHz. The response shown in Figure 6 indicates that the neural network works efficiently, with a least MSE of 0.003401. 



Fig. 6: Frequency response obtained with CST (S-parameter magnitude in dB versus frequency in GHz; cut-off marked at 1.6527 GHz, where S11 ≈ -3.14 dB and S21 ≈ -3.00 dB)



VI. Conclusion 



We designed a five-element LPF, after which a dumbbell-shaped defect was created in the ground plane. The addition of the defects enhances the response of the filter as well as reducing its size. The defect behaves as an L-C parallel circuit having a resonant frequency characteristic. It has a band-gap property, which is used in many microwave applications. The frequency response of the dumbbell-shaped defect is studied with respect to the dimensions of its length, width and gap. The application of the artificial neural network for obtaining the frequency response of the filter with respect to the dimensions of the defects is done with a minimal error of 0.003401. Although training the ANN model takes a little extra time, the successfully trained model can quickly provide precise answers to the task within its training value range. The neural network efficiently modelled the frequency, and the dimensions were optimized to give a better response with a least error of 0.003401. The learning rate is chosen to be highly optimized through iterations. The future scope of the work is to implement an Adaptive Neuro-Fuzzy Inference System (ANFIS) for the optimization and modeling of the frequency response of microwave circuits, which will have a better learning approach and a higher degree of accuracy in a shorter time in comparison to ANN, as shown in [16]. 

Acknowledgements 

The authors would like to thank CST Company, India, for their support with the CST EM tool. We are grateful to Prof. Rabindra Kishore Mishra for his kind suggestions and guidance in this work. We are thankful to P. K. Patra and A. K. Panda for their kind help. The authors are grateful to the anonymous reviewers for their constructive and helpful comments and suggestions. 

References 

[1] Kim, J. P. and Park, W. S., "Microstrip lowpass filter with multislots on ground plane," Electronics Letters, Vol. 37, No. 25, pp. 1525-1526, Dec. 2001. 
[2] Yang, F. and Rahmat-Samii, Y., "Electromagnetic band gap structures in antenna engineering", Cambridge University Press, USA, 2009. 
[3] Bharathi Bhat, Shiban K. Koul, "Stripline-like Transmission Lines for Microwave Integrated Circuits", New Age International (P) Ltd, Publishers, 2001. 
[4] A. K. Arya, M. V. Kartikeyan, A. Pattnaik, "Defected ground structures in microstrip antenna: a review", Frequenz 64 (2010). 
[5] Roy, S. M., Karmakar, N. C., Balbin, I., "Quasi-Static Modeling of Defected Ground Structure", IEEE Transactions on Microwave Theory and Techniques, vol. 54, no. 5, pp. 2160-2168, May 2006. 
[6] Oskouei, H. D., Forooraghi, K. and Hakkak, M., "Guided and leaky wave characteristics of periodic defected ground structures", Progress in Electromagnetics Research, PIER 73, 15-27, 2007. 
[7] Jung I. and Wang G., "Pattern Classification of Back-Propagation Algorithm Using Exclusive Connecting Network", World Academy of Science, Engineering and Technology 36, 2007. 
[8] Chattopadhyay S., Kaur P., Rabhi F., and Acharya R. U., "An automated system to diagnose the severity of adult depression", EAIT, 2011. 
[9] Jha G. K., "Artificial Neural Network", IARI, 2006. 
[10] http://www.softcomputing.net/ann_chapter.pdf 
[11] Patnaik, A., Mishra, R. K., Patra, G. K., Dash, S. K., "An artificial neural network model for effective dielectric constant of microstrip line," IEEE Trans. on Antennas and Propagation, Vol. 45, no. 11, p. 1697, November 1997. 
[12] S. S. Pattnaik, D. C. Panda, S. Devi, "A novel method of using Artificial Neural Networks to calculate input impedance of circular microstrip antenna". 
[13] Bailer-Jones D. M. and Bailer-Jones C. A. L., "Modelling Data: Analogies in neural networks, simulated annealing and genetic algorithms", in Model-Based Reasoning: Science, Technology, Values, L. Magnani and N. Nersessian (eds.), New York: Kluwer Academic/Plenum Publishers, 2002. 
[14] R. K. Mishra and Patnaik, "Designing Rectangular Patch Antenna Using the Neurospectral Method", IEEE Transactions on Antennas and Propagation, AP-51, 8, August 2003, pp. 1914-1921. 
[15] CST Design Studio, MATLAB and DEV CPP. 
[16] Guney, K. and N. Sarikaya, "A hybrid method based on combining artificial neural network and fuzzy inference system for simultaneous computation of resonant frequencies of rectangular, circular, and triangular microstrip antennas," IEEE Trans. Antennas Propagat., Vol. 55, No. 3, 659-668, 2007. 

Biographies 

Bhabani Sankar Nayak is currently pursuing his B.Tech in the Department of Electronics and Communication Engineering at National Institute of Science & Technology, Berhampur, Orissa. His research interests include electromagnetics, antennas, microwave circuits, CAD and soft computing. He is currently working as a research scholar at NIST under a scholarship program. 

Subhendu Sekhar Behera is currently pursuing his B.Tech in the Department of Computer Science & Engineering at National Institute of Science & Technology, Berhampur, Orissa. His research interests include soft computing, web design and algorithm design. 

Atul Shah is currently pursuing his B.Tech in the Department of Electronics and Communication Engineering at National Institute of Science & Technology, Berhampur, Orissa. His research interests include intelligent system design and embedded systems. 









Analysis of Discrete & Space Vector PWM Controlled Hybrid Active Filters for Power Quality Enhancement 

Jampula Somlal 1, Venu Gopala Rao Mannam 2 
1 Assistant Professor, Department of EEE, K L University, Vijayawada, A.P, India 
2 Professor & Head, Department of EEE, K L University, Vijayawada, A.P, India 



Abstract 

Harmonic distortion is one of the main power quality problems frequently encountered by utilities. The harmonic problems in the power supply are caused by loads with non-linear characteristics. The presence of harmonics leads to transformer heating, electromagnetic interference and solid-state device malfunctioning. In view of this concern, research has been carried out to mitigate harmonics. This paper presents an analysis and control methods for a hybrid active power filter using Discrete Pulse Width Modulation and Space Vector Pulse Width Modulation (SVPWM) for power conditioning in distribution systems. The Discrete PWM approach has the function of voltage stabilization and harmonic suppression; the reference current is calculated by the 'd-q' transformation. In the SVPWM technique, the Active Power Filter (APF) reference voltage vector is generated instead of the reference current, and the desired APF output voltage is generated by SVPWM. The THD is decreased more significantly by the SVPWM technique than by the Discrete PWM technique based hybrid filter. Simulations are carried out for the two approaches using MATLAB; it is observed that the %THD has been improved from 1.79 to 1.61 by the SVPWM technique. 

KEYWORDS: Discrete PWM Technique, Hybrid Active Power Filter, Reference Voltage Vector, Space Vector Pulse Width Modulation (SVPWM), Total Harmonic Distortion (THD), Voltage Source Inverter (VSI). 

I. Introduction 

High-power non-linear and time-varying loads, such as rectifiers, office equipment like computers and printers, and adjustable speed drives, cause undesirable phenomena in the operation of power systems, such as harmonic pollution and reactive power demand [1-2]. The application of passive tuned filters creates new system resonances which are dependent on specific system conditions. In addition, passive filters often need to be significantly overrated to account for possible harmonic absorption from the power system. Passive filter ratings must be coordinated with the reactive power requirements of the loads, and it is often difficult to design the filters to avoid leading power factor operation for some load conditions [3-4]. Parallel active filters have been recognized as a viable solution to current harmonic and reactive power compensation. Various active power filter configurations and control strategies have been proposed and developed in the last decade in order to reduce these undesirable phenomena. Active filters have the advantage of being able to compensate for harmonics without fundamental-frequency reactive power concerns. This means that the rating of the active power filter can be less than that of a comparable passive filter for the same non-linear load, and the active filter will not introduce system resonances that can move a harmonic problem from one frequency to another. The active filter concept uses power electronics to produce harmonic current components that cancel the harmonic current components from the non-linear loads. 

The active filter uses power electronic switching to generate harmonic currents that cancel the harmonic currents from a non-linear load. The active filter configuration investigated in this paper is 




based on a discrete pulse-width modulation and a pulse-width modulated (PWM) voltage source inverter based filter. 

Among the various topologies, the shunt active filter based on the Voltage Source Inverter (VSI) is the most common because of its efficiency [5]. The performance of an active filter depends on the adopted control approach. There are two major parts of an active power filter controller. The first determines the reference current of the APF and maintains a stable DC bus voltage. Various current detection methods, such as instantaneous reactive power theory, the synchronous reference frame method, supply current regulation, etc., have been presented. What these methods have in common is the need to generate the reference current of the Active Power Filter (APF), either from the load current or from the mains current. The second part controls the VSI to inject the compensating current into the AC mains. What these methods have in common is that they control the VSI with the difference between the real current and the reference current. 

In the discrete PWM technique based hybrid filter, the system has the function of voltage stabilization and harmonic suppression. The reference current can be calculated by the 'd-q' transformation [6-7]. The pulse-width modulated (PWM) voltage source inverter based filter differs from the previously discussed approach in the following ways: a) the APF reference voltage vector is generated instead of the reference current; b) the desired APF output voltage is generated by Space Vector Pulse Width Modulation (SVPWM) [8-9] based on the generated reference voltage. Therefore, the proposed method is simple and easy to carry out. This paper discusses the basic principle of this method in detail and proves its validity by simulation results. 

II. Proposed Control Methods 

2.1. Using Discrete PWM Technique Based Hybrid Filter 




Figure 2.1. Simulation circuit of integral controller with Discrete PWM Generator 

Figure 2.1 shows the integral controller used to generate the PWM pulses, which are generated based on the error produced by comparing the reference current and the source current. The calculated differences, along with the gains, are sent to the discrete PWM generator and the resultant PWM pulses are given to the IGBT bridge for control. 

2.2. d-q Transformation 

The abc_to_dq0 transformation block computes the direct-axis, quadrature-axis and zero-sequence quantities in a two-axis rotating reference frame for a three-phase sinusoidal signal. The following transformation is used: 

V_d = (2/3) [V_a sin(ωt) + V_b sin(ωt - 2π/3) + V_c sin(ωt + 2π/3)]        (2.1) 

V_q = (2/3) [V_a cos(ωt) + V_b cos(ωt - 2π/3) + V_c cos(ωt + 2π/3)]        (2.2) 

V_0 = (1/3) (V_a + V_b + V_c)        (2.3) 

where ω is the rotation speed (rad/s) of the rotating frame. 

The transformation is the same for the case of a three-phase current, which can be obtained by replacing the V_a, V_b, V_c, V_d, V_q and V_0 variables with the I_a, I_b, I_c, I_d, I_q and I_0 variables. This block 




can be used in a control system to measure the positive-sequence component V1 of a set of three-phase voltages or currents. The V_d and V_q (or I_d and I_q) components then represent the rectangular coordinates of the positive-sequence component. 
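
As a rough illustration of equations (2.1)-(2.3), the C sketch below applies the abc-to-dq0 transformation to a balanced set; the function and variable names are ours, not those of the MATLAB block.

/* abc -> dq0 transformation in a rotating frame at angle wt (rad). */
#include <stdio.h>
#include <math.h>

#define TWO_PI_3 (2.0 * 3.14159265358979 / 3.0)

void abc_to_dq0(double va, double vb, double vc, double wt,
                double *vd, double *vq, double *v0)
{
    *vd = (2.0 / 3.0) * (va * sin(wt) + vb * sin(wt - TWO_PI_3) + vc * sin(wt + TWO_PI_3));
    *vq = (2.0 / 3.0) * (va * cos(wt) + vb * cos(wt - TWO_PI_3) + vc * cos(wt + TWO_PI_3));
    *v0 = (1.0 / 3.0) * (va + vb + vc);
}

int main(void)
{
    /* balanced 1 p.u. three-phase set sampled at wt = 0.3 rad: for this sign
       convention vd comes out close to 1 while vq and v0 stay close to 0    */
    double wt = 0.3;
    double va = sin(wt), vb = sin(wt - TWO_PI_3), vc = sin(wt + TWO_PI_3);
    double vd, vq, v0;
    abc_to_dq0(va, vb, vc, wt, &vd, &vq, &v0);
    printf("vd = %.3f  vq = %.3f  v0 = %.3f\n", vd, vq, v0);
    return 0;
}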
2.3. Control Method Using SVPWM Based Hybrid Filter 

The main section of the APF shown in Figure 2.2 is a forced-commutated VSI connected to a DC capacitor [10]. Considering that the distortion of the voltage in the public power network is usually very low, it can be assumed that the supply voltage is ideally sinusoidal and three-phase balanced, as shown below: 

V_sa = V_s sin(ωt) 
V_sb = V_s sin(ωt - 2π/3) 
V_sc = V_s sin(ωt + 2π/3)        (2.4) 

where V_s is the supply voltage amplitude. 



Figure 2.2. Configuration of an APF using SVPWM (three-phase supply, three-phase load, and the VSI with the SVPWM controller implemented for harmonic filtering)

It is known that the three-phase voltages [v_sa v_sb v_sc] in the a-b-c frame can be expressed as a two-phase representation in the d-q frame by Clarke's transformation, which is given by 

[V_d]           [ 1     -1/2      -1/2  ]   [v_sa] 
[V_q] = (2/3) * [ 0     √3/2     -√3/2  ] * [v_sb]        (2.5) 
                                            [v_sc] 

It is possible to write equation (2.5) more compactly as 

V̄_s = (2/3) (v_sa a^0 + v_sb a^1 + v_sc a^2) = V_sd + j V_sq = V_s ∠θ_s        (2.6) 

where a = e^(j2π/3), so a balanced three-phase set of voltages is represented in the stationary reference frame by a space vector of constant magnitude, equal to the amplitude of the voltages, and rotating with angular speed ω = 2πf. 
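
For illustration, the small C program below (our own sketch, using C99 complex arithmetic rather than anything from the paper) confirms the constant-magnitude, rotating behaviour stated in equation (2.6) for an assumed balanced 50 Hz, 1 p.u. set.

/* Space vector of a balanced set: constant magnitude, angle advancing with wt. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

int main(void)
{
    const double PI = 3.14159265358979;
    const double complex a = cexp(I * 2.0 * PI / 3.0);   /* rotation operator  */
    double f = 50.0, Vs = 1.0;                            /* 50 Hz, 1 p.u.      */

    for (int k = 0; k < 4; k++) {
        double wt = 2.0 * PI * f * (k * 2.5e-3);          /* sample every 2.5 ms */
        double va = Vs * sin(wt);
        double vb = Vs * sin(wt - 2.0 * PI / 3.0);
        double vc = Vs * sin(wt + 2.0 * PI / 3.0);
        double complex vs = (2.0 / 3.0) * (va + a * vb + a * a * vc);  /* Eq. (2.6) */
        printf("wt = %5.3f rad: |Vs| = %.3f, angle = %6.3f rad\n",
               wt, cabs(vs), carg(vs));
    }
    return 0;   /* the magnitude stays at 1.000 while the angle advances with wt */
}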

As shown in Figure 2.2, the shunt APF takes a three-phase voltage source inverter as the main circuit and uses a capacitor as the energy storage element on the DC side to maintain the DC bus voltage V_dc constant. Figure 2.3 shows the per-phase (phase A) equivalent circuit of the system. 

2.4. Compensation Principle 

In Figure 2.3, v_fa1 and v_fah denote the output fundamental and harmonic voltages of the inverter, respectively. These voltage sources are connected to the supply source (v_sa) in parallel via a link inductor L_f and capacitor C_f. The supply current i_sa is forced to be free of harmonics by appropriate voltages from the APF, and the harmonic current emitted from the load is then automatically compensated. 

It is known from Figure 2.3 that, when only the fundamental component is taken into account, the voltages of the AC supply and the APF satisfy the following relationship in the steady state: 

V̄_s = L_f (dĪ_f1 / dt) + V̄_f1        (2.7) 




where V̄_s is the supply voltage, Ī_f1 is the fundamental current of the APF, V̄_f1 is the fundamental voltage of the APF, and the above variables are expressed in the form of space vectors. 





Figure 2.3. Equivalent circuit of a simple power system (phase A) together with the APF

The APF is connected into the network through the inductor L_f and capacitor C_f. Their function is to filter the higher harmonics near the switching frequency in the current and to link the two AC voltage sources of the inverter and the network, so the required inductance and capacitance can take small values. The total reactance caused by the inductor and capacitor at the fundamental frequency of 50 Hz is therefore small, and the fundamental voltages across the link inductors and capacitors are also very small, especially compared with the mains voltages. Thus the effect of the voltage across the link inductor and capacitor is neglected, and the following simplified voltage balance equation can be obtained from equation (2.7): 

V̄_s = V̄_f1        (2.8) 

The control objective of the APF is to make the supply current sinusoidal and in phase with the supply voltage. Thus the nonlinear load and the active power filter together are equivalent to a pure resistive load R_s, and the supply voltage and the supply current satisfy the following equation: 

V̄_s = R_s Ī_s        (2.9) 

where Ī_s = (2/3)(i_sa a^0 + i_sb a^1 + i_sc a^2) = I_sd + j I_sq = I_s ∠θ_s. 

Then the relationship between I_s and the supply voltage amplitude V_s is 

V_s = R_s I_s        (2.10) 

Substituting (2.9) and (2.10) into (2.8) results in 

V̄_f1 = (V_s / I_s) Ī_s        (2.11) 

Equation (2.11) describes the relationship between the output fundamental voltage of the APF, the supply voltage and the supply current, which ensures that the APF operates normally [11-12]. However, for the APF to achieve the required effect, the DC bus voltage V_dc has to be high enough and stable. In the steady state, the power supplied from the supply must be equal to the real power demanded by the load, and no real power passes through the power converter for a lossless APF system. Hence, the average voltage of the DC capacitor can be maintained at a constant value. If a power imbalance, such as the transient caused by a load change, occurs, the DC capacitor must supply the power difference between the supply and the load, and the average voltage of the DC capacitor is reduced. At this moment, the magnitude of the supply current must be enlarged to increase the real power delivered by the supply. On the contrary, if the average voltage of the DC capacitor rises, the supply current must be decreased. Therefore, the average voltage of the DC capacitor reflects the real power flow information. In order to maintain the DC bus voltage constant, the detected DC bus voltage is compared with a setting voltage. The comparison result is fed to a PI controller, and amplitude control of the supply current I_s is obtained from the output of the PI controller. 
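
A minimal sketch of that DC-bus regulation loop is given below, assuming a simple discrete PI update; the gains, sampling period and the 700 V reference are illustrative placeholders of ours, not values taken from the paper.

/* DC-bus voltage PI loop: the error between Vdc_ref and the measured Vdc
   sets the commanded supply-current amplitude Is. */
#include <stdio.h>

typedef struct { double kp, ki, ts, integral; } pi_ctrl_t;

/* one PI update: returns the commanded supply-current amplitude Is */
double pi_update(pi_ctrl_t *pi, double vdc_ref, double vdc_meas)
{
    double err = vdc_ref - vdc_meas;          /* DC-bus voltage error       */
    pi->integral += pi->ki * pi->ts * err;    /* integral action            */
    return pi->kp * err + pi->integral;       /* Is command                 */
}

int main(void)
{
    pi_ctrl_t pi = { .kp = 0.5, .ki = 20.0, .ts = 1e-4, .integral = 0.0 };
    /* example: DC bus sagging below its 700 V reference after a load step  */
    double vdc_meas[] = { 700.0, 695.0, 690.0, 692.0, 696.0, 699.0 };
    for (int k = 0; k < 6; k++)
        printf("step %d: Is = %.3f A\n", k, pi_update(&pi, 700.0, vdc_meas[k]));
    return 0;
}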
Figure 2.4 shows the block diagram of the active filter controller implemented for reducing the harmonics with the hybrid active filter system. In each switching cycle, the controller samples the supply currents i_sa and i_sb, and the supply current i_sc is calculated as -(i_sa + i_sb), since the summation of the three supply currents is equal to zero. These three-phase supply currents are measured and transformed into the synchronous reference frame (d-q axis) [13-14]. The fundamental component of 




the supply current is transformed into DC quantities in the (d-q) axis, and the supply current amplitude I_s is generated by the PI controller from V_dc and V_ref, the reference value of the DC bus voltage. The obtained d-q axis components generate the voltage command signal. Using a Fourier magnitude block, the voltage magnitude and angle are calculated from the obtained signal. These values are fed to the developed code and compared with the repeating sequence. Then the time durations T_1, T_2 and T_0, the on-times of V_1, V_2 and V_0, are calculated. The 